Commits

Anonymous committed effee9c

Committing the first unfinished version of TracGViz 1.3.3. The next
steps are to test the Trac VCS RPC implementation, then the data
sources that depend upon its completion, and finally some request
handlers related to gadget borders.

Comments (0)

Files changed (20)

trac-dev/gviz/CHANGES

+
+What's new in version 1.3.3
+---------------------------
+
+- New ! Since version 1.3.3 macro aliases are supported so that 
+  embedding iGoogle gadgets in wiki pages is even easier. Aliases are 
+  macro names configured using the `gadgets.aliases` option which 
+  act as shortcuts for using iGoogleGadget with a fixed gadget URL. 
+  For example, if the aforementioned option is set to 
+  `MotionChart=http://www.google.com/ig/modules/motionchart.xml`
+  then the following snippets are equivalent (so users won't need to
+  remember gadget URLs anymore, just names they are familiar with ;). 
+
+    [[iGoogleGadget(url=http://www.google.com/ig/modules/motionchart.xml, 
+    _table_query_url=http://spreadsheets.google.com/tq?range=B3%3AG17&key=1234, 
+    _table_query_refresh_interval=5)]]
+
+    [[iGoogleGadget(url=gadget:google:modules:motionchart, 
+    _table_query_url=[gviz:google:sheet:1234:B3-G17], 
+    _table_query_refresh_interval=5)]]
+
+    [[MotionChart(_table_query_url=[gviz:google:sheet:1234:B3-G17], 
+    _table_query_refresh_interval=5)]]
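+
+  For reference, a minimal `trac.ini` excerpt declaring the alias 
+  used above could look like this (illustrative only; adapt names 
+  and URLs to your own environment):
+
+    [gadgets]
+    aliases = MotionChart=http://www.google.com/ig/modules/motionchart.xml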
+
+- Aliases for Google's visualisation gadgets are offered by default.
+
+- TracLinks expressions are supported in config options used to 
+  define iGoogleGadget macro aliases.
+
+- New ! GVizTicketList now lists the tickets created in a Trac 
+  environment together with their attributes. Custom fields, if 
+  available, are also included as separate columns. As a result, the 
+  columns in the returned data tables, as well as their order, can 
+  change from time to time, depending on the plugins enabled in the 
+  particular environment and other factors.
+
+- New ! ReportRPC now provides an interface to Trac's report module
+  via the XML-RPC protocol. So far report execution is only supported 
+  for reports defined using SQL or TracQuery syntax. This means that 
+  saved custom queries specified using URLs are not supported yet, 
+  but hopefully will be in the near future. In the meantime a 
+  `NotImplementedError` exception is raised.
+
+- New ! GVizAvailableReports now lists available reports and their 
+  data, including report ID, title, description and the query string 
+  used to select the tickets to be included in each report.
+
+- Important ! Architectural change. From now on Request objects 
+  can be supplied to the IGVizDataProvider.get_data_schema method. 
+  This change makes it possible to implement more dynamic data 
+  providers, since schemas can vary according to the run-time values 
+  provided for some parameters.
+
+- New ! GVizReportProvider now offers the data included in custom Trac 
+  Reports defined by Trac users. This is a very dynamic data 
+  provider, and also the first example where the schema can change 
+  depending upon the report specification.
+
+- Added api.IHashLibrary interface so that plugins and extensions can 
+  contribute other secure hash, message digest and checksum 
+  algorithms used to compute the `sig` field in GViz API responses.
+
+- All the code needed to support the secure hash algorithms provided 
+  by the `hashlib` module now lives in class `stdhash.HashLib`.
+  It supports the following methods: `sha1`, `sha224`, `sha256`, 
+  `sha384`, `sha512`, and `md5`. Additional algorithms may also be 
+  available depending upon the OpenSSL library that Python uses on 
+  your platform.
+
+- All the code needed to support the checksum algorithms provided by 
+  the `zlib` module now lives in class `stdhash.ZLibChecksum`.
+  It supports the following methods: `adler32` and `crc32`.
+
+- Minor adjustments to adapt support for `sig` parameter to the 
+  recent introduction of `IHashLibrary` interface.
+
+- Bug fixed: Formerly the `sig` value was included in the GViz API 
+  response only if both the hash method was set up and the `sig` 
+  parameter was set in the request. From now on only the first 
+  condition has to be met in order to send the hash value back to 
+  the client.
+
+- Bug fixed: Minor bug while displaying instances of `datetime` 
+  using plain text encoders.
+
+- Support in `DataTable.SingleValueToJS` for automatic 
+  conversion of instances of `xmlrpclib.DateTime` and `int` values 
+  (POSIX timestamps) into instances of `datetime`. This makes it 
+  possible to supply such values in columns having types `date`, 
+  `datetime`, and `timeofday` (not possible with gviz_api 1.0).
+
+- Support in `DataTable.SingleValueToJS` for automatic 
+  conversion of unicode strings using utf-8 encoding. This makes it 
+  possible to supply such strings in columns having `string` type 
+  (not possible with gviz_api 1.0).
+
+- Module `rpc` has been added to group all RPC handlers offered 
+  by the `TracGViz` package.
+
+- Documentation added to highlight the use of the 
+  timeline.ticket_show_details option in trac.ini in order to 
+  retrieve all events related to ticket changes (e.g. attachments).
+
+- Bug fixed: TimelineRPC no longer fails when filter definitions
+  provided by instances of the ITimelineEventProvider interface 
+  contain a third (i.e. `checked`) element. It also handles unicode 
+  characters appropriately.
+
+- Few optimizations ... bah!
 
 What's new in version 1.3.2
------------------------------
+---------------------------
 
 - TracGVizSystem now implements IPermissionRequestor interface. This 
   means that Trac admins can use `GVIZ_VIEW` permission to control 
   `gviz:google:sheet:<spreadsheet_id>[:[<sheet_name>][:<top_cell>-<bottom_cell>]][?[headers=<number>]]`
 
 - Formerly values in columns having types `date`, `datetime` or 
-  `timeofday` were shown in responses not in JSON format (e.g. HTML) 
+  `timeofday` were shown in non-JSON responses (e.g. HTML) 
   in a way similar to `new Date(2009, 12, 2)`. Now they are rendered 
   using the following formats `%Y-%m-%d`, `%Y-%m-%d %H:%M:%S`, 
   `%H:%M:%S` respectively.
   `WikiFormatting`, `WikiPageNames`, `WikiDeletePage`).
 
 What's new in version 1.3.1
------------------------------
+---------------------------
 
 - Assertions concerning permissions are supported, but not directly.
   They are delegated to the underlying layers on top of which the data 
 What's new in version 1.1.0
 ---------------------------
 
-- Support has been added to implement Google Visualization API data 
+- Support has been added to implement Google Visualisation API data 
   sources by reusing existing Trac XML RPC components.
 
 - Some (but not all) data sources for Trac ticket system are provided.
 ---------------------------
 
 - An architecture is available so as to provide a project's data in
-  the format specified in Google Visualization API protocol 
+  the format specified in Google Visualisation API protocol 
   specification (version 0.5) api.TracGVizSystem.
 
 - Multiple protocol handlers (e.g. for different versions, and 
   protocol evolution) are allowed by implementing the interface
   api.IGVizProtocolHandler. There is native support for version 0.5 of
-  Google Visualization API protocol through GViz_0_5.
+  Google Visualisation API protocol through GViz_0_5.
 
 - It is possible to register new data sources by implementing the 
   interface api.IGVizDataProvider
   (class stdfmt.GVizJsonEncoder), HTML (class stdfmt.GVizHtmlEncoder),
   CSV (class stdfmt.GVizCSVEncoder).
 
-- The exception conditions mentioned in Google Visualization API
+- The exception conditions mentioned in Google Visualisation API
   protocol specification (version 0.5) have been identified.

trac-dev/gviz/README

 
-= Trac integration with Google Visualization API =
+= Trac integration with Google Visualisation API =
 
-This plugin has been developped in order to expose the data managed by [http://trac.edgewall.org/ Trac] to widgets implemented using [http://code.google.com/apis/visualization Google Visualization API]. The following data is provided so far in the form of [http://code.google.com/apis/visualization data tables]:
+This plugin has been developed in order to expose the data managed by [http://trac.edgewall.org/ Trac] to widgets implemented using [http://code.google.com/apis/visualization Google Visualisation API]. The following data is provided so far in the form of [http://code.google.com/apis/visualization data tables]:
 
   - Project components.
   - Project milestones.
   
   - The [http://code.google.com/apis/visualization/documentation/dev/gviz_api_lib.html Data Source Python Library]
     must be installed so that the outputs be consistent with
-    [http://code.google.com/apis/visualization/ Google Visualization API] [http://code.google.com/apis/visualization/documentation/dev/implementing_data_source.html data source protocol specification], and widgets can actually render the data managed by [http://trac.edgewall.org/ Trac] environments.
+    [http://code.google.com/apis/visualization/ Google Visualisation API] [http://code.google.com/apis/visualization/documentation/dev/implementing_data_source.html data source protocol specification], and widgets can actually render the data managed by [http://trac.edgewall.org/ Trac] environments.
 
 == Installation ==
 
     - '''Volunteer''' a little, [mailto://flioops.project@gmail.com?subject=TracGVizPlugin%20Patch submit patches], it will be great if many others contribute as well ... don't hesitate [mailto://flioops.project@gmail.com?subject=TracGVizPlugin%20add_member be part of the team].
     - Donate to this project (link available hopefully soon), so as to be able to leave everything else behind and fulfill your particular request. '''Time''' is more than '''gold'''.
   - We try to do everything so as to not fail if unknown properties are sent as part of the request. However, if you discover any bug, '''please''' [query:status=new|assigned|reopened|closed&component=plugin_trac_gviz find out related tickets]. If none is found then [/newticket?component=plugin_trac_gviz create a new ticket]. 
-  - Conversely, built-in protocol handlers parse only the properties that they expect. If new versions introduce new properties, then the system uses the handler for the latest available version, and tries to send a response back to the client. [code.google.com/apis/visualization/documentation/querylanguage.html Google Visualization API query language] [ticket:xxx is not supported yet], but hopefully soon.
+  - Conversely, built-in protocol handlers parse only the properties that they expect. If new versions introduce new properties, then the system uses the handler for the latest available version, and tries to send a response back to the client. [code.google.com/apis/visualization/documentation/querylanguage.html Google Visualisation API query language] [ticket:xxx is not supported yet], but hopefully soon.
   - Some data sources '''may''' accept custom parameters for custom visualizations. However, they '''always''' return responses in the standard format.
   - Since third-party sites might be willing to use the data managed by your [http://trac.edgewall.org/ Trac] instance, it is very important to [ticket:xxx document the data source requirements] carefully. Due to the infinite possibilities offered by the plugin and [http://trac.edgewall.org/ Trac] architectures, and also since plugin developpers might want to implement their own data sources, the only feasible way to manage all this complexity is to automate documentation tasks. This feature is not ready, but will include (... in a near future ...) the following: 
     - Any custom parameters that you accept,
-    - Whether or not [code.google.com/apis/visualization/documentation/querylanguage.html Google Visualization API query language] is supported, and ...
+    - Whether or not [code.google.com/apis/visualization/documentation/querylanguage.html Google Visualisation API query language] is supported, and ...
     - What kind of data is returned, and the structure of that data (what the rows and columns represent, and any labeling).
   - Integration with the [TracPermissions permissions system] has not being finished ... [ticket:xxx yet]. Therefore no standard security precautions have been taken, and the site may accept requests from unknown clients. Take this into consideration if you store any sensitive data.
   - All request and response strings '''should''' be UTF-8 encoded. Else, [/newticket?component=plugin_trac_gvi let us know].

trac-dev/gviz/TODO

 Outstanding tasks
 -----------------
 
-. Add an RPC handler to provide data about source code in VCS.
++ Add a page to view the images in a gadget border.
 
-. Add some GViz providers to offer the data managed by VCS.
++ Add a page to list available borders.
+
++ Provide default border images for gadgets.
+
++ Remove unnecessary imports.
+
++ Add an RPC handler to provide information about source code in VCS.
+
++ Add GViz provider to offer the data about all files located 
+  in a given folder managed by VCS (PARAMS: path, recursive, 
+  revision, depth, filter COLS: fnm, sz, lastRev, modified, 
+  ext, mime-type, log).
+
++ Add GViz provider to offer historical data about files in 
+  locations managed by VCS (PARAMS: path, revStart, revEnd, 
+  dateStart, dateEnd, COLS: fnm, sz, rev, date, ext, mime-type, log).
+
++ Add GViz provider to offer changes in a changeset.
+
++ Test GViz providers for VCS (`svn` + `hg` + `git` ? ).
+
++ Include all the names of Google's Visualisation Gadgets in 
+  GadgetAliases.GOOGLE_GADGETS mapping.
+
++ Fix MIME types returned by MoinMoin and reStructuredText encoders.
+
+- Add `limit` parameter for methods in VersionSourceRPC.
 
 - Implement data source provider for Ticket->Query
 
-+ Retrieve events related to ticket attachments for Timeline->Events
-
-+ Implement data source provider for Ticket->Info
+- Implement `create`, `update`, and `delete` methods in ReportRPC
 
 - Add an option in GViz milestones in order to only show data about 
   active milestones.
 
-- Cache the requests and optimize the call/response life cycle by 
-  using the `reqId` parameter.
-
-+ Remove unnecessary imports.
+- Cache the requests and optimize the request/response cycle by using 
+  the `reqId` parameter.
 
 - Implement various gadget border providers.
 
 
 - Implement context navigation contributors to Trac Gadgets area.
 
-+ Add a page to view the images in a gadget border.
-
-+ Add a page to list available borders.
-
 - Add support for CSS-based and external (HTTP) gadget borders.
 
 - Separate GVizSystem and GVizModule ?
 
-. Provide default border images for gadgets.
-
-+ Provide an interface so that plugins and extensions can 
-  contribute with other checksum algorithms.
-
-+ Add checksum algorithms in `zlib` module: adler32, crc32
-
 - Write tests for the different features included in the plugin.
 
 - Add labels to the data sources. bah!
 
 - Include formatted values for dates in JSON result.
 
-+ Fix the issue with selecting changed tickets since timestamp.
+- Fix the issue with selecting changed tickets since timestamp.
 
 - Ensure that columns are always arranged in the same order (i.e. use 
   ordered dicts)
 - GViz API QL: Add tasks to support `from`, `where`, `group by`, 
   `pivot`, `order by`, `format` and `options` clauses.
 
-+ Add support for retrieving table data in reStructuredText (rst).
+- Add support for retrieving table data in reStructuredText (rst) 
+  format.
 
 - Allow data source providers to control the details needed to handle 
   queries.
 
-+ Fix MIME types returned by MoinMoin and reStructuredText encoders.
+- Put exception types in a separate module.
+
+- Allow providing documentation to describe dynamic columns (i.e. 
+  those columns that appear in data tables only when some run-time 
+  conditions are met e.g. ticket custom fields in 
+  `ticket.GVizTicketReport`).
+
+- Use Trac i18n for strings displayed to clients.
+
+- Execute reports defined using URLs to saved custom queries.
+

trac-dev/gviz/__init__.py

 #   See the License for the specific language governing permissions and
 #   limitations under the License.
 
-"""Trac Data Sources for Google Visualization API. Embed iGoogle gadgets using WikiFormatting.
+r"""Trac Data Sources for Google Visualisation API. Embed iGoogle gadgets using WikiFormatting.
 
 This package (plugin) provides components so that Trac be able to
 use widgets and related technologies provided by Google.
 
 try:
     from api import TracGVizSystem
+    from rpc import *
+    from stdhash import *
     from proto import GViz_0_5
     from stdfmt import GVizJsonEncoder, GVizHtmlEncoder, GVizCSVEncoder
     from extfmt import *
     from wiki import *
     from search import *
     from timeline import *
+    from vcs import *
     from ig import *
     msg = 'Ok'
 except Exception, exc:

trac-dev/gviz/_gviz_api.py

 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-"""Converts Python data into data for Google Visualization API clients.
+r"""Converts Python data into data for Google Visualization API clients.
 
 This library can be used to create a google.visualization.DataTable usable by
 visualizations built on the Google Visualization API. Output formats are raw
 
 
 class DataTableException(Exception):
-  """The general exception object thrown by DataTable."""
+  r"""The general exception object thrown by DataTable."""
   pass
 
 
 class DataTable(object):
 
-  """Wraps the data to convert to a Google Visualization API DataTable.
+  r"""Wraps the data to convert to a Google Visualization API DataTable.
 
   Create this object, populate it with data, then call one of the ToJS...
   methods to return a string representation of the data in the format described.
   """
 
   def __init__(self, table_description, data=None):
-    """Initialize the data table from a table schema and (optionally) data.
+    r"""Initialize the data table from a table schema and (optionally) data.
 
     See the class documentation for more information on table schema and data
     values.
 
   @staticmethod
   def SingleValueToJS(value, value_type):
-    """Translates a single value and type into a JS value.
+    r"""Translates a single value and type into a JS value.
 
     Internal helper method.
 
 
   @staticmethod
   def ColumnTypeParser(description):
-    """Parses a single column description. Internal helper method.
+    r"""Parses a single column description. Internal helper method.
 
     Args:
       description: a column description in the possible formats:
 
   @staticmethod
   def TableDescriptionParser(table_description, depth=0):
-    """Parses the table_description object for internal use.
+    r"""Parses the table_description object for internal use.
 
     Parses the user-submitted table description into an internal format used
     by the Python DataTable class. Returns the flat list of parsed columns.
 
   @property
   def columns(self):
-    """Returns the parsed table description."""
+    r"""Returns the parsed table description."""
     return self.__columns
 
   def NumberOfRows(self):
-    """Returns the number of rows in the current data stored in the table."""
+    r"""Returns the number of rows in the current data stored in the table."""
     return len(self.__data)
 
   def LoadData(self, data):
-    """Loads new data to the data table, clearing existing data."""
+    r"""Loads new data to the data table, clearing existing data."""
     self.__data = []
     self.AppendData(data)
 
   def AppendData(self, data):
-    """Appends new data to the table.
+    r"""Appends new data to the table.
 
     Data is appended in rows. Data must comply with
     the table schema passed in to __init__(). See SingleValueToJS() for a list
       self._InnerAppendData({}, data, 0)
 
   def _InnerAppendData(self, prev_col_values, data, col_index):
-    """Inner function to assist LoadData."""
+    r"""Inner function to assist LoadData."""
     # We first check that col_index has not exceeded the columns size
     if col_index >= len(self.__columns):
       raise DataTableException("The data does not match description, too deep")
         self._InnerAppendData(col_values, data[key], col_index + 1)
 
   def _PreparedData(self, sort_keys=()):
-    """Prepares the data for enumeration - sorting it by sort_keys.
+    r"""Prepares the data for enumeration - sorting it by sort_keys.
 
     Args:
       sort_keys: list of keys to sort by. Receives a single key to sort by, or
                                  "'asc' or 'desc'")
 
     def SortCmpFunc(row1, row2):
-      """cmp function for sorted. Compares by keys and 'asc'/'desc' keywords."""
+      r"""cmp function for sorted. Compares by keys and 'asc'/'desc' keywords."""
       for key, asc_mult in proper_sort_keys:
         cmp_result = asc_mult * cmp(row1.get(key), row2.get(key))
         if cmp_result:
     return sorted(self.__data, cmp=SortCmpFunc)
 
   def ToJSCode(self, name, columns_order=None, order_by=()):
-    """Writes the data table as a JS code string.
+    r"""Writes the data table as a JS code string.
 
     This method writes a string of JS code that can be run to
     generate a DataTable with the specified data. Typically used for debugging
     return jscode
 
   def ToJSon(self, columns_order=None, order_by=()):
-    """Writes a JSON strong that can be used in a JS DataTable constructor.
+    r"""Writes a JSON strong that can be used in a JS DataTable constructor.
 
     This method writes a JSON string that can be passed directly into a Google
     Visualization API DataTable constructor. Use this output if you are
     return json
 
   def ToJSonResponse(self, columns_order=None, order_by=(), req_id=0):
-    """Writes a table as a JSON response that can be returned as-is to a client.
+    r"""Writes a table as a JSON response that can be returned as-is to a client.
 
     This method writes a JSON response to return to a client in response to a
     Google Visualization API query. This string can be processed by the calling

trac-dev/gviz/api.py

 #   See the License for the specific language governing permissions and
 #   limitations under the License.
 
-r"""Trac Data Source able to feed widgets implemented with Google Visualization API.
+r"""Trac Data Source able to feed widgets implemented with Google Visualisation API.
 
 Components allowing Trac to export a project (environment)'s data so
 that different widgets based on Google Visualization (GViz) API
 from trac.env import IEnvironmentSetupParticipant
 from trac.perm import IPermissionRequestor, PermissionError
 from trac.util import get_pkginfo
+from trac.util.datefmt import utc
 from trac.web.api import IRequestHandler, RequestDone
 from pkg_resources import resource_string, resource_filename
 from trac.web.chrome import ITemplateProvider
     """
 
 class GVizBadRequest(GVizException):
-    r"""Exception raised to denote that an unsopported feature has been
-    requested by the client.
+    r"""Exception raised to denote that the client request contains 
+    wrong fields and/or values, or that required data is missing.
     """
 
 class GVizUnknownProvider(GVizException):
     r"""Exception raised to denote that there is no provider able to
-    to handle a given request.
+    handle the client request.
     """
 
 class GVizNotAuthenticatedError(GVizException):
     processing an anonymous request.
     """
 
+class GVizDataNotModifiedError(GVizException):
+    r"""Exception raised when the hash generated from the data
+    returned by the GViz provider matches the hash sent by the 
+    client.
+    """
+    message = ''
+
+class GVizInvalidConfig(GVizException):
+    r"""Exception raised when an invalid, incorrect or unsupported 
+    value has been assigned to a configuration option.
+    """
+
 def gviz_col(col_id, col_doc):
     r"""This function can be used to document the meaning of the 
     different columns in the table returned by GViz data sources as 
         Note : regex in path handles are not supported yet.
         """
     
-    def get_data_schema():
+    def get_data_schema(req=None):
         r"""Provide the schema used to populate GViz data tables out 
         of the Python object containing the table contents.
         
-        Return an iterator of (permission, schema). See the
-        documentation for `gviz_api.DataTable` class for more details
-        about the `scheme` field.
+        @req    (since version 1.3.3) an optional argument that can 
+                be used by data providers offering data in different 
+                ways depending upon the values provided at run-time 
+                for some parameters. If this function accepts more 
+                than one argument then the extra argument *MUST* be 
+                the request being processed.
+        @return schema definition used to prepare the data table. See 
+                the documentation for `gviz_api.DataTable` class for 
+                more details about schemas.
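+        
+        Example (illustrative sketch only): a provider whose schema 
+        varies with the request might implement this method as 
+        follows:
+        
+            def get_data_schema(self, req=None):
+                schema = [('name', 'string'), ('value', 'number')]
+                # hypothetical 'verbose' request parameter
+                if req is not None and req.args.get('verbose'):
+                    schema.append(('details', 'string'))
+                return schema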
         """
     
     def get_data(req, tq, **tqx):
         if value_type in ["date", "datetime", "timeofday"] and \
                 isinstance(value, DateTime):
             value = datetime.strptime(value.value, '%Y%m%dT%H:%M:%S')
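+        # New in 1.3.3: plain `int` values in date-like columns are 
+        # treated as POSIX timestamps and converted to UTC `datetime` 
+        # instances before delegating to gviz_api.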
+        if value_type in ["date", "datetime", "timeofday"] and \
+                isinstance(value, int):
+            value = datetime.fromtimestamp(int(value or 0), utc)
         elif value_type == "string" and isinstance(value, unicode):
             value = value.encode('utf-8', 'replace')
         return gviz_api.DataTable.SingleValueToJS(value, value_type)
         
         - If the request is addressed to a specific data source then
           send back the corresponding data table's contents in one of
-          the formats supported by Google Visualization API (defaults
+          the formats supported by Google Visualisation API (defaults
           to JSON Response Format).
         
         - If the request is addressed directly to the `root path` (i.e. 
                 table = None
                 try:
                     data = provider.get_data(req, tq, **params)
-                    table = DataTable(provider.get_data_schema(), data)
+                    sch = provider.get_data_schema.im_func.func_code
+                    sch_args = (sch.co_argcount > 1) and (req,) or ()
+                    table = DataTable(provider.get_data_schema(*sch_args), \
+                                        data)
                     handler.output_response(req, table, None, \
                                             None, **std_params)
                 except RequestDone:
                     raise
                 except Exception, exc:
+                    self.log.exception("IG: Handling exception")
                     handler.output_response(req, table, 
                             sys.exc_info()[1:], exc, **std_params)
                     raise RequestDone()
         encoder.
         """
 
-class GVizDataNotModifiedError(Exception):
-    r"""Exception raised when the hash generated from the data
-    returned by the GViz provider matches the hash sent by the 
-    client.
-    """
-    message = ''
-
 class ITracLinksHandler(Interface):
     r"""Interface implemented by those classes being
     TracLinks providers registered under the namespaces managed by
         The `label` is already HTML escaped, whereas the `target` is not.
         """
 
+class IHashLibrary(Interface):
+  r"""Interface implemented by all those components defining one or 
+  more secure hash, message digest, or checksum algorithms to be used 
+  by other components.
+  """
+  def get_hash_properties(method_name):
+    r"""Determine whether this component supports a secure hash 
+    algorithm or not.
+    
+    @param method_name  is the name identifying the hash method
+                        (e.g. sha1, md5, ...) in lowercase letters
+    @return             `None` if the component does not support the 
+                        requested algorithm. Otherwise, a tuple of 
+                        the form (priority, source) where
+                        
+                        priority: is a number indicating how 
+                                  relevant this implementation is. 
+                                  If multiple components support the 
+                                  same method then the one specifying 
+                                  the higher `priority` value will 
+                                  be chosen.
+                        source:   is a numeric constant identifying 
+                                  the library providing the 
+                                  implementation for this method. The 
+                                  following values have been defined 
+                                  so far:
+                                  
+                                  0   - implemented by the OpenSSL 
+                                        library that Python uses on 
+                                        your platform. 
+                                  100 - implemented in Python stdlib
+                                  200 - implemented in a Python module 
+                                        (or C extension, ...) but not 
+                                        in stdlib
+    """
+  def new_hash_obj(method_name, init_data):
+    r"""Create a hash object.
+    
+    @param method_name  is the name identifying the hash method
+                        (e.g. SHA1, MD5, ...)
+    @param init_data    data processed since the beginning
+    @return             a hash object implementing the aforementioned 
+                        algorithm by means of the same 
+                        simple interface defined in `hashlib` module.
+    """
 
 # TODO : Implement the different data source providers
 

trac-dev/gviz/ig/__init__.py

-
-# Copyright 2009-2011 Olemis Lang <olemis at gmail.com>
-#
-#   Licensed under the Apache License, Version 2.0 (the "License");
-#   you may not use this file except in compliance with the License.
-#   You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#   Unless required by applicable law or agreed to in writing, software
-#   distributed under the License is distributed on an "AS IS" BASIS,
-#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#   See the License for the specific language governing permissions and
-#   limitations under the License.
-
-"""This package contains many components and tools useful to 
-integrate features of iGoogle gadgets with Trac.
-
+
+# Copyright 2009-2011 Olemis Lang <olemis at gmail.com>
+#
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+
+r"""This package contains many components and tools useful to 
+integrate features of iGoogle gadgets with Trac.
+
 Copyright 2009-2011 Olemis Lang <olemis at gmail.com>
 Licensed under the Apache License, Version 2.0 
-"""
-
-from api import *
-from db import *
-from web_ui import *
-from wiki import *
+"""
+
+from api import *
+from db import *
+from web_ui import *
+from wiki import *

trac-dev/gviz/ig/db.py

 #   See the License for the specific language governing permissions and
 #   limitations under the License.
 
-"""This module contains some Trac components so as to enhance Trac 
+r"""This module contains some Trac components so as to enhance Trac 
 users experience, and to ease some common tasks related to gadgets 
 and embedding them in wiki pages.
 
 from api import IGadgetBorderImgStore, BORDER_IMAGES
 
 class DefaultBordersRepository(Component):
-    """A repository that stores gadget border images under the 
+    r"""A repository that stores gadget border images under the 
     environment's `htdocs/gadgets/borders` folder.
     
     Images are stored in a directory inside the environment's 

trac-dev/gviz/ig/web_ui.py

 #   See the License for the specific language governing permissions and
 #   limitations under the License.
 
-"""This module contains some Trac components so as to enhance Trac 
+r"""This module contains some Trac components so as to enhance Trac 
 users experience, and to ease some common tasks related to gadgets 
 and embedding them in wiki pages.
 
 
 from trac.core import Interface, Component, ExtensionPoint, \
         implements, TracError
-from trac.config import Option
+from trac.config import Option, ExtensionOption
 from trac.util import get_pkginfo
 from trac.web.api import IRequestHandler, RequestDone
 from pkg_resources import resource_string, resource_filename
     from util import send_response, send_std_error_response
 
 class GadgetsBorderModule(Component):
-    """Upload pictures so that Trac be able to host them 
+    r"""Upload pictures so that Trac be able to host them 
     for later use in image-based gadget borders. Besides ensure 
     that the images are suitable for this particular purpose (e.g. 
     that dimensions are according to the specifications).
     implements(# IPermissionRequestor, \
             IRequestHandler, ITemplateProvider, IGadgetBorderProvider, \
             ITracLinksHandler)
-    stores = ExtensionPoint(IGadgetBorderImgStore)
     
-    store_name = Option('gadgets', 'border_img_store', \
+    store = ExtensionOption('gadgets', 'border_img_store', \
+                        IGadgetBorderImgStore,
                         default='DefaultBordersRepository', \
-                        doc='The name of the component used to store'
+                        doc='The name of the component used to store '
                             'gadget border images.')
     
     def __init__(self):
-        self.store = None
         self.log.debug('IG: Setting up gadget border store')
-        for store in self.stores:
-            if store.__class__.__name__ == self.store_name:
-                self.store = store
-                self.log.debug('IG: Gadget border store %s', self.store)
-                break
-        if not self.store:
-            raise TracError('Missing border store')
+        try:
+          _ = self.store
+        except AttributeError:
+          self.log.error('IG: Unable to load gadget border store')
+          raise TracError('Unable to load gadget border store')
+        else:
+          self.log.info('IG: Gadget border store %s', self.store)
     
     # IPermissionRequestor methods
     

trac-dev/gviz/ig/wiki.py

 with other component elements.
 """
 
-__all__ = 'iGoogleGadgetMacro',
+__all__ = 'iGoogleGadgetMacro', 'GoogleVizGadgets', 'GadgetAliases'
 
 from trac.core import Interface, Component, ExtensionPoint, \
         implements, TracError
-from trac.config import Option
+from trac.config import Option, ListOption
+from trac.env import Environment
 from trac.mimeview.api import Context
 from trac.util import get_pkginfo
 from trac.web.api import IRequestHandler, RequestDone
 from trac.perm import IPermissionRequestor
 from trac.web.api import IRequestHandler
 from trac.web.chrome import ITemplateProvider, add_stylesheet
-from trac.wiki.api import IWikiSyntaxProvider, parse_args
+from trac.wiki.api import IWikiSyntaxProvider, parse_args, \
+                          IWikiMacroProvider
 from trac.wiki.macros import WikiMacroBase
 from trac.wiki.formatter import Formatter
 
 from os.path import join, exists
 from os import makedirs
 from itertools import chain
-from urlparse import urlunparse
+from string import strip
+from urlparse import urlunparse, urlparse
 from urllib import urlencode
 
 from tracgviz.api import ITracLinksHandler
+from tracgviz.util import dummy_request
 
 __metaclass__ = type
 
 class TracLinksFormatter(Formatter):
-    """A wiki formatter which pays attention to nothing but trac 
+    r"""A wiki formatter which pays attention to nothing but trac 
     links.
     """
     for fmt_name in (x for x in Formatter.__dict__ \
     del fmt_name
 
 class FirstTracLinkFormatter(TracLinksFormatter):
-    """Gather the first TracLink found.
+    r"""Gather the first TracLink found.
     """
     def __init__(self, env, context):
         super(FirstTracLinkFormatter, self).__init__(env, context)
         """
         # FIXED: Generate absoulte links
         # req, env, ctx = formatter.req, formatter.env, formatter.context
-        req, env = formatter.req, formatter.env
+        if isinstance(formatter, Environment):
+          env = formatter
+          req = dummy_request(env)
+        else:
+          req, env = formatter.req, formatter.env
         ctx = Context.from_request(req, absurls=True)
         abs_ref, href = (req or env).abs_href, (req or env).href
         lf = FirstTracLinkFormatter(env, ctx)
                         " solid black|0px,1px black"
             )
     
-    def expand_macro(self, formatter, name, contents):
-        _, args = parse_args(contents)
-        self.log.debug('IG: iGoogleGadgetMacro -> args = %s', args)
+    def do_expand_macro(self, formatter, args):
+        r"""Return the script tag needed to embed the iGoogle gadget 
+        considering the arguments supplied in the call to the macro.
+        """
         url = self.resolve_link_url(formatter, args.get('url', ''))
         if not url:
             raise TracError("Unknown gadget : 'url' argument " \
         
         self.log.debug("IG: Gadget preferences %s", gadget_params)
         return Markup('<script src="%s"></script>' % (url,))
+    
+    def expand_macro(self, formatter, name, contents):
+        r"""Expand iGoogleGadget macro.
+        """
+        _, args = parse_args(contents)
+        self.log.debug('IG: iGoogleGadgetMacro -> args = %s', args)
+        return self.do_expand_macro(formatter, args)
+
+GOOGLE_MODULES_URL = 'http://www.google.com/ig/modules/%s.xml'
+
+class GoogleVizGadgets(Component):
+    r"""Provide shortcuts based on WikiFormatting so as to specify 
+    URLs for (standard) Google Visualization Gadgets.
+    
+    Syntax -> 
+    gadget:google:modules:<chart_name>
+    
+    Example(s) ->
+    - gadget:google:modules:line-chart
+      http://www.google.com/ig/modules/line-chart.xml
+    """
+    implements(ITracLinksHandler)
+    
+    # ITracLinksHandler methods
+    def _format_link(self, formatter, ns, viz_name, label):
+        title = "Google's Official Visualization: %s" % (viz_name,)
+        url = GOOGLE_MODULES_URL % (viz_name,)
+        return formatter._make_ext_link(url, label, title)
+    
+    def get_link_resolvers(self):
+        r"""Return an iterable over (namespace, formatter) tuples.
+
+        Each formatter should be a function of the form
+        fmt(formatter, ns, target, label), and should
+        return some HTML fragment.
+        The `label` is already HTML escaped, whereas the `target` is not.
+        """
+        yield (('gadget', 'google', 'modules'), self._format_link)
+
+class GadgetAliases(Component):
+  r"""A class allowing to write aliases for favorite gadgets in wiki 
+  pages, so it's quite easy to remember how to embed it. For further 
+  details read the documentation for option `gadgets.aliases`.
+  """
+  implements(IWikiMacroProvider)
+  
+  GOOGLE_GADGETS = [
+                      ('MotionChart',   'motionchart'),
+                      ('LineChart',     'line-chart'),
+                    ]
+  aliases = ListOption('gadgets', 'aliases', \
+              default=','.join('='.join([nm, GOOGLE_MODULES_URL % (gvid,)]) \
+                                for nm, gvid in GOOGLE_GADGETS), \
+              keep_empty=False,
+              doc="""A comma separated list of macro name followed """
+                  """by one gadget URL. Both values are separated """
+                  """using the equal sign (=). If a macro with this """
+                  """name is written anywhere WikiFormatting is """
+                  """accepted then the gadget hosted at URL will be """
+                  """embedded (e.g. `[[MotionChart(...)]]` is """
+                  """actually equivalent to """
+                  """`[[iGoogleGadget(http://www.google.com/ig/"""
+                  """modules/motionchart.xml, ...)]]` provided that """
+                  """`MotionChart=http://www.google.com/ig/"""
+                  """modules/motionchart.xml` be included in this """
+                  """option).""")
+  
+  def __init__(self):
+    try:
+      self.base_macro = iGoogleGadgetMacro(self.env)
+    except:
+      self.log.exception("Error loading iGoogleGadgetMacro. Is it enabled?")
+      raise TracError("Error loading iGoogleGadgetMacro. Is it enabled?")
+    try:
+      def cfg(nm, val):
+        return nm.strip(), \
+                self.base_macro.resolve_link_url(self.env, val.strip())
+      
+      self.macro_map = dict(cfg(*x.split('=', 1)) for x in self.aliases)
+    except:
+      self.log.exception("Invalid value in option gadgets.aliases")
+      raise TracError("Invalid value in option gadgets.aliases")
+  
+  # IWikiMacroProvider methods
+  def get_macros(self):
+      r"""Return the macro names configured in `gadgets.aliases`.
+      """
+      return self.macro_map.iterkeys()
+  
+  MACRO_DESC = r"""%(macro)s macro is a shorcut used to embed the 
+      gadget hosted at %(url)s. In this case the following wiki 
+      expressions are equivalent.
+      
+      {{{
+      [[%(macro)s(arg1=value1,arg2=value2)]]
+      [[iGoogleGadget(url=%(url)s,arg1=value1,arg2=value2)]]
+      }}}
+      """
+  
+  def get_macro_description(self, name):
+      r"""Return a description of the macro with the specified name.
+      """
+      try:
+        url = self.macro_map[name]
+      except KeyError:
+        return ""
+      else:
+        return self.MACRO_DESC % dict(macro=name, url=url)
+  
+  def render_macro(self, req, name, content):
+      """Deprecated."""
+      raise NotImplementedError("Only Trac>=0.11 is supported")
+  
+  def expand_macro(self, formatter, name, content):
+      r"""Called by the formatter when rendering the parsed wiki text.
+
+      (since 0.11)
+      """
+      try:
+        url = self.macro_map[name]
+      except KeyError:
+        raise TracError("Unknown alias for iGoogleGadget '%s'" % (name,))
+      else:
+        _, args = parse_args(content)
+        try:
+          _ = args['url']
+          raise TracError("Argument 'url' cannot be specified.")
+        except KeyError:
+          args['url'] = url
+          return self.base_macro.do_expand_macro(formatter, args)

trac-dev/gviz/proto.py

 #   limitations under the License.
 
 r"""Protocol handlers for the different versions of Google 
-Visualization API.
+Visualisation API.
 
 Supported versions : 0.5
 
 from trac.config import Option
 from trac.env import IEnvironmentSetupParticipant
 from trac.perm import PermissionError
+from trac.ticket.query import QuerySyntaxError
 from trac.util import get_pkginfo
 from trac.web.api import IRequestHandler
 from pkg_resources import resource_string, resource_filename
 # GViz-specific exceptions
 from api import GVizDataNotModifiedError, GVizNotSupported, \
                 GVizUnknownProvider, GVizDataNotModifiedError, \
-                GVizBadRequest, GVizNotAuthenticatedError
+                GVizBadRequest, GVizNotAuthenticatedError, \
+                GVizInvalidConfig
 
 JSON_MIME_TYPE = 'text/plain'
 
 class GViz_0_5(BaseGVizHandler):
-    r"""Implementation of the Google Visualization API data source
+    r"""Implementation of the Google Visualisation API data source
     protocol (version 0.5)
     """
     # implements(IGvizProtocolHandler)
                             ['access_denied', 'Access Denied'],
                     GVizNotSupported : \
                             ['not_supported', 'Not supported'],
+                    NotImplementedError : \
+                            ['not_supported', 'Not implemented'],
+                    QuerySyntaxError : \
+                            ['internal_error', 'Syntax error in ' \
+                                                'Trac query'],
                     GVizUnknownProvider : \
                             ['unknown_data_source_id', 'Not Found'],
                     GVizBadRequest : \
-                            ['invalid_request', 'Bad request']
+                            ['invalid_request', 'Bad request'],
+                    GVizInvalidConfig : \
+                            ['internal_error', 'Invalid configuration'],
                    }
     
     def _error(self, req, error, reqId, version, responseHandler):
         if e is not None:
             contents = e.stream_contents(_table)
             if out == 'json':
-                if sig is not None and self.hash_obj is not None:
-                    hash_obj = self.hash_obj.copy()
-                    hash_obj.update(contents)
-                    hash_str = hash_obj.hexdigest()
-                    if hash_str == sig:
-                        exc = GVizDataNotModifiedError('')
-                        self._error(_req, exc, reqId, version, responseHandler)
-                    hash_str = "'sig':'%s'," % (hash_str,)
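+                # Always compute the hash when a hash method is configured; 
+                # only short-circuit with "not modified" when the client 
+                # sent a matching `sig` value.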
+                if self.hash_obj is not None:
+                  hash_obj = self.hash_obj.copy()
+                  hash_obj.update(contents)
+                  hash_str = hash_obj.hexdigest()
+                  if sig is not None and hash_str == sig:
+                    exc = GVizDataNotModifiedError('')
+                    self._error(_req, exc, reqId, version, responseHandler)
+                  hash_str = "'sig':'%s'," % (hash_str,)
                 else:
                     hash_str = ""
                 contents = "%(h)s({'version':'%(v)s', " \

trac-dev/gviz/rpc.py

+#!/usr/bin/env python
+# -*- coding: UTF-8 -*-
+
+# Copyright 2009-2011 Olemis Lang <olemis at gmail.com>
+#
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+
+r"""RPC handlers not included in TracXmlRpcPlugin and used to 
+implement some data providers supporting Google Visualisation API
+protocol.
+
+Copyright 2009-2011 Olemis Lang <olemis at gmail.com>
+Licensed under the Apache License, Version 2.0 
+"""
+
+__all__ = 'TimelineRPC', 'ReportRPC', 'VersionControlRPC'
+
+from trac.core import implements, ExtensionPoint, Component
+from trac.mimeview.api import Mimeview, Context
+from trac.ticket.report import ReportModule
+from trac.ticket.query import Query, QuerySyntaxError
+from trac.timeline.api import ITimelineEventProvider
+from trac.timeline.web_ui import TimelineModule
+from trac.util.datefmt import _epoc
+from trac.util.translation import _
+from trac.versioncontrol.api import RepositoryManager, NoSuchNode, \
+                                    NoSuchChangeset
+from trac.versioncontrol.web_ui.browser import CHUNK_SIZE 
+from trac.web.href import Href
+
+from datetime import date, datetime, time
+from fnmatch import fnmatch
+from itertools import imap, repeat
+from os.path import splitext
+from tracrpc.api import IXMLRPCHandler
+import types
+from urlparse import urlunparse, urlparse
+import xmlrpclib
+
+from util import get_column_desc, rpc_to_datetime, rpc_opt_sigs
+
+__metaclass__ = type
+
+#--------------------------------------------------
+#   Event Timeline RPC
+#--------------------------------------------------
+
+class TimelineRPC(Component):
+    r""" An interface to Trac's timeline module.
+    """ 
+    implements(IXMLRPCHandler)
+    sources = ExtensionPoint(ITimelineEventProvider)
+
+    def __init__(self):
+#        self._module = TimelineModule(self.env)
+        self._event_data = TimelineModule(self.env)._event_data
+    
+    # IXMLRPCHandler methods
+    def xmlrpc_namespace(self):
+        return 'timeline'
+    
+    def xmlrpc_methods(self):
+        yield ('TIMELINE_VIEW', 
+                ((list, xmlrpclib.DateTime, xmlrpclib.DateTime, list),
+                 (list, xmlrpclib.DateTime, xmlrpclib.DateTime), 
+                 (list, xmlrpclib.DateTime), 
+                 (list, )), 
+                 self.getEvents)
+        yield ('TIMELINE_VIEW', ((list,),), self.getEventFilters)
+
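+    # Expression evaluated once per timeline event in getEvents(): it 
+    # renders the url, title and description fields in the request 
+    # context and encodes them as UTF-8.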
+    STMT = "(kind, author, date, dateuid, " \
+            "unicode(render('url', ctx)).encode('utf-8', 'replace'), " \
+            "unicode(render('title', ctx)).encode('utf-8', 'replace'), " \
+            "unicode(render('description', ctx)).encode('utf-8', 'replace'))"
+    STMT = compile(STMT, '<string>', 'eval')
+    
+    # Exported methods
+    def getEvents(self, req, start=None, stop=None, filters=None):
+        r"""Retrieve events taking place in a time window.
+        
+        @param start    initial date in time interval
+        @param stop     last date in time interval
+        @param filters  is a list of the enabled filters, 
+                        each item being the name (first item) in the 
+                        tuples returned by 
+                        `ITimelineEventProvider.get_timeline_filters`. 
+                        If none is specified then all available 
+                        filters will be used.
+        @return         a list of events in the time range given by 
+                        the `start` and `stop` parameters. Each item 
+                        is represented as a tuple of the form 
+                        (kind, author, date, timestamp, url, title, desc) 
+                        representing information about each event.
+        
+        
+        Note: In order to retrieve all events related to ticket 
+              changes (e.g. attachments) you need to set
+              timeline.ticket_show_details option in trac.ini to true.
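+        
+        Example (illustrative only; host, credentials and the standard 
+        TracXmlRpcPlugin entry point are assumptions):
+        
+            import xmlrpclib
+            proxy = xmlrpclib.ServerProxy(
+                    'http://user:password@example.com/trac/login/xmlrpc')
+            events = proxy.timeline.getEvents()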
+        """
+        if start is None:
+            start = _epoc
+        else:
+            start = rpc_to_datetime(start, req)
+        if stop is None:
+            stop = datetime.now(req.tz)
+        else:
+            stop = rpc_to_datetime(stop, req)
+        if filters is None:
+            filters = list(f for f, _ in self.getEventFilters(req))
+        ctx = Context.from_request(req, absurls=True)
+        globs = dict(ctx=ctx)
+        return list(eval(self.STMT, globs, self._event_data(p, e)) \
+                                for p in self.sources \
+                                for e in p.get_timeline_events(req, \
+                                            start, stop, filters))
+    
+    def getEventFilters(self, req):
+        r"""Return a list of the filters available to retrieve events 
+        provided by the timeline module. The data returned for each 
+        filter is a binary tuple containing the filter 
+        name as well as its display name.
+        """
+        fdata = dict(fi[:2] for p in self.sources \
+                        for fi in p.get_timeline_filters(req))
+        return fdata.iteritems()
+
+#--------------------------------------------------
+#   Ticket Report RPC
+#--------------------------------------------------
+
+class ReportRPC(Component):
+    r""" An interface to Trac's report module.
+    """ 
+    implements(IXMLRPCHandler)
+    
+    def __init__(self):
+      self.repmdl = ReportModule(self.env)
+    
+    # IXMLRPCHandler methods
+    def xmlrpc_namespace(self):
+        return 'report'
+    
+    def xmlrpc_methods(self):
+        yield ('REPORT_VIEW', ((list,),), self.getAll)
+        yield ('REPORT_VIEW', ((dict, int),), self.get)
+        yield ('REPORT_VIEW', ((dict, int),), self.execute)
+        yield ('REPORT_VIEW', ((list, int),), self.enum_columns)
+
+    # Exported methods
+    def getAll(self, req):
+        r"""Return a list containing the IDs of all available reports.
+        """
+        # Copycat ! I really did nothing
+        db = self.env.get_db_cnx()
+        sql = ("SELECT id FROM report ORDER BY id")
+        cursor = db.cursor()
+        try:
+          cursor.execute(sql)
+          result = cursor.fetchall() or []
+          return (x[0] for x in result)
+        finally:
+          cursor.close()
+    
+    def get(self, req, id):
+        r"""Return information about an specific report as a dict 
+        containing the following fields.
+        
+        - id :            the report ID.
+        - title:          the report title.
+        - description:    the report description.
+        - query:          the query string used to select the tickets 
+                          to be included in this report
+        """
+        sql = "SELECT id,title,query,description from report " \
+                 "WHERE id=%s" % (id,)
+        db = self.env.get_db_cnx()
+        cursor = db.cursor()
+        try:
+          cursor.execute(sql)
+          for report_info in cursor:
+              return dict(zip(['id','title','query','description'], report_info))
+          else:
+              return None
+        finally:
+          cursor.close()
+    
+    def _execute_sql(self, req, id, sql, limit=0):
+        r"""Execute a SQL report and return no more than `limit` rows 
+        (or all rows if limit == 0).
+        """
+        repmdl = self.repmdl
+        db = self.env.get_db_cnx()
+        try:
+          args = repmdl.get_var_args(req)
+        except ValueError,e:
+          raise ValueError(_('Report failed: %(error)s', error=e))
+        try:
+            try:
+              # Paginated exec (>=0.11)
+              exec_proc = repmdl.execute_paginated_report
+              kwargs = dict(limit=limit)
+            except AttributeError:
+              # Legacy exec (<=0.10)
+              exec_proc = repmdl.execute_report
+              kwargs = {}
+            return exec_proc(req, db, id, sql, args, **kwargs)[:2]
+        except Exception, e:
+            db.rollback()
+            raise 
+    
+    def execute(self, req, id):
+        r"""Execute a Trac report.
+        
+        @param id     the report ID.
+        @return       a list containing the data provided by the 
+                      target report.
+        @throws       NotImplementedError if the report definition 
+                      consists of a saved custom query specified 
+                      using a URL.
+        @throws       QuerySyntaxError if the report definition 
+                      consists of a TracQuery containing syntax errors.
+        @throws       Exception in case of detecting any other error.
+        """
+        sql = self.get(req, id)['query']
+        query = ''.join([line.strip() for line in sql.splitlines()])
+        if query and (query[0] == '?' or query.startswith('query:?')):
+          raise NotImplementedError('Saved custom queries specified ' \
+                                  'using URLs are not supported.')
+        elif query.startswith('query:'):
+          query = Query.from_string(self.env, query[6:], report=id)
+          server_url = urlparse(req.base_url)
+          server_href = Href(urlunparse((server_url.scheme, \
+                                        server_url.netloc, \
+                                        '', '', '', '')))
+          def rel2abs(row):
+            """Turn relative value in 'href' into absolute URLs."""
+            self.log.debug('IG: Query Row %s', row)
+            row['href'] = server_href(row['href'])
+            return row
+            
+          return imap(rel2abs, query.execute(req))
+        else:
+          cols, results = self._execute_sql(req, id, sql)
+          return (dict(zip(cols, list(row))) for row in results)
+    
+    def _sql_cursor(self, req, db, id, sql, args, limit=0, offset=0):
+      r"""Retrieve a cursor to access the data returned by a SQL 
+      report.
+      """
+      # Copycat ! ReportModule.execute_paginated_report
+      # I didnt want to but I had no other choice :(
+      repmdl = self.repmdl
+      sql, args = repmdl.sql_sub_vars(sql, args, db)
+      if not sql:
+          raise ValueError(_('Report %(num)s has no SQL query.', num=id))
+      self.log.debug('IG: Executing report with SQL "%s"' % sql)
+      self.log.debug('IG: Request args: %r' % req.args)
+      cursor = db.cursor()
+      
+      # Probe the report SQL to obtain the column names without 
+      # fetching the whole result set.
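+      # For instance (hypothetical report SQL), 'SELECT id, summary FROM ticket'
+      # is probed as 'SELECT * FROM ( SELECT id, summary FROM ticket ) AS tab LIMIT 1'.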
+      get_col_name_sql = 'SELECT * FROM ( ' + sql + ' ) AS tab LIMIT 1'
+      cursor.execute(get_col_name_sql, args)
+      self.env.log.debug("IG: Query SQL(Get col names): " + get_col_name_sql)
+      return cursor
+    
+    def _sql_columns(self, req, id, sql):
+      r"""Retrieve the description of columns returned by a SQL 
+      report.
+      """
+      repmdl = self.repmdl
+      db = self.env.get_db_cnx()
+      try:
+        args = repmdl.get_var_args(req)
+      except ValueError,e:
+        raise ValueError(_('Report failed: %(error)s', error=e))
+      try:
+          cursor = self._sql_cursor(req, db, id, sql, args)
+      except Exception, e:
+          db.rollback()
+          raise 
+      else:
+        self.log.debug('IG: Cursor desc %s', cursor.description)
+        cols = list(get_column_desc(cursor, True))
+        cursor.close()
+        return cols
+    
+    def enum_columns(self, req, id):
+        r"""Retrieve the columns present in a custom report.
+        
+        @param id     the report ID.
+        @return       a list of tuples of the form 
+                      (name, type, [label]).
+        @throws       NotImplementedError if the report definition 
+                      consists of a saved custom query specified 
+                      using a URL.
+        @throws       QuerySyntaxError if the report definition 
+                      consists of a TracQuery containing syntax errors.
+        @throws       Exception in case of detecting any other error.
+        """
+        sql = self.get(req, id)['query']
+        query = ''.join([line.strip() for line in sql.splitlines()])
+        if query and (query[0] == '?' or query.startswith('query:?')):
+          raise NotImplementedError('Saved custom queries specified ' \
+                                  'using URLs are not supported.')
+        elif query.startswith('query:'):
+          query = Query.from_string(self.env, query[6:], report=id)
+          fields = query.fields
+          return [(f['name'], 'string', _(f['label'])) for f in fields] + \
+                  [   ('changetime', 'number', _('Modified')), \
+                      ('time', 'number', _('Created')), \
+                      ('href', 'string', _('URL')), \
+                      ('id', 'number', _('Ticket')), \
+                  ]
+        else:
+          return self._sql_columns(req, id, sql)
+
+#--------------------------------------------------
+#   Version Control RPC
+#--------------------------------------------------
+
+def _normalize_timestamp(repos, req, timestamp, default=None):
+  r"""Normalize datetime and revision numbers. Return only 
+  datetime values.
+  """
+  if isinstance(timestamp, datetime):
+    return timestamp
+  elif isinstance(timestamp, xmlrpclib.DateTime):
+    return rpc_to_datetime(timestamp)
+  elif isinstance(default, (datetime, date, time)): # Return default
+    return default
+  else:
+    return datetime.now(req.tz)
+
+def _filter_revs(seq, repos, req, start, stop, full, \
+                  accessor=None):
+  r"""Filter values in `seq` so that only references to revisions 
+  committed during a given time interval are enumerated. 
+  Yielded items are binary tuples of the form `(revision id, changeset object)`.
+  
+  @param seq        original sequence to be filtered.
+  @param repos      the repository managed by VCS.
+  @param req        object containing information about the user 
+                    requesting information in repository and more.
+  @param start      boundary value. Revisions older than 
+                    this value will not be retrieved. Dates 
+                    and revision numbers are both accepted.
+  @param stop       boundary value. Younger revisions 
+                    will not be retrieved. Dates 
+                    and revision numbers are both accepted.
+  @param full       also retrieve the changeset object for each 
+                    revision.
+  @param accessor   a function used to access the revision value 
+                    stored in each item of the input sequence.
+  @return           a sequence of tuples. The first item is the 
+                    element of the original sequence whose revision 
+                    falls within the input time interval. The second 
+                    is the changeset object or None (depending upon 
+                    the value of the `full` parameter).
+  """
+  if seq is None:
+    return
+  seq = iter(seq)
+  DO_NOTHING, DO_RETURN, DO_YIELD = xrange(3)
+  load_chgset = True
+  if isinstance(start, int) and isinstance(stop, int):
+    load_chgset = False
+    if repos.rev_older_than(start, stop):
+      def cond(rev, chgset):
+        if repos.rev_older_than(rev, start):
+          return DO_RETURN
+        elif repos.rev_older_than(rev, stop):
+          return DO_YIELD
+        else:
+          return DO_NOTHING
+    else:
+      return                      # `start` committed after `stop`
+  elif isinstance(start, int):
+    if stop is None:
+      load_chgset = False
+      def cond(rev, chgset):
+        # Processing starts at the youngest revision, so no revisions need to be skipped
+        return repos.rev_older_than(rev, start) and DO_RETURN or DO_YIELD
+    else:
+      ts = _normalize_timestamp(repos, req, stop)
+      stop = repos.youngest_rev
+      def cond(rev, chgset):
+        if repos.rev_older_than(rev, start):
+          return DO_RETURN
+        else:
+          if chgset.date < ts:
+            return DO_YIELD
+          else:
+            return DO_NOTHING  
+  elif isinstance(stop, int):
+    if start is None:
+      load_chgset = False
+      def cond(rev, chgset):
+        # Iteration starts at `stop` and there is no lower boundary,
+        # so every accessible revision is yielded.
+        return DO_YIELD
+    else:
+      ts = _normalize_timestamp(repos, req, start, _epoc)
+      start = 0
+      def cond(rev, chgset):
+        # We start from `stop` so no need for DO_NOTHING ;)
+        if chgset.date < ts:
+            return DO_RETURN
+        else:
+          return DO_YIELD
+  else:
+    start_ts = _normalize_timestamp(repos, req, start, _epoc)
+    stop_ts = _normalize_timestamp(repos, req, stop)
+    start, stop = 0, repos.youngest_rev
+    def cond(rev, chgset):
+      ts = chgset.date
+      if ts < start_ts:
+          return DO_RETURN
+      elif ts < stop_ts:
+          return DO_YIELD
+      else:
+          return DO_NOTHING
+  # Search backwards
+  load_chgset = load_chgset or full
+  while True:         # Stops when StopIteration is raised by seq
+    item = seq.next()
+    if accessor:
+      rev = accessor(item)
+    else:
+      rev = item
+    if repos.authz.has_permission_for_changeset(rev):
+      try:
+        chgset = load_chgset and repos.get_changeset(rev) or None
+      except NoSuchChangeset:
+        continue
+      action = cond(rev, chgset)
+      if action == DO_RETURN:
+          return
+      elif action == DO_YIELD:    # Implicit DO_NOTHING
+        if full:
+          yield item, chgset
+        else:
+          yield item, None
+
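+# Illustrative example (comment only, revision IDs are hypothetical): with
+# two integer boundaries and a strict `rev_older_than`, the `stop` revision
+# itself is skipped and iteration ends once a revision older than `start`
+# shows up. Assuming the requesting user may see every changeset:
+#
+#   revs = iter([105, 104, 103, 102, 101])        # youngest first
+#   list(_filter_revs(revs, repos, req, 102, 105, False))
+#   # -> [(104, None), (103, None), (102, None)]
+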
+class VersionControlRPC(Component):
+    r""" An interface to Trac's Repository and RepositoryManager.
+    """ 
+    implements(IXMLRPCHandler)
+
+    # IXMLRPCHandler methods
+    def xmlrpc_namespace(self):
+        return 'source'
+    
+    def xmlrpc_methods(self):
+        yield ('BROWSER_VIEW', 
+                ((list, list, str, int, bool, int),
+                 (list, list, str, int, bool), 
+                 (list, list, str, int), 
+                 (list, list),), 
+                 self.ls)
+        yield ('BROWSER_VIEW', 
+                ((list, list, int),
+                 (list, list),), 
+                 self.getFileAttributes)
+        opt_types = [int, xmlrpclib.DateTime]
+        yield ('CHANGESET_VIEW', 
+                tuple(rpc_opt_sigs(list, None, opt_types, \
+                                    opt_types, [bool])
+                  ), 
+                 self.getRevisions)
+        yield ('CHANGESET_VIEW', 
+                ((list, str, int, xmlrpclib.DateTime),
+                 (list, str, int, int),
+                 (list, str, int),
+                 (list, str),), 
+                 self.getFileHistory)
+                 
+    # Exported methods
+    def ls(self, req, files, filter_by=None, rev=None, rec=False, depth=None):
+        r"""List information about the FILEs. The root path makes 
+        reference to the top-level folder configured for the 
+        repository according to some options in `trac.ini`. File path 
+        separator is always `/`.
+        
+        @param files      target files and|or folders
+        @param filter_by  a UNIX filename pattern used to filter the 
+                          results.
+        @param rev        target revision number. If missing (or 
+                          negative) it defaults to HEAD (youngest).
+        @param rec        list files contained in this folder and 
+                          its subfolders recursively.
+        @param depth      if recursive mode is on, specify the maximum
+                          recursion level. In this case use `0` for 
+                          files in the same folder (i.e. no recursion), 
+                          `1` for files in folder and its immediate 
+                          child entries, `2`, `3` ... and so on. If 
+                          this value is negative then full recursion 
+                          is performed.
+        @return           a list containing the full name (i.e. 
+                          relative to the root folder in VCS) of all 
+                          the files inside the input folders. File 
+                          names are not generated in order so files 
+                          in a folder and its subfolders may appear 
+                          in different positions.
+        """
+        repos = RepositoryManager(self.env).get_repository(req.authname)
+        if rev < 0: rev = None
+        if depth < 0: depth = None
+        no_depth = depth is None
+        already = dict()
+        
+        for item in files:
+          if isinstance(item, types.StringTypes):
+            try:
+              d, node = 0, repos.get_node(item, rev)
+              path = item
+            except NoSuchNode:
+              continue
+          else:
+            d, node = item
+            path = node.path
+          if path not in already:       # Don't process the same path twice
+            if node.isfile:
+              if not filter_by or fnmatch(path, filter_by):
+                yield path
+                already[path] = None    # Mark filename
+            elif node.isdir:
+              for child in node.get_entries():
+                if not filter_by or fnmatch(child.path, filter_by):
+                  yield child.path
+                  if child.isfile:
+                    # Folders are not marked so that those queued for
+                    # recursion below are still expanded when popped.
+                    already[child.path] = None  # Mark filename
+                if child.isdir and rec and (no_depth or d < depth):
+                  files.append([d + 1, child])
+            else:
+              self.log.error("Unknown node type %s at %s", \
+                                                node.kind, node.path)
+    
+    def getFileAttributes(self, req, files, rev=None):
+        r"""Retrieve the attributes of a group of files. The root 
+        path makes reference to the top-level folder configured for 
+        the repository according to some options in `trac.ini`. File 
+        path separator is always `/`.
+        
+        @param files      target files and|or folders
+        @param rev        target revision number. If missing (or 
+                          negative) it defaults to HEAD (youngest).
+        @return           a list of dictionaries. Each one of them 
+                          contains the attributes of the input file 
+                          at that position. All data is calculated 
+                          with respect to the target revision (see 
+                          `rev` parameter). If the file does not 
+                          exist or was created after committing the 
+                          target revision then `None` is returned 
+                          instead. The following attributes are 
+                          supported:
+                          
+                          - path :    filename (full path from root 
+                                      folder)
+                          - kind :    the type of node (e.g. one of 
+                                        'file' or 'dir') at `path`.
+                          - sz :      file size in bytes
+                          - ext  :    file extension
+                          - mime :    MIME type if known
+                          - changed : Modification date (date of `lastrev`)
+                          - lastrev : Revision (prior to the target 
+                                      revision) when the latest 
+                                      modifications to this file were 
+                                      commited
+                          - log :     Commit message for the last 
+                                      revision (`lastrev`)
+        """
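+        # Illustrative sample element (made-up values; `changed` is a
+        # datetime object):
+        #   {'path': 'trunk/setup.py', 'kind': 'file', 'lastrev': 1234,
+        #    'changed': <datetime>, 'log': 'Fix setup script',
+        #    'sz': 2048, 'ext': 'py', 'mime': 'text/x-python'}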
+        repos = RepositoryManager(self.env).get_repository(req.authname)
+        if rev < 0:
+          rev = None
+        mimeview = Mimeview(self.env)
+        changesets = {}
+        for path in files:
+          try:
+            node = repos.get_node(path, rev)
+          except NoSuchNode:
+            yield None
+            continue
+          _rev = node.rev
+          attrs = dict(path=node.path, kind=node.kind, lastrev=_rev)
+          try:
+            chgset = changesets[_rev]
+          except KeyError:
+            try:
+              changesets[_rev] = chgset = repos.get_changeset(_rev)
+            except NoSuchChangeset:
+              # Cache the miss so the lookup is not repeated for this rev
+              changesets[_rev] = chgset = None
+          if chgset is not None:
+            attrs.update(dict(changed=chgset.date, log=chgset.message))
+          else:
+            attrs['changed'] = attrs['log'] = None
+          if node.isdir:
+            attrs.update(dict(sz=0, ext='', mime=''))
+          elif node.isfile:
+            # Copycat ! from trac.versioncontrol.web_ui.browser
+            # MIME type detection 
+            content = node.get_content()
+            _chunk = content.read(CHUNK_SIZE)
+            mime_type = node.content_type
+            if not mime_type or mime_type == 'application/octet-stream':
+                mime_type = mimeview.get_mimetype(node.name, _chunk) or \
+                            mime_type or 'text/plain'
+            
+            attrs.update(sz=node.get_content_length(), \
+                          ext=splitext(node.path)[-1][1:], \
+                          mime=mime_type)
+          else:
+            self.log.error("Unknown node type %s at %s", \
+                                              node.kind, node.path)
+          yield attrs
+    
+    REV_ATTRS = ['rev', 'message', 'author', 'date']
+    
+    def getRevisions(self, req, since=None, until=None, full=False):
+        r"""Retrieve information about all revisions in a time 
+        interval. Younger revisions should appear first.
+        
+        @param since      boundary value. Revisions older than 
+                          this value will not be retrieved. Dates 
+                          and revision numbers are both accepted.
+        @param until      boundary value. Younger revisions 
+                          will not be retrieved. Dates 
+                          and revision numbers are both accepted.
+        @param full       level of detail used to generate the data. 
+                          If false only revision numbers 
+                          are returned, else tuples of the form 
+                          (rev, message, author, date) are returned 
+                          for each changeset, where :
+                          
+                          - rev     : is the revision number (ID)
+                          - message : commit message provided by 
+                                      Version Control System
+                          - author  : the user that committed changes 
+                                      in this revision.
+                          - date    : the date when this revision was 
+                                      committed (as determined by VCS).
+        @return           a list of data representing each 
+                          revision in interval. The kind of 
+                          information returned is controlled by `full` 
+                          parameter.
+                          
+                          Note: Some revisions may be skipped if 
+                          permissions state that the user performing 
+                          the request has no access to that 
+                          particular changeset.
+        """
+        repos = RepositoryManager(self.env).get_repository(req.authname)
+        def iter_revs(lastrev):
+          rev = lastrev
+          while rev:
+            yield rev
+            rev = repos.previous_rev(rev)
+        seq = iter_revs(isinstance(until, int) and until or \
+                                                  repos.youngest_rev)
+        seq = _filter_revs(seq, repos, req, since, until, full)
+        if full:
+          return (tuple(getattr(chg, a) for a in self.REV_ATTRS) \
+                    for rev, chg in seq)
+        else:
+          return (rev for rev, chg in seq)
+    
+    def getFileHistory(self, req, path, rev=None, since=None):
+        r"""Retrieve information about all the changes performed on a 
+        file or directory in a time interval.
+        
+        @param path       file path in repository.
+        @param since      boundary value. Revisions older than 
+                          this value will not be retrieved. Dates 
+                          and revision numbers are both accepted.
+        @param rev        boundary value. Younger revisions 
+                          will not be retrieved. Only revision 
+                          numbers are allowed in this argument.
+        @return           a list of `(path, rev, chg)` tuples, 
+                          one for each revision in which the target 
+                          was changed. This generator will follow 
+                          copies and moves of a node (if the 
+                          underlying version control system supports
+                          that), which will be indicated by the 
+                          first element of the tuple (i.e. the path) 
+                          changing. Starts with an entry for the 
+                          current revision.
+                          
+                          - path      : path the change was performed 
+                                        upon
+                          - rev       : revision number (ID)
+                          - chg       : the kind of change being 
+                                        performed. Supported values 
+                                        are :
+                            * add     : target was placed under 
+                                        version control
+                            * copy    : target was copied from 
+                                        another location
+                            * delete  : target was deleted
+                            * edit    : target was modified
+                            * move    : target was moved to 
+                                        another location
+                          
+                          Note: Some revisions may be skipped if 
+                          permissions state that the user performing 
+                          the request has no access to that 
+                          particular changeset.
+        """
+        if not isinstance(rev, (int, type(None))):
+          raise ValueError("Revision number must be an integer value")
+        repos = RepositoryManager(self.env).get_repository(req.authname)
+        try:
+          node = repos.get_node(path, rev)
+        except NoSuchNode:
+          return []
+        seq = node.get_history()
+        seq = _filter_revs(seq, repos, req, since, rev, False, \
+                            accessor=lambda x: x[1])
+        return (x[0] for x in seq)
+    
+    def enumChanges(self, req, rev=None):
+        r"""Enumerate all the changes performed at a given revision.
+        
+        @param rev        the target revision. Only revision 
+                          numbers are allowed in this argument.
+        @return           A list of tuples describing every change in 
+                          the changeset. The tuple will contain 
+                          `(path, kind, change, base_path, base_rev)`,
+                          where :
+                          
+                          - change    : the kind of change being 
+                                        performed (for further details 
+                                        read the docstrings of 
+                                        `getFileHistory` method).
+                          - kind      : the type of node (e.g. one of 
+                                        'file' or 'dir') at `path`.
+                          - path      : is the targeted path for the 
+                                        `change` (which is the 
+                                        ''deleted'' path  for a 
+                                        DELETE change).
+                          - base_path : the source path for the
+                                        action (`None` in the case of 
+                                        an ADD change).
+                          - base_rev  : the source rev for the
+                                        action (`-1` in the case of 
+                                        an ADD change).
+                          
+                          Note: Some revisions may be skipped if 
+                          permissions state that the user performing 
+                          the request has no access to that 
+                          particular changeset.
+        """
+        repos = RepositoryManager(self.env).get_repository(req.authname)
+        try:
+          chgset = repos.get_changeset(rev)
+        except NoSuchChangeset:
+          return []
+        return chgset.get_changes()
+
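+# Hypothetical client-side sketch (not part of this module): the methods
+# above are published by the XML-RPC plugin under the `source` namespace,
+# so a remote caller could do something along these lines (URL and
+# credentials are placeholders):
+#
+#   import xmlrpclib
+#   proxy = xmlrpclib.ServerProxy('http://user:pass@example.com/trac/login/xmlrpc')
+#   py_files = proxy.source.ls(['trunk'], '*.py', -1, True, -1)
+#   history = proxy.source.getFileHistory('trunk/setup.py')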

trac-dev/gviz/setup.cfg

-[egg_info]
-tag_build = 
-tag_date = 0
-tag_svn_revision = 0
-
+[egg_info]
+tag_build = 
+tag_date = 0
+tag_svn_revision = 0
+

trac-dev/gviz/setup.py

     (1, 2, 3),
     (1, 3, 1),
     (1, 3, 2),
+    (1, 3, 3),
     ]
     
 latest = '.'.join(str(x) for x in versions[-1])

trac-dev/gviz/stdhash.py

+#!/usr/bin/env python
+# -*- coding: UTF-8 -*-
+
+# Copyright 2009-2011 Olemis Lang <olemis at gmail.com>
+#
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+
+r"""Components reusing the Secure Hash Algorithms already found in 
+Python stdlib.
+
+Copyright 2009-2011 Olemis Lang <olemis at gmail.com>
+Licensed under the Apache License, Version 2.0 
+"""
+
+from trac.core import Component, ExtensionPoint, implements, TracError
+from trac.config import Option
+
+from api import IHashLibrary, GVizInvalidConfig
+
+import hashlib
+from zlib import adler32, crc32
+
+__all__ = 'HashLib', 'ZLibChecksum'
+
+__metaclass__ = type
+
+class HashLib(Component):
+  r"""Secure Hash Algorithms supported by `hashlib` standard module.
+  
+  Supports the following methods:
+    - sha1, sha224, sha256, sha384, sha512 : as defined in FIPS 180-2 
+    - md5 : RSA's MD5 algorithm (defined in Internet RFC 1321)
+    - Additional algorithms may also be available depending upon the 
+      OpenSSL library that Python uses on your platform.
+  """
+  implements(IHashLibrary)
+  
+  # IHashLibrary methods
+  def get_hash_properties(self, method_name):
+    r"""Determine whether the requested method is a standard hash 
+    algorithm (i.e. md5, sha1, sha224, sha256, sha384, and sha512), or 
+    is implemented by the OpenSSL library that Python uses on your 
+    platform, or is not supported by `hashlib` module.
+    
+    @param method_name  the name identifying the hash method
+    @return             `None` if the method is not supported by 
+                        `hashlib` module, a tuple of the form 
+                        (priority, source) otherwise.
+                        
+                        priority: 0   - OpenSSL, 
+                                  199 - standard hash algorithms.
+                        source:   0   - OpenSSL. 
+                                  100 - standard hash algorithms
+    """
+    if (not method_name.startswith('_')) and \
+        method_name != 'new' and \
+        hasattr(hashlib, method_name):
+      return (199, 100)
+    else:
+      try:
+        hashlib.new(method_name)
+        return (0, 0)
+      except ValueError:
+        return None
+    
+  def new_hash_obj(self, method_name, data=None):
+    r"""Create a new hash object.
+    """
+    try:
+      meth = getattr(hashlib, method_name)
+      if data is None:
+        return meth()
+      else:
+        return meth(data)
+    except AttributeError:
+      try:
+        if data is None:
+          return hashlib.new(method_name)
+        else:
+          return hashlib.new(method_name, data)
+      except ValueError:
+        raise GVizInvalidConfig("Unsupported hash algorithm '%s'" \
+                                  % (method_name,))
+
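+# Illustrative usage (hypothetical caller): a component holding an
+# `ExtensionPoint(IHashLibrary)` could pick the provider reporting the
+# highest priority for the configured method and then do, e.g.
+#
+#   hash_obj = provider.new_hash_obj('md5', 'response body')
+#   hash_obj.update('... more data ...')
+#   sig = hash_obj.hexdigest()
+#
+# Objects returned by `HashLib` are plain `hashlib` objects, so `update`,
+# `digest` and `hexdigest` behave exactly as documented in the stdlib.
+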
+class ZLibChecksum(Component):
+  r"""Checksum Algorithms supported by `zlib` standard module.
+  
+  Supports the following methods:
+    - adler32 : Adler-32 checksum of string.
+    - crc32   : Compute a CRC (Cyclic Redundancy Check) checksum of 
+                string. 
+  """
+  implements(IHashLibrary)
+  
+  # IHashLibrary methods
+  def get_hash_properties(self, method_name):
+    r"""Determine whether the requested method is defined by `zlib` 
+    standard module.
+    
+    @param method_name  the name identifying the hash method
+    @return             (199, 100) if `method_name` is either 
+                        `adler32` or `crc32`, or `None` otherwise.
+    """
+    if method_name in ['adler32', 'crc32']:
+      return (199, 100)
+    else:
+      return None
+    
+  class ZLibChecksumObject:
+    r"""Hash objects for zlib checksum methods.
+    """
+    digest_size = 4
+    block_size = 4 # FIX : Dont remember now
+    def __init__(self, method_name):
+      self.args = ()
+      self.chksum_method = {'adler32': adler32,
+                            'crc32': crc32}.get(method_name)
+      if self.chksum_method is None:
+        raise ValueError("Unsupported checksum method '%s'" % \
+                          (method_name,))
+    def update(self, data):
+      r"""Update the hash object with the string arg. Repeated calls 
+      are equivalent to a single call with the concatenation of all 
+      the arguments: m.update(a); m.update(b) is equivalent to 
+      m.update(a+b).
+      """
+      self.args = (self.chksum_method(data, *self.args),)
+    def digest(self):
+      r"""Return the digest of the strings passed to the update() 
+      method so far. This is a string of 4 bytes which may contain 
+      non-ASCII characters, including null bytes.
+      """
+      try:
+        chksum, digest = self.args[0], []
+      except IndexError:    # update() has not been called yet
+        return None
+      else:
+        for x in xrange(4):
+          digest.append(chr(chksum & 0xFF))
+          chksum >>= 8
+        return ''.join(reversed(digest))
+    def hexdigest(self):
+      r"""Like digest() except the digest is returned as a string of 
+      double length, containing only hexadecimal digits. This may be 
+      used to exchange the value safely in email or other non-binary 
+      environments. 
+      """
+      try:
+        # Mask to 32 bits (crc32/adler32 may be negative in Python 2) and
+        # zero-pad so the result is always 8 hexadecimal digits.
+        return '%08x' % (self.args[0] & 0xffffffff)
+      except IndexError:
+        return None
+    def copy(self):
+      # `__class__` is used because the nested class name is not in scope here
+      new_obj = self.__class__(self.chksum_method.__name__)
+      new_obj.args = tuple(self.args)
+      return new_obj
+  
+  def new_hash_obj(self, method_name, data=None):
+    r"""Create a new hash object.
+    """
+    try:
+      ho = self.ZLibChecksumObject(method_name)
+    except ValueError:
+      raise GVizInvalidConfig("Unsupported hash algorithm '%s'" \
+                                  % (method_name,))
+    else:
+      if data is not None:
+        ho.update(data)
+      return ho
+
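+# Quick illustration (hypothetical snippet, values not asserted): the
+# checksum objects mimic the `hashlib` interface, and repeated updates
+# accumulate into a single running checksum:
+#
+#   chk = ZLibChecksum(env).new_hash_obj('crc32')   # `env` is a Trac Environment
+#   chk.update('hello ')
+#   chk.update('world')                             # same as crc32('hello world')
+#   chk.digest()      # 4-byte binary string
+#   chk.hexdigest()   # 8 hexadecimal digits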

trac-dev/gviz/ticket.py

 #   See the License for the specific language governing permissions and
 #   limitations under the License.
 
-"""Data sources used to publish the data managed by Trac tickets
+r"""Data sources used to publish the data managed by Trac tickets
 system. This includes ticket specific data, and information about 
 project milestones, versions, components, and also ticket types, 
-status, priority, severity and resolution values.
+status, priority, severity and resolution values. Access to 
+data in TracReports is also handled in here.
 
 Note: It relies on Trac XML-RPC plugin.
 
 Licensed under the Apache License, Version 2.0 
 """
 
-from trac.util.datefmt import utc, _epoc
+from trac.core import Component, implements, TracError
+from trac.ticket.query import Query, QuerySyntaxError
+from trac.ticket.report import ReportModule
 from trac.ticket.roadmap import RoadmapModule, \
                                 get_tickets_for_milestone, \
                                 apply_ticket_permissions, \
                                 get_ticket_stats, milestone_stats_data
+from trac.util.datefmt import utc, _epoc
+from trac.util.text import to_unicode
+from trac.util.translation import _
+from trac.web.href import Href
 
 from api import gviz_col, gviz_param, GVizBadRequest
 from util import GVizXMLRPCAdapter, map_with_id, map_many_with_id, \
-                    map_value_with_id
+                    map_value_with_id, dummy_request, get_column_desc, \
+                    REQFIELDS_DESC, REQFIELDS_DEFAULTS
 
 from datetime import datetime
 from xmlrpclib import DateTime
-from itertools import chain, repeat
+from itertools import chain, repeat, imap
 import types
+from urlparse import urlunparse, urlparse
 
 #--------------------------------------------------
 # Ticket models and enums
 #--------------------------------------------------
 
 class GVizModelDataSource(GVizXMLRPCAdapter):
-    """Base class for all those data sources exposing Trac resources'
+    r"""Base class for all those data sources exposing Trac resources'
     details with the help of XML RPC handlers following the model 
     adopted in the standard XML RPC implementation. In other words, 
     this data source relies on an XML RPC handler (determined by the
-    namespace name returned by `gviz_namespace` method) exporting 
-    the following methods :
+    namespace name returned by one of `xmlrpc_namespace` or 
+    `gviz_namespace` methods) exporting the following methods :
     
     - getAll : Get a list of all resources names.
     - get : Return a mapping from resource attributes to its value.
         return self._schema
     
     def _clean_dates(self, result):