Commits

Chris Mutel  committed e4e5acd

0.9.0-alpha. Too many changes to count

  • Parent commits 43ad8ba


Files changed (11)

-Brightway2-calc
-===============
+Brightway2 calculations
+=======================
 
-The calculation engine for the Brightway2 life cycle assessment framework.
+This package provides the calculation engine for the `Brightway2 life cycle assessment framework <http://brightwaylca.org>`_. `Online documentation <http://bw2calc.readthedocs.org>`_ is available, and the source code is hosted on `Bitbucket <https://bitbucket.org/cmutel/brightway2-calc>`_.
 
 The emphasis here has been on the speed of solving the linear systems, whether for normal LCA calculations, graph traversal, or Monte Carlo uncertainty analysis.
 
 The Monte Carlo LCA class can do about 30 iterations a second (on a 2011 MacBook Pro). Instead of doing LU factorization, it uses an initial guess and the conjugate gradient squared algorithm.
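The warm-start idea described above can be sketched with SciPy's conjugate gradient squared solver; the tiny matrix and demand vector here are illustrative stand-ins, not Brightway2 data structures:

```python
# Sketch: iterative Monte Carlo solves via conjugate gradient squared
# (CGS) with a warm start, instead of refactorizing each iteration.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cgs

# A small, well-conditioned "technosphere" matrix and demand vector
A = csr_matrix(np.array([[1.0, -0.5], [0.0, 1.0]]))
demand = np.array([1.0, 0.0])

# First iteration: no guess available yet
guess, info = cgs(A, demand)

# Later iterations: matrix values change slightly under uncertainty,
# but the previous solution is a good initial guess (x0), so the
# solver converges in few iterations.
A_perturbed = csr_matrix(np.array([[1.0, -0.51], [0.0, 1.0]]))
solution, info = cgs(A_perturbed, demand, x0=guess)
```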
 
 The multiprocessing Monte Carlo class (ParallelMonteCarlo) can do about 100 iterations a second, using 7 virtual cores. The MultiMonteCarlo class, which does Monte Carlo for many processes (and hence can re-use the factorized technosphere matrix), can do about 500 iterations a second, using 7 virtual cores. Both these algorithms perform best when the initial setup for each worker job is minimized, e.g. by dispatching big chunks.
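The chunked-dispatch point above can be sketched as follows: per-worker setup cost is paid once per chunk rather than once per iteration. A thread pool stands in for the multiprocessing pool that ParallelMonteCarlo and MultiMonteCarlo actually use; the names and sizes are illustrative, not the real API:

```python
# Sketch: amortize per-worker setup by dispatching big chunks of
# Monte Carlo iterations to each worker.
from concurrent.futures import ThreadPoolExecutor
import random

def run_chunk(chunk_size):
    # Expensive per-worker setup (e.g. building matrices) would go
    # here, amortized over the whole chunk of iterations
    rng = random.Random(chunk_size)
    return [rng.random() for _ in range(chunk_size)]

iterations, n_chunks = 1000, 4
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_chunk, [iterations // n_chunks] * n_chunks))
samples = [x for chunk in results for x in chunk]
```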
-
-Roadmap
--------
-
-* 0.8: Current release. All LCA and Monte Carlo functions.
-* 0.9: Documentation and inclusion of tests from Brightway 1 (updated and with coverage)
-* 1.0: Bugfixes and small changes from 0.9
-* 1.1: Graph traversal

File bw2calc/__init__.py

 from .lca import LCA
 from .simple_regionalized import SimpleRegionalizedLCA
 from .monte_carlo import MonteCarloLCA, ParallelMonteCarlo, MultiMonteCarlo
+from .graph_traversal import GraphTraversal, edge_cutter, node_pruner, \
+    d3_fd_graph_formatter

File bw2calc/graph_traversal.py

+# -*- coding: utf-8 -*-
+from __future__ import division
+from . import LCA
+from brightway2 import mapping, Database
+from heapq import heappush, heappop
+import numpy as np
+
+
+class GraphTraversal(object):
+    """Master class for graph traversal."""
+    def calculate(self, demand, method, cutoff=0.005, max_calc=1e5):
+        counter = 0
+
+        lca, supply, score = self.build_lca(demand, method)
+        if score == 0:
+            raise ValueError("Zero total LCA score makes traversal impossible")
+
+        # Create matrix of LCIA CFs times biosphere flows, as these don't
+        # change. This is also the unit score of each activity.
+        characterized_biosphere = np.array(
+            (lca.characterization_matrix.data * \
+            lca.biosphere_matrix.data).sum(axis=0)).ravel()
+
+        heap, nodes, edges = self.initialize_heap(demand, lca, supply,
+            characterized_biosphere)
+        nodes, edges, counter = self.traverse(heap, nodes, edges, counter,
+            max_calc, cutoff, score, supply, characterized_biosphere, lca)
+        nodes = self.add_metadata(nodes, lca)
+
+        return {
+            'nodes': nodes,
+            'edges': edges,
+            'lca': lca,
+            'counter': counter,
+            }
+
+    def initialize_heap(self, demand, lca, supply, characterized_biosphere):
+        heap, nodes, edges = [], {}, []
+        for activity, value in demand.iteritems():
+            index = lca.technosphere_dict[mapping[activity]]
+            heappush(heap, (1, index))
+            nodes[index] = {
+                "amount": supply[index],
+                "cum": self.cumulative_score(index, supply,
+                    characterized_biosphere, lca),
+                "ind": self.unit_score(index, supply, characterized_biosphere)
+                }
+            # -1 is a special index for total demand, which can be
+            # composite. Initial edges are inputs to the
+            # functional unit.
+            edges.append({
+                "to": -1,
+                "from": index,
+                "amount": value,
+                "impact": lca.score,
+                })
+        return heap, nodes, edges
+
+    def build_lca(self, demand, method):
+        lca = LCA(demand, method)
+        lca.lci()
+        lca.lcia()
+        lca.decompose_technosphere()
+        return lca, lca.solve_linear_system(), lca.score
+
+    def cumulative_score(self, index, supply, characterized_biosphere, lca):
+        demand = np.zeros((supply.shape[0],))
+        demand[index] = supply[index]
+        return float((characterized_biosphere * lca.solver(demand)).sum())
+
+    def unit_score(self, index, supply, characterized_biosphere):
+        return float(characterized_biosphere[index] * supply[index])
+
+    def traverse(self, heap, nodes, edges, counter, max_calc, cutoff,
+            total_score, supply, characterized_biosphere, lca):
+        """
+Build a directed graph of the supply chain.
+
+Use a heap queue to store a sorted list of processes that need to be examined,
+and traverse the graph using an "importance-first" search.
+        """
+        while heap and counter < max_calc:
+            parent_score_inverted, parent_index = heappop(heap)
+            # parent_score = 1 / parent_score_inverted
+            col = lca.technosphere_matrix.data[:, parent_index].tocoo()
+            # Multiply by -1 because technosphere values are negative
+            # (consumption of inputs)
+            children = [(col.row[i], -1 * col.data[i]) for i in xrange(
+                col.row.shape[0])]
+            for activity, amount in children:
+                # Skip values on technosphere diagonal or coproducts
+                if activity == parent_index or amount <= 0:
+                    continue
+                counter += 1
+                cumulative_score = self.cumulative_score(activity, supply,
+                    characterized_biosphere, lca)
+                if abs(cumulative_score) < abs(total_score * cutoff):
+                    continue
+                # Edge format is (to, from, mass amount, cumulative impact)
+                edges.append({
+                    "to": parent_index,
+                    "from": activity,
+                    # The cumulative impact directly due to this link (weight)
+                    # Amount of this link * amount of parent demanding link
+                    "amount": amount * nodes[parent_index]["amount"],
+                    # Amount of this input
+                    "impact": amount * nodes[parent_index]["amount"] \
+                    # times impact per unit of this input
+                        * cumulative_score / supply[activity]
+                    })
+                # Want multiple incoming edges, but don't add existing node
+                if activity in nodes:
+                    continue
+                nodes[activity] = {
+                    # Total amount of this flow supplied
+                    "amount": supply[activity],
+                    # Cumulative score from all flows of this activity
+                    "cum": cumulative_score,
+                    # Individual score attributable to environmental flows
+                    # coming directly from or to this activity
+                    "ind": self.unit_score(activity, supply,
+                        characterized_biosphere)
+                    }
+                heappush(heap, (1 / cumulative_score, activity))
+
+        return nodes, edges, counter
+
+    def add_metadata(self, nodes, lca):
+        rm = dict([(v, k) for k, v in mapping.data.iteritems()])
+        rt = dict([(v, k) for k, v in lca.technosphere_dict.iteritems()])
+        lookup = dict([(index, self.get_code(index, rm, rt)) for index in nodes if index != -1])
+        new_nodes = [(-1, {
+            "code": "fu",
+            "cum": lca.score,
+            "ind": 1e-6 * lca.score,
+            "amount": 1,
+            "name": "Functional unit",
+            "cat": "Functional unit"
+            })]
+        for key, value in nodes.iteritems():
+            if key == -1:
+                continue
+            code = lookup[key]
+            db_data = Database(code[0]).load()
+            value.update({
+                "code": code,
+                "name": db_data[code]["name"],
+                "cat": db_data[code]["categories"][0],
+                })
+            new_nodes.append((key, value))
+        return dict(new_nodes)
+
+    def get_code(self, index, rev_mapping, rev_tech):
+        return rev_mapping[rev_tech[index]]
+
+
+def edge_cutter(nodes, edges, total, limit=0.0025):
+    """The default graph traversal includes links which might be of small magnitude. This function cuts links that have small cumulative impact."""
+    to_delete = []
+    for i, e in enumerate(edges):
+        if e["impact"] < (total * limit):
+            to_delete.append(i)
+    return [e for i, e in enumerate(edges) if i not in to_delete]
+
+
+def node_pruner(nodes, edges):
+    """Remove nodes which have no links remaining after edge cutting."""
+    good_nodes = set([e["from"] for e in edges]).union(
+        set([e["to"] for e in edges]))
+    return dict([(k, v) for k, v in nodes.iteritems() if k in good_nodes])
+
+
+def extract_edges(arr, mapping, ignore):
+    edges = []
+    for i in range(arr.shape[0]):
+        if mapping[i] in ignore:
+            continue
+        for j in range(arr.shape[1]):
+            if mapping[j] in ignore or i == j or arr[i, j] == 0:
+                continue
+            edges.append((mapping[j], mapping[i], float(arr[i, j])))
+    return edges
+
+
+def rationalize_supply_chain(nodes, edges, total, limit=0.005):
+    """
+This function takes nodes and edges, removes nodes with low individual scores, and reroutes their edges.
+    """
+    nodes_to_delete = [key for key, value in nodes.iteritems() if \
+        value["ind"] < (total * limit) and key != -1]
+    size = len(nodes)
+    arr = np.zeros((size, size), dtype=np.float32)
+    arr_map = dict([(key, index) for index, key in enumerate(sorted(nodes.keys()))])
+    rev_map = dict([(v, k) for k, v in arr_map.iteritems()])
+    for outp, inp, amount in edges:
+        arr[arr_map[inp], arr_map[outp]] = amount
+    for node in nodes_to_delete:
+        index = arr_map[node]
+        increment = (arr[:, index].reshape((-1, 1)) * arr[index, :].reshape((1, -1)))
+        arr += increment
+    new_edges = extract_edges(arr, rev_map, nodes_to_delete)
+    new_nodes = dict([(k, v) for k, v in nodes.iteritems() if k not in nodes_to_delete])
+    return new_nodes, new_edges
+
+
+def d3_fd_graph_formatter(nodes, edges, total):
+    # Sort node ids by cumulative score, ascending
+    node_ids = [x[1] for x in sorted(
+        [(v["cum"], k) for k, v in nodes.iteritems()])]
+    new_nodes = [nodes[i] for i in node_ids]
+    lookup = dict([(key, index) for index, key in enumerate(node_ids)])
+    new_edges = [{
+        "source": lookup[e["to"]],
+        "target": lookup[e["from"]],
+        "amount": e["impact"]
+        } for e in edges]
+    return {"edges": new_edges, "nodes": new_nodes, "total": total}
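The "importance-first" search used by `GraphTraversal.traverse` above can be illustrated on a toy graph. Python's `heapq` is a min-heap, so pushing `1 / score` pops the highest-impact node first; the graph and scores below are illustrative only:

```python
# Sketch: importance-first traversal with a heap, highest score first
from heapq import heappush, heappop

# node -> (cumulative score, children)
graph = {
    "fu": (10.0, ["steel", "power"]),
    "steel": (6.0, ["coal"]),
    "power": (3.0, ["coal"]),
    "coal": (2.0, []),
}

visited, heap, order = set(), [], []
heappush(heap, (1 / graph["fu"][0], "fu"))
while heap:
    _, node = heappop(heap)
    if node in visited:
        continue  # allow multiple incoming edges, visit node once
    visited.add(node)
    order.append(node)
    for child in graph[node][1]:
        # Inverted score: min-heap pops the most important node next
        heappush(heap, (1 / graph[child][0], child))
```

Nodes come off the heap in descending score order (`fu`, `steel`, `power`, `coal`), so the highest-impact parts of the supply chain are examined before any cutoff is reached.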

File bw2calc/lca.py

 from brightway2 import databases, methods, mapping
 from bw2data.proxies import OneDimensionalArrayProxy, \
     CompressedSparseMatrixProxy
-from bw2data.utils import MAX_INT_32
+from bw2data.utils import MAX_INT_32, TYPE_DICTIONARY
 from fallbacks import dicter
 from scipy.sparse.linalg import factorized, spsolve
 from scipy import sparse
             self.config.dir, "processed", "%s.pickle" % name), "rb")
             ) for name in self.databases])
         # Technosphere
-        self.tech_params = params[np.where(params['technosphere'] == True)]
-        self.bio_params = params[np.where(params['technosphere'] == False)]
+        self.tech_params = params[
+            np.hstack((
+                np.where(params['type'] == TYPE_DICTIONARY["technosphere"])[0],
+                np.where(params['type'] == TYPE_DICTIONARY["production"])[0]
+                ))
+            ]
+        self.bio_params = params[np.where(params['type'] == TYPE_DICTIONARY["biosphere"])]
         self.technosphere_dict = self.build_dictionary(np.hstack((
-            self.tech_params['input'], self.tech_params['output'],
-            self.bio_params['output'])))
+            self.tech_params['input'],
+            self.tech_params['output'],
+            self.bio_params['output']
+            )))
         self.add_matrix_indices(self.tech_params['input'], self.tech_params['row'],
             self.technosphere_dict)
         self.add_matrix_indices(self.tech_params['output'], self.tech_params['col'],
         self.cf_params = params[np.where(params['index'] != MAX_INT_32)]
 
     def build_technosphere_matrix(self, vector=None):
-        vector = self.tech_params['amount'] if vector is None else vector
+        vector = self.tech_params['amount'].copy() \
+            if vector is None else vector
         count = len(self.technosphere_dict)
-        indices = range(count)
-        # Add ones along the diagonal
-        data = np.hstack((-1 * vector, np.ones((count,))))
-        rows = np.hstack((self.tech_params['row'], indices))
-        cols = np.hstack((self.tech_params['col'], indices))
+        technosphere_mask = np.where(self.tech_params["type"] == \
+            TYPE_DICTIONARY["technosphere"])
+        # Inputs are consumed, so are negative
+        vector[technosphere_mask] = -1 * vector[technosphere_mask]
         # coo_matrix construction is coo_matrix((values, (rows, cols)),
         # (row_count, col_count))
         self.technosphere_matrix = CompressedSparseMatrixProxy(
-            sparse.coo_matrix((data, (rows, cols)), (count, count)).tocsr(),
+            sparse.coo_matrix((vector.astype(np.float64),
+            (self.tech_params['row'], self.tech_params['col'])),
+            (count, count)).tocsr(),
             self.technosphere_dict, self.technosphere_dict)
 
     def build_biosphere_matrix(self, vector=None):
         # coo_matrix construction is coo_matrix((values, (rows, cols)),
         # (row_count, col_count))
         self.biosphere_matrix = CompressedSparseMatrixProxy(
-            sparse.coo_matrix((vector, (self.bio_params['row'],
-            self.bio_params['col'])), (row_count, col_count)).tocsr(),
+            sparse.coo_matrix((vector.astype(np.float64),
+            (self.bio_params['row'], self.bio_params['col'])),
+            (row_count, col_count)).tocsr(),
             self.biosphere_dict, self.technosphere_dict)
 
     def decompose_technosphere(self):
         vector = self.cf_params['amount'] if vector is None else vector
         count = len(self.biosphere_dict)
         self.characterization_matrix = CompressedSparseMatrixProxy(
-            sparse.coo_matrix((vector, (self.cf_params['index'], self.cf_params['index'])),
+            sparse.coo_matrix((vector.astype(np.float64),
+            (self.cf_params['index'], self.cf_params['index'])),
             (count, count)).tocsr(),
             self.biosphere_dict, self.biosphere_dict)
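The `coo_matrix((values, (rows, cols)), shape)` construction noted in the comments above can be shown with a toy technosphere-style matrix; the values here are illustrative:

```python
# Sketch: building a sparse matrix from parameter arrays, then
# converting to CSR for fast arithmetic, as in the methods above
import numpy as np
from scipy import sparse

values = np.array([1.0, -0.5, 1.0])  # production positive, inputs negative
rows = np.array([0, 1, 1])
cols = np.array([0, 0, 1])
matrix = sparse.coo_matrix((values, (rows, cols)), (2, 2)).tocsr()
dense = matrix.toarray()
```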
 

File bw2calc/simple_regionalized.py

 
         We do this by first retrieving the regionalized characterization factors for each location where characterization factors are available. The intermediate data structure ``regionalized_dict`` has the following structure:
 
-        .. code:: python
+        .. code-block:: python
 
             {location_id: (biosphere matrix row number, cf value)}
 
         We then use the ``np.unique`` function to retrieve all technosphere processes, and an index into ``self.tech_params`` for each of them. We can then use this index to get a location for each technosphere process.
 
-        The characterization matrix has dimensions (number of biosphere flows, number of technosphere flows). For each column, we lookup the location code, and then retrieve the cf amounts and row indices from the ``regionalized_dict``. We can then build the ``characterization_matrix``. 
+        The characterization matrix has dimensions (number of biosphere flows, number of technosphere flows). For each column, we look up the location code, and then retrieve the cf amounts and row indices from the ``regionalized_dict``. We can then build the ``characterization_matrix``.
 
         .. note:: There is a lot of duplicate data in ``characterization_matrix``, as characterization factors are provided for each technosphere process, regardless of whether that technosphere location has been seen already.
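The per-column lookup described in this docstring can be sketched as follows; the locations, shapes, and dictionary contents are illustrative stand-ins, not the actual ``SimpleRegionalizedLCA`` internals:

```python
# Sketch: one characterization column per technosphere process,
# selected by that process's location
import numpy as np

# {location_id: (biosphere matrix row number, cf value)}
regionalized_dict = {
    "CH": (0, 2.5),
    "DE": (0, 1.0),
}
# One location per technosphere process (i.e. per matrix column)
process_locations = ["CH", "DE", "CH"]

n_bio, n_tech = 2, len(process_locations)
characterization_matrix = np.zeros((n_bio, n_tech))
for col, location in enumerate(process_locations):
    row, cf = regionalized_dict[location]
    characterization_matrix[row, col] = cf
```

Note the duplication mentioned below: the same ``CH`` factor appears in every column whose process is located in ``CH``.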
 

File bw2calc/tests/__init__.py

+from .lca import LCACalculationTestCase

File bw2calc/tests/lca.py

+from .. import LCA
+from bw2data import *
+from bw2data.tests import BW2DataTest
+import numpy as np
+
+
+class LCACalculationTestCase(BW2DataTest):
+    def add_basic_biosphere(self):
+        biosphere = Database("biosphere")
+        biosphere.register("Made for tests", [], 1)
+        biosphere.write({
+            ("biosphere", 1): {
+                'categories': ['things'],
+                'code': 1,
+                'exchanges': [],
+                'name': 'an emission',
+                'type': 'emission',
+                'unit': 'kg'
+                }})
+        biosphere.process()
+
+    def test_basic(self):
+        test_data = {
+            ("t", 1): {
+                'exchanges': [{
+                    'amount': 0.5,
+                    'input': ('t', 2),
+                    'type': 'technosphere',
+                    'uncertainty type': 0},
+                    {'amount': 1,
+                    'input': ('biosphere', 1),
+                    'type': 'biosphere',
+                    'uncertainty type': 0}],
+                'type': 'process',
+                'unit': 'kg'
+                },
+            ("t", 2): {
+                'exchanges': [],
+                'type': 'process',
+                'unit': 'kg'
+                },
+            }
+        self.add_basic_biosphere()
+        test_db = Database("t")
+        test_db.register("Made for tests", ["biosphere"], 2)
+        test_db.write(test_data)
+        test_db.process()
+        lca = LCA({("t", 1): 1})
+        lca.lci()
+        answer = np.zeros((2,))
+        answer[lca.technosphere_dict[mapping[("t", 1)]]] = 1
+        answer[lca.technosphere_dict[mapping[("t", 2)]]] = 0.5
+        self.assertTrue(np.allclose(answer, lca.supply_array.data))
+
+    def test_production_values(self):
+        test_data = {
+            ("t", 1): {
+                'exchanges': [{
+                    'amount': 2,
+                    'input': ('t', 1),
+                    'type': 'production',
+                    'uncertainty type': 0},
+                    {'amount': 0.5,
+                    'input': ('t', 2),
+                    'type': 'technosphere',
+                    'uncertainty type': 0},
+                    {'amount': 1,
+                    'input': ('biosphere', 1),
+                    'type': 'biosphere',
+                    'uncertainty type': 0}],
+                'type': 'process',
+                'unit': 'kg'
+                },
+            ("t", 2): {
+                'exchanges': [],
+                'type': 'process',
+                'unit': 'kg'
+                },
+            }
+        self.add_basic_biosphere()
+        test_db = Database("t")
+        test_db.register("Made for tests", ["biosphere"], 2)
+        test_db.write(test_data)
+        test_db.process()
+        lca = LCA({("t", 1): 1})
+        lca.lci()
+        answer = np.zeros((2,))
+        answer[lca.technosphere_dict[mapping[("t", 1)]]] = 0.5
+        answer[lca.technosphere_dict[mapping[("t", 2)]]] = 0.25
+        self.assertTrue(np.allclose(answer, lca.supply_array.data))
+
+    def test_substitution(self):
+        test_data = {
+            ("t", 1): {
+                'exchanges': [{
+                    'amount': -1,  # substitution
+                    'input': ('t', 2),
+                    'type': 'technosphere',
+                    'uncertainty type': 0},
+                    {'amount': 1,
+                    'input': ('biosphere', 1),
+                    'type': 'biosphere',
+                    'uncertainty type': 0}],
+                'type': 'process',
+                'unit': 'kg'
+                },
+            ("t", 2): {
+                'exchanges': [],
+                'type': 'process',
+                'unit': 'kg'
+                },
+            }
+        self.add_basic_biosphere()
+        test_db = Database("t")
+        test_db.register("Made for tests", ["biosphere"], 2)
+        test_db.write(test_data)
+        test_db.process()
+        lca = LCA({("t", 1): 1})
+        lca.lci()
+        answer = np.zeros((2,))
+        answer[lca.technosphere_dict[mapping[("t", 1)]]] = 1
+        answer[lca.technosphere_dict[mapping[("t", 2)]]] = -1
+        self.assertTrue(np.allclose(answer, lca.supply_array.data))
+
+    def test_circular_chains(self):
+        test_data = {
+            ("t", 1): {
+                'exchanges': [{
+                    'amount': 0.5,
+                    'input': ('t', 2),
+                    'type': 'technosphere',
+                    'uncertainty type': 0},
+                    {'amount': 1,
+                    'input': ('biosphere', 1),
+                    'type': 'biosphere',
+                    'uncertainty type': 0}],
+                'type': 'process',
+                'unit': 'kg'
+                },
+            ("t", 2): {
+                'exchanges': [{
+                    'amount': 0.1,
+                    'input': ('t', 1),
+                    'type': 'technosphere',
+                    'uncertainty type': 0}],
+                'type': 'process',
+                'unit': 'kg'
+                },
+            }
+        self.add_basic_biosphere()
+        test_db = Database("t")
+        test_db.register("Made for tests", ["biosphere"], 2)
+        test_db.write(test_data)
+        test_db.process()
+        lca = LCA({("t", 1): 1})
+        lca.lci()
+        answer = np.zeros((2,))
+        answer[lca.technosphere_dict[mapping[("t", 1)]]] = 20 / 19.
+        answer[lca.technosphere_dict[mapping[("t", 2)]]] = 10 / 19.
+        self.assertTrue(np.allclose(answer, lca.supply_array.data))
+
+    def test_dependent_databases(self):
+        pass
+
+    def test_demand_type(self):
+        with self.assertRaises(ValueError):
+            LCA(("foo", 1))
+        with self.assertRaises(ValueError):
+            LCA("foo")
+        with self.assertRaises(ValueError):
+            LCA([{"foo": 1}])
+
+    def test_decomposed_uses_solver(self):
+        test_data = {
+            ("t", 1): {
+                'exchanges': [{
+                    'amount': 0.5,
+                    'input': ('t', 2),
+                    'type': 'technosphere',
+                    'uncertainty type': 0},
+                    {'amount': 1,
+                    'input': ('biosphere', 1),
+                    'type': 'biosphere',
+                    'uncertainty type': 0}],
+                'type': 'process',
+                'unit': 'kg'
+                },
+            ("t", 2): {
+                'exchanges': [],
+                'type': 'process',
+                'unit': 'kg'
+                },
+            }
+        self.add_basic_biosphere()
+        test_db = Database("t")
+        test_db.register("Made for tests", ["biosphere"], 2)
+        test_db.write(test_data)
+        test_db.process()
+        lca = LCA({("t", 1): 1})
+        lca.lci(factorize=True)
+        # Indirect test because no easy way to test a function is called
+        lca.technosphere_matrix = None
+        self.assertEqual(float(lca.solve_linear_system().sum()), 1.5)

File docs/conf.py

 
 # General information about the project.
 project = u'Brightway2-calc'
-copyright = u'2012, Chris Mutel'
+copyright = u'2013, Chris Mutel'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
 # built documents.
 #
 # The short X.Y version.
-version = '0.8'
+version = '0.9'
 # The full version, including alpha/beta/rc tags.
-release = '0.8'
+release = '0.9.0-alpha'
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.

File docs/index.rst

 .. Brightway2-calc documentation master file, created by
    sphinx-quickstart on Mon Nov 19 09:21:19 2012.
 
-Welcome to Brightway2-calc's documentation!
-===========================================
+Brightway2-calc
+===============
 
-Contents:
+This is the technical documentation for Brightway2-calc, part of the `Brightway2 <http://brightwaylca.org>`_ life cycle assessment calculation framework. The following online resources are available:
+
+* `Source code <https://bitbucket.org/cmutel/brightway2-calc>`_
+* `Documentation on Read the Docs <http://bw2calc.readthedocs.org>`_
+* `Test coverage <http://coverage.brightwaylca.org/calc/index.html>`_
+
+Running tests
+-------------
+
+To run the tests, install `nose <https://nose.readthedocs.org/en/latest/>`_, and run ``nosetests``.
+
+Building the documentation
+--------------------------
+
+Install `sphinx <http://sphinx.pocoo.org/>`_, and then change to the ``docs`` directory, and run ``make html`` (or ``make.bat html`` in Windows).
+
+Table of Contents
+-----------------
 
 .. toctree::
    :maxdepth: 2
 
    technical
 
-
 Indices and tables
 ==================
 
 * :ref:`genindex`
 * :ref:`modindex`
 * :ref:`search`
-

File docs/technical.rst

 Static calculations
 ===================
 
+.. autoclass:: bw2calc.LCA
+    :members:
 
 Uncertainty analysis
 ====================
 
-
-Calculation classes
-===================
-
-.. autoclass:: bw2calc.LCA
-    :members:
-
-.. autoclass:: bw2calc.SimpleRegionalizedLCA
-    :members:
-
 .. autoclass:: bw2calc.MonteCarloLCA
     :members:
 
 .. autoclass:: bw2calc.ParallelMonteCarlo
     :members:
+
+Graph traversal
+===============
+
+.. autoclass:: bw2calc.GraphTraversal
+    :members:
+
+Regionalization
+===============
+
+.. autoclass:: bw2calc.SimpleRegionalizedLCA
+    :members:
 
 setup(
   name='bw2calc',
-  version="0.8.2",
+  version="0.9.0-alpha",
   packages=["bw2calc", "bw2calc.tests"],
   author="Chris Mutel",
   author_email="cmutel@gmail.com",