Commits

dan mackinlay  committed 3c121f2

purge all that cruft

  • Parent commits 6c45f3e

Files changed (81)

File overblown_version_for_reference/TODO.rst

-====
-TODO
-====
-
-Action plan 
-===========
-
-Sub-lists are in descending priority order.
-
-# Absolute top priority tasks of killing finality
-  
-  # work out why velocities all appear to be in the same quadrant for quite
-    high noise runs
-  # bootstrap everything
-  # keep trial stat arrays around in summaries so that R can do the
-    aggregation
-  # generate the final ultimate kernel density estimates of behaviours in R
-  # have a different number of samples per axis
-  
-# Faster simulation
-
-  # throw out some normalisations; just make sure that random vectors are on
-    average unit length.
-  # use gridding to bin cells into neighbourhoods (see the sketch after this
-    file)
-
-# wilder interpretations
-
-  # phase transition against the percent of distance each timestep goes to
-    randomness
-  # warp P1 axis into angular noise analog (find it analytically?)
-
-# Faster, more powerful MI
-
-  # use a priori binning
-  # worry about missing bins for, e.g. correctors
-  # go to CUDA for mi calcs? - http://users.cecs.anu.edu.au/~ramtin/cuda.htm
-  # use binless inference? (kernel density? spline? k-neighbour?)
-  # does a kd tree give me MI for free?
-  # use Cython to wrap `NSB_entropy <http://nsb-entropy.sourceforge.net/>`__
-    properly instead of calling as a subprocess
-  
-# Better stats
-  
-  # 2d graphs throughout for gridded sims
-  # use bootstrap estimator
-  # make sure those "Warning: Null output conditional ensemble for output
-    foo" don't reflect bad params
-  # evaluate using velocity deltas
-  # test on other phase transition data
-  # consider stationary vs non-stationary
-  # check susc. stat. Doesn't seem to be the same as order std dev.
-  # consider ergodic walks and independence
-  
-# Better sampler
-
-  # importance sampling
-  # persist job management state across crashes in the picloud dispatcher.
-  
-# Better visualisation
-
-  # animation of acting agents
-  
-    # as slices of a 2d/3d thing
-    # as `a 3d animation <http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html#animating-the-data>`_ (`see also <http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/auto/mlab_other_functions.html#animate>`_)
-  # mark lines for B&W using `plot <http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.plot>`_
-
-# experiments to do
-
-  # higher dimensional behaviour
-  # time window sweep
-  # small-number-of-agents analysis
-
-Axis interpretation
-===================
-
-* valuations of equities
-* buyin' and sellin' agents
-* equities (must conserve)
-* total market value.
-* think about stock market in terms of axes and dimensions - equities are
-  close if they have similar historical price movements - that is, how good
-  they have been as predictors of each other in the past - rather than
-  current price value. and price changes come from stock movements. How
-  would one link equities and traders in this fashion?
-
-Pie-in-the-sky
-==============
-
-* explore alternative distance measures that allow similarity in a small
-  number of dimensions (cosine distance? correlation distance? extended
-  Jaccard distance?)
-* scale number of agents to keep Pi_4 constant in higher dimensions
-* add option to remove self-adjacency from calculations at all stages
-  and explore consequences
-* sanity-check Minkowski distance
-* handle more boundary constraints
-  * esp. bounded-below-only boundary conditions
-* calculate prices using demand curve
-* integrate Sumatra
-* integrate Ruffus
-* better test coverage
-* document for posterity
-* contribute back to pyentropy
-* export to a gephi animated graph
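
The gridding item under "Faster simulation" above is standard spatial hashing; a minimal sketch (hypothetical names, assuming agent locations in a 2-D numpy array):

    import numpy as np

    def grid_neighbourhoods(locs, cell_size):
        """Bin each agent into an integer grid cell, so a neighbour query
        only inspects the 3**d adjacent cells instead of all pairs."""
        cells = {}
        keys = np.floor(locs / cell_size).astype(int)
        for i, key in enumerate(map(tuple, keys)):
            cells.setdefault(key, []).append(i)
        return cells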

File overblown_version_for_reference/__init__.py

Empty file removed.

File overblown_version_for_reference/bin/run_script_with_args.sh

-#!/bin/sh
-# Because OpenPBS is fucked, it cannot pass arguments to scripts, only set
-# environment variables. So we rebuild this sophisticated functionality with
-# a tottering monolith of subtle code, to show how it might be accomplished
-# despite the fierce difficulty.
-#
-# Usage:
-# qsub run_script_with_args.sh -v 'PBS_COMMAND=ls -l'
-exec $PBS_COMMAND

File overblown_version_for_reference/conversion.py

-#!/usr/bin/env python
-# encoding: utf-8
-"""
-conversion.py
-
-IO routines. Only CSV IO at the moment, for Terry's convenience.
-
-Created by dan mackinlay on 2010-10-28.
-Copyright (c) 2010 __MyCompanyName__. All rights reserved.
-"""
-
-import numpy as np
-
-def terry_summary(ts):
-    # _axes = {0:'x', 1:'y'}
-    _stats = {'min': np.min, 'max': np.max, 'median': np.median}
-    _members = {'vels': ts.raw_output_vels, 'locs': ts.raw_output_locs}
-    for stat_name, stat_fn in _stats.iteritems():
-        for dataset_name, dataset_val in _members.iteritems():
-            fname = '-'.join([
-              'vicsek', dataset_name, stat_name
-            ]) + '.csv.gz'
-            content = np.apply_along_axis(
-              stat_fn, 1,
-              dataset_val
-            )
-            np.savetxt(
-              fname, content, fmt='%0.7f', delimiter=',')
-
-
-if __name__ == '__main__':
-    # no main() is defined in this module; terry_summary() is driven externally
-    pass
-
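
For reference, terry_summary expects any object exposing raw_output_vels and raw_output_locs as 2-D arrays (one row per trial); a hypothetical invocation:

    import numpy as np

    class FakeTrialSet(object):
        """Stand-in for the real trial-summary object (hypothetical)."""
        def __init__(self):
            self.raw_output_vels = np.random.randn(10, 100)
            self.raw_output_locs = np.random.rand(10, 100)

    # writes vicsek-vels-min.csv.gz, vicsek-locs-median.csv.gz, etc.
    terry_summary(FakeTrialSet())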

File overblown_version_for_reference/counting.py

-#!/usr/bin/env python
-# encoding: utf-8
-"""
-counting.py
-
-Utils for iterating through sets in useful ways for me - for example, grid
-sampling the most important points first."""
-
-import math
-
-def cantor_count(i):
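-    # Map the non-negative integers onto the dyadic rationals in [0, 1],
-    # coarsest level first: 0, 1, 1/2, 1/4, 3/4, 1/8, 3/8, ...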
-    if i in (0,1):
-        return float(i)
-    elif i==2:
-        return 0.5
-    else:
-        exponent = rooter(i)
-        offset = 1 + 2*(i-(2**(exponent-1)+1))
-        return offset * 2.0**(-exponent)
-    
-def rooter(x):                    
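-    # smallest integer exponent e with 2**e >= x, nudged to dodge
-    # floating-point error in the log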
-    return int(math.ceil(math.log(x)/math.log(2)-0.0000001))
-
-def spiral_count(i, dimensions):
-    # this can be defined recursively in terms of lower-dimensional versions of itself.
-    pass
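
A quick check of what cantor_count does, enumerating the dyadic points of [0, 1] coarsest-first so a sweep hits the most informative grid points early:

    >>> [cantor_count(i) for i in range(9)]
    [0.0, 1.0, 0.5, 0.25, 0.75, 0.125, 0.375, 0.625, 0.875]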

File overblown_version_for_reference/dispatch.py

-#!/usr/bin/env python
-# encoding: utf-8
-"""
-dispatch.py
-
-Created by dan mackinlay on 2011-05-05.
-Copyright (c) 2011 __MyCompanyName__. All rights reserved.
-
-Handles concurrency management
-"""
-
-def DispatcherFactory(concurrency="none", *args, **kwargs):
-    # we do dynamic importing because even importing picloud makes annoying
-    # error messages appear in unrelated tasks because it is as obtrusive
-    # as fuck.
-    if concurrency == "picloud":
-        from dispatchers_picloud import PiCloudDispatcher
-        Dispatcher = PiCloudDispatcher
-    elif concurrency == "picloudlocal":
-        from dispatchers_picloud import PiCloudLocalDispatcher
-        Dispatcher = PiCloudLocalDispatcher
-    elif concurrency == "local":
-        from dispatchers_basic import LocalDispatcher
-        Dispatcher = LocalDispatcher
-    elif concurrency == "pbs":
-        from dispatchers_basic import PbsDispatcher
-        Dispatcher = PbsDispatcher
-    else:
-        from dispatchers_basic import InProcessDispatcher
-        Dispatcher = InProcessDispatcher
-    return Dispatcher(*args, **kwargs)
-
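
A hypothetical call, using the dispatcher signatures from dispatchers_basic.py (my_warehouse, run_one_trial and param_grid stand in for whatever the experiment provides):

    dispatcher = DispatcherFactory(
        concurrency="local",  # or "picloud", "picloudlocal", "pbs"; anything else runs in-process
        processes=4,          # only meaningful for the pool-backed dispatchers
        warehouse=my_warehouse,
        task=run_one_trial,   # a callable accepting box=... plus the sweep params
    )
    for box in dispatcher.batch_dispatch(param_grid):
        print box._basename  # each finished Box has already been store()d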

File overblown_version_for_reference/dispatchers_basic.py

-import multiprocessing
-import time
-from path import path
-import settings
-from warehouse import Box, digest
-import warnings
-
-def safe_call(fn, box, *args, **kwargs):
-    try:
-         return fn(box=box, *args, **kwargs)
-    except Exception, e:
-         # convert stats errors into warnings, because weird errors raised at run time poison concurrency
-         warnings.warn(str(e))
-         return box
-
-class InProcessDispatcher(object):
-    """We give dispatchers responsiblity for the warehouse results are kept in
-    and also the actual process to run.
-    
-    The parent implementation does not actually multiprocess. Rather, we do
-    calculations in *this* process, in *this* thread, so that step debugging
-    and exception handling work properly."""
-    
-    def __init__(self,
-            warehouse,
-            task,
-            identifier="",
-            *args, **kwargs):
-        self.identifier = identifier
-        self.warehouse = warehouse
-        self.task = task
-        self.outbox = {} #dict mapping arg hash to result box
-        self.inbox = {} #dict mapping arg hash to result box
-        #This would be a logical place to resurrect dispatchers from previous
-        #experiment runs in order to resume state.
-        
-    def batch_dispatch(self, param_iterable):
-        """
-        The main method. Iterate over this to get a lot of stuff done at once.
-        """
-        for params in param_iterable:
-            self._maybe_sow(**params)
-        self._tend()
-        for box in self._reap_all():
-            yield box
-    
-    def _sow(self, box, **params):
-        """Queue, or process, one job, providing a Box to stash results in."""
-        box.store()
-        box.clear_cache()
-        self.task(box=box, **params)
-        self.outbox[box._basename] = box
-        
-    def _maybe_sow(self, box=None, **params):
-        """Queue, or process, one job, if it has not been done already"""
-        #set the force_update flag to indicate that we should push ahead even
-        # if the box already has content
-        force_update = True
-        if box is None:
-            box = self.warehouse.Box(**params)
-            force_update = False
-        
-        #Now, if this has successfully completed before, we 
-        #are done. ("Slack memoization")
-        if force_update or len(box)==0:
-            #No data. queue for processing
-            self._sow(box=box, **params)
-        else:
-            #skip straight to the chase
-            self.outbox[box._basename] = box
-        return box._basename
-    
-    def _tend(self):
-        """do whatever massaging is necessary to get the queued jobs into the
-        outbox"""
-        pass
-        
-    def _reap_all(self):
-        """get all results back"""
-        for k in self.outbox.keys():
-            box = self.outbox.pop(k)
-            box.store()
-            yield box
-    
-    def dispatch(self, more_params, **params):
-        """helper to to one-off dispatches."""
-        params.update(more_params)
-        return self.batch_dispatch([params])
-
-
-class LocalDispatcher(InProcessDispatcher):
-    """Native local dispatch with a global process pool.
-    
-    Easier to debug than the picloud one, as it has better docs and fewer
-    inscrutable external config options intervening in its behaviour"""
-    def __init__(self, processes=2, *args, **kwargs):
-        self.in_process = {} #dict mapping arg hash to jid
-        self.pool = multiprocessing.Pool(processes=processes)
-        super(LocalDispatcher, self).__init__(*args, **kwargs)
-        self.outqueue = multiprocessing.JoinableQueue()
-            
-    def _sow(self, box, **params):
-        """Queue, or process, one job, providing a Box to stash results
-        in."""
-        self.outbox[box._basename] = self.pool.apply_async(
-          safe_call,
-          args=[self.task, box],
-          kwds=params)
-            
-    def _reap_all(self):
-        """get all results back. This is awful, coz of serialization woes
-        getting in the way of the usual machinery. """
-        print "kv", self.outbox
-        while self.outbox:
-            ready = []
-            finished = []
-            for k, v in self.outbox.iteritems():
-                #why do i end up with both Boxes and AsyncResults in the queue? Handle both here in any case.
-                if hasattr(v, 'ready') and v.ready():
-                    ready.append(k)
-                if isinstance(v, Box):
-                    finished.append(k)
-            for k in ready:
-                yield self.outbox.pop(k).get(0).store()
-            for k in finished:
-                yield self.outbox.pop(k).store()
-            time.sleep(10)
-
-          
-class PbsDispatcher(InProcessDispatcher):
-    """PBS cli dispatch, with all single-proc and concurrency handled by the server
-    """
-    def __init__(self, *args, **qsub_kwargs):
-        import pbs
-        qsub_args = dict(getattr(settings, 'BASE_QSUB_ARGS', {}))
-        qsub_args.update(qsub_kwargs)
-        self.qsub = pbs.Qsub(**qsub_args)
-        self.in_process = {} #dict mapping arg hash to jid
-        super(PbsDispatcher, self).__init__(*args, **qsub_kwargs)
-        
-    def _sow(self, box, **params):
-        """Queue, or process, one job, providing a Box to stash results
-        in."""
-        import pickle
-        
-        settings.TEMP_ARGS_PATH.makedirs()
-        file_prefix = settings.TEMP_ARGS_PATH.abspath()/box._basename
-        task_file = file_prefix + '.task.pickle'
-        box_file = file_prefix + '.box.pickle'
-        kwargs_file = file_prefix + '.kwargs.pickle'
-        pickle.dump(self.task, open(task_file, 'wb'))
-        pickle.dump(box, open(box_file, 'wb'))
-        pickle.dump(params, open(kwargs_file, 'wb'))
-        
-        full_script_command = [str(path('./main.py').realpath().abspath())]
-        full_script_command.append(str(task_file))
-        full_script_command.append(str(box_file))
-        full_script_command.append(str(kwargs_file))
-        
-        self.outbox[box._basename] = self.qsub(full_script_command)
-        
-    def _reap_all(self):
-        """get all results back. This is not really implemented, coz i can;t
-        be bothered writing a real API to parse the OpenPBS CLI scripts'
-        output robustly"""
-        print "kv", self.outbox
-        # while self.outbox:
-        #     ready = []
-        #     finished = []

File overblown_version_for_reference/dispatchers_picloud.py

-import cloud
-from dispatchers_basic import InProcessDispatcher
-
-class PiCloudDispatcher(InProcessDispatcher):
-    """send this off to picloud. Note that these subclasses do job dispatch
-    via the global cloud namespace, which makes it unsafe to initialise two at
-    once with different localness.
-
-    TODO: store jid dispatch state (parameter hash -> jids) on disk to
-    allow recall should networks be disrupted and the jobs killed (in 4 jobs
-    on PiCloud so far, this has happened to me in all of them); see the
-    sketch after this file."""
-    def __init__(self, *args, **kwargs):
-        self.in_cloud = {} #dict mapping arg hash to jid
-        super(PiCloudDispatcher, self).__init__(*args, **kwargs)
-    
-    def _sow(self, box, **params):
-        """We keep a list of submitted jobs here, so that later we may
-        implement graceful recovery from the inevitable hangs"""
-        self.in_cloud[box._basename] = cloud.call(
-              self.task, box=box,
-              _high_cpu=True,
-              _label = ".".join(self.identifier),
-              **params)
-    
-    def _tend(self):
-        """block until results are ready? I strongly suspect that doing
-        serialising here rather than in _sow is making nasty things happen,
-        such as failing to write to disk."""
-        for box in cloud.iresult(self.in_cloud.values()):
-            box.store()
-            box.clear_cache()
-            self.outbox[box._basename] = box
-            del(self.in_cloud[box._basename])
-
-class PiCloudLocalDispatcher(PiCloudDispatcher):
-    """multiprocess this locally, using the picloud "simulator" API.
-    """
-    def __init__(self, processes=2, *args, **kwargs):
-        super(PiCloudLocalDispatcher, self).__init__(*args, **kwargs)
-        cloud.config.redirect_job_output=False
-        cloud.config.num_procs=processes
-        cloud.config.max_transmit_data=10**8
-        cloud.config.commit()
-        cloud.start_simulator()
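
The jid-persistence TODO above could be as small as a pickle round-trip; a sketch, assuming the dispatcher is given some writable state_path (a hypothetical attribute):

    import os
    import pickle

    def save_jids(self):
        # snapshot the parameter-hash -> jid map after every _sow
        with open(self.state_path, 'wb') as f:
            pickle.dump(self.in_cloud, f)

    def load_jids(self):
        # on restart, pick up any jobs that survived the disruption
        if os.path.exists(self.state_path):
            with open(self.state_path, 'rb') as f:
                self.in_cloud.update(pickle.load(f))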

File overblown_version_for_reference/envs/__init__.py

Empty file removed.

File overblown_version_for_reference/envs/visualization_2d.py

-import os
-import traders
-import warehouse
-import settings
-import plots
-import experiments
-
-from experiments import get_trial_warehouse, get_reduced_experiment
-
-print """Need some experiment data to work with? try
->>> w=get_trial_warehouse('foo')
-Or for the aggregated data:
->>> t=get_reduced_experiment('foo')"""

File overblown_version_for_reference/envs/visualization_3d.py

-import os
-import traders
-import warehouse
-import settings
-import plots
-import experiments
-from experiments import get_trial_warehouse, get_reduced_experiment
-from mayavi import mlab
-
-print """Need some experiment data to work with? try
-Try the aggregated data:
->>> t=get_reduced_experiment('animate_me_3d')
-Or the raw stuff:
->>> w=get_trial_warehouse('animate_me_3d')
->>> t=w.find().next()
->>> ts=t.traderset
->>> v=mlab.quiver3d(*plots.quiv_components(ts))
->>> mlab.clf()
->>> for i, b in enumerate(w.find()):
->>>    plots.indexed_animated_plot_3d(b.traderset, i, size=(512,512))
-"""

File overblown_version_for_reference/examples/Find_MI_relations.m

-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-% Find_MI_relations.m
-%
-% Find_MI_1.0 - Extract mutual information relations from finite data, version 1.0.
-% Copyright (C) Dec. 2004 Noam Slonim, Gurinder S. Atwal, Gasper Tkacik, and William Bialek
-%
-%    This program is free software; you can redistribute it and/or modify
-%    it under the terms of the GNU General Public License as published by
-%    the Free Software Foundation; either version 2 of the License, or
-%    (at your option) any later version.
-%
-%    This program is distributed in the hope that it will be useful,
-%    but WITHOUT ANY WARRANTY; without even the implied warranty of
-%    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-%    GNU General Public License for more details.
-%
-%    You should have received a copy of the GNU General Public License
-%    along with this program; if not, write to the Free Software
-%    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-%%%%%%%%%%%%%%%%
-% Introduction %
-%%%%%%%%%%%%%%%%
-%
-% This code implements the mutual information estimation procedure used by
-% Slonim et al. in the paper entitled "Information based clustering", 2005.
-% Given a set of patterns, represented as the rows of the input matrix M,
-% we estimate the mutual information between every pair of patterns. The
-% idea behind the algorithm is based on the "direct" estimation method,
-% originally developed by Strong et al., PRL, 1998, for the analysis of
-% neural coding. This method is extended here so it can be equally applied
-% to general data. See "Estimating mutual information and multi information
-% in large networks", Slonim et al., for the specific details of the
-% implementation.
-% 
-%%%%%%%%%
-% INPUT %
-%%%%%%%%%
-%
-% M: A matrix in which every ROW corresponds to some data pattern, e.g., the expression
-% profile of a single gene under many conditions. 
-%
-% b_star: A scalar which sets the maximal quantization level. The default is floor(.5*sqrt(Ns)), where Ns is
-% the sample size, i.e., the number of columns in M. 
-%
-% RandMode: A scalar flag. When it is ON (i.e., not zero) the data is scrambled to remove any
-% potential real correlations. In this case the resulting information
-% relations should be around zero. This mode is useful to determine whether
-% b_star is not too high and the results are not overestimated. The
-% default is 0. 
-%
-% SampSizePerc: A row vector which determines the sub-sample sizes used for the extrapolation
-% curve. The default is [.7 .7875 .9 1]. See the corresponding paper for
-% the reasoning behind these numbers. 
-% 
-% Trials: A row vector which sets the number of times we estimate the information in each
-% sub-sample size for the extrapolation curve. The default is [21 16 12 1]. 
-% See the corresponding paper for the reasoning behind these numbers.
-% Notice that the dimensions of this argument must be identical to the
-% dimensions of SampSizePerc.
-%
-% MISS_VAL: a scalar, typically inf, which marks missing values in the
-% matrix M. For every pair of rows in M, only values which are not MISS_VAL
-% in BOTH rows will be used for the estimation. The default is inf. 
-% 
-% MinSample: A scalar which sets the minimal sample size for estimation.
-% For pairs with a joint sample size which is smaller than MinSample, no
-% estimation is performed. The default is 100.
-%
-%%%%%%%%%%%%%%%%%%%%%%%%%
-% Command line examples %
-%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-% (1) [Res] = Find_MI_relations (M,[],[],[],[],[],[]);
-% (2) [Res] = Find_MI_relations (M,5,[],[],[],[],[]);
-% (3) [Res] = Find_MI_relations (M,5,0,[],[],[],[]);
-% (4) [Res] = Find_MI_relations (M,5,0,[.7 .7875 .9 1],[],[],[]);
-% (5) [Res] = Find_MI_relations (M,5,1,[.7 .7875 .9 1],[21 16 12 1],[],[]);
-% (6) [Res] = Find_MI_relations (M,5,0,[.7 .7875 .9 1],[21 16 12 1],inf,[]);
-% (7) [Res] = Find_MI_relations (M,5,0,[.7 .7875 .9 1],[21 16 12 1],inf,200);
-%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-%%%%%%%%%%
-% OUTPUT %
-%%%%%%%%%%
-%
-% Res.I: Obtained pairwise mutual information relations for all patterns in
-% M. This is an N x N matrix where N is the number of rows in M.
-%
-% Res.SampSize: Indicates the joint sample size for every pair (again, 
-% an N x N matrix). 
-%
-% Res.prm: Values of the parameters used throughout the estimation. 
-%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-function [Res] = Find_MI_relations (M,b_star,RandMode,SampSizePerc,Trials,MISS_VAL,MinSample)
-
-Res = InitRes (M,b_star,RandMode,SampSizePerc,Trials,MISS_VAL,MinSample);
-
-Res = QuantizeM (M,Res); 
-
-for i=1:Res.prm.N-1    
-    for j=i+1:Res.prm.N
-        Res = FindPairInfo (i,j,Res);        
-    end
-    if mod(i,10)==0,
-        fprintf ('First %d rows are done...\n',i);
-    end    
-end
-
-Res = rmfield (Res,'GoodInds');
-Res = rmfield (Res,'QM');    
-Res.prm.RunEnd = datestr(now);
-
-return
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-function Res = InitRes (M,b_star,RandMode,SampSizePerc,Trials,MISS_VAL,MinSample)
-
-Res.prm.RandMode = RandMode;
-if RandMode
-    Res.WARNING = 'RAND MODE ON';
-end
-
-Res.prm.N = size(M,1);
-Res.prm.Ns = size(M,2);
-
-if any(MinSample)
-    Res.prm.MinSample = MinSample;
-else 
-    Res.prm.MinSample = 100;
-end
-if any(SampSizePerc)
-    Res.prm.SampleSizesPerc = SampSizePerc;
-else
-    Res.prm.SampleSizesPerc = [.7 .7875 .9 1];
-end
-if any(Trials)
-    Res.prm.Trials = Trials;
-else
-    Res.prm.Trials = [21 16 12 1];
-end
-if any(b_star)
-    Res.prm.b_star = b_star;
-else
-    Res.prm.b_star = floor(.5*sqrt(Res.prm.Ns));  % Notice - this default value assumes no missing-values in M.
-end
-if any(MISS_VAL)    
-    if any(find(ismember(1:Res.prm.b_star,MISS_VAL)))
-        error (sprintf('Please use a MISS_VAL value which is not among 1,2,...,%d',Res.prm.b_star));
-    end
-    Res.prm.MISS_VAL = MISS_VAL;
-else
-    Res.prm.MISS_VAL = inf;
-end
-
-% Default parameters
-Res.prm.RunSeed = 0;
-rand ('state',Res.prm.RunSeed);
-Res.prm.RunStart = datestr(now);
-Res.prm.Documentation = 'Produced by Find_MI_relations Version 1.0';
-
-Res.I = zeros(Res.prm.N);
-Res.SampSize = zeros(Res.prm.N);
-
-for n=1:Res.prm.N
-    Res.GoodInds{n} = find(M(n,:)~=Res.prm.MISS_VAL);
-end
-
-return
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-function Res = QuantizeM (M,Res)
-
-for binnum = 2 : Res.prm.b_star
-    fprintf ('Quantize in advance each of the %d rows into %d bins ...\n',Res.prm.N,binnum);
-    tmpQM = zeros(size(M));
-    for n = 1 : Res.prm.N
-        good_inds = Res.GoodInds{n}; 
-        v = M(n,good_inds);
-        [qv,cutoff] = MaxEntQuantize_v (v,binnum);
-        v = M(n,:);
-        v(good_inds) = qv;
-        tmpQM(n,:) = v; % A quantized version of v into binnum bins, with the MISSING VALS INCLUDED
-    end
-    Res.QM{binnum} = tmpQM;
-end
-
-return
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-function Res = FindPairInfo (ind1,ind2,Res)        
-
-b_star = Res.prm.b_star;
-SampleSizesPerc = Res.prm.SampleSizesPerc;
-Trials = Res.prm.Trials;
-MISS_VAL = Res.prm.MISS_VAL;
-
-EstI = ones(1,b_star)*(-inf);
-EstI_std = zeros(1,b_star);
-BestEstI = 0;
-BestEstI_binnum = 0;
-
-% Find joint indices which are not missing-values
-qv1 = Res.QM{2}(ind1,:);    
-good_inds1 = Res.GoodInds{ind1};    
-qv2 = Res.QM{2}(ind2,:);        
-qv2 = qv2(good_inds1);    
-good_inds2 = find(qv2 ~= MISS_VAL);    
-good_inds = good_inds1(good_inds2);
-sampsize = length(good_inds);
-Res.SampSize(ind1,ind2) = sampsize;
-Res.SampSize(ind2,ind1) = sampsize;
-
-if sampsize < Res.prm.MinSample % Too small sample size
-    Res.I(ind1,ind2) = -inf;
-    Res.I(ind2,ind1) = -inf;
-    return
-end
-
-for binnum = 2 : b_star
-
-    qv1 = Res.QM{binnum}(ind1,:);
-    qv2 = Res.QM{binnum}(ind2,:);
-
-    % Leave only elements for which both v1 and v2 do NOT have a missing value
-    qv1 = qv1(good_inds);
-    qv2 = qv2(good_inds);
-    
-    if Res.prm.RandMode
-        qv_length = length(qv1);
-        perm = randperm(qv_length);
-        qv2 = qv2(perm);
-    end
-    
-    [Info{binnum}] = DirectFindInfo (qv1,qv2,binnum,binnum,SampleSizesPerc,Trials);
-
-    EstI (binnum) = Info{binnum}.EstI;
-    EstI_std (binnum) = Info{binnum}.EstI_std;
-    min_tmp_EstI = EstI(binnum) - .5*EstI_std(binnum); % reduce the error-bar from the current estimate
-    max_prev_EstI = max ( EstI(1:binnum-1) + .5*EstI_std(1:binnum-1) ); % max of previous values + their errorbars
-    if min_tmp_EstI > max_prev_EstI % improvement beyond the errorbars
-        BestEstI = EstI (binnum);
-        BestEstI_binnum = binnum;
-    end
-    
-end
-
-Res.I(ind1,ind2) = BestEstI;
-Res.I(ind2,ind1) = BestEstI;
-
-return
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-% Quantize v into binnum bins which are -- more or less -- equally
-% populated.
-function [qv,cutoff] = MaxEntQuantize_v (v,binnum)
-
-if isempty(v) % only missing values 
-    qv = [];
-    cutoff = [];
-    return
-end
-
-qv = v;
-[sv sv_inds] = sort(v);
-group_size1 = floor(length(v)/binnum);
-group_size2 = ceil(length(v)/binnum);   
-extra = length(v) - binnum*group_size1;
-cutoff = zeros(1,binnum-1);
-
-% Randomly choose the extra bins that will each get one more point
-bin_group_size = rand(1,binnum);
-[tmp sinds] = sort(bin_group_size);
-bin_group_size(sinds(1:(binnum-extra)))=1;
-bin_group_size(sinds(end-extra+1:end))=0;
-
-cutoff_ind = 0;
-for b=1:binnum-1
-    if bin_group_size(b)
-        cutoff_ind = cutoff_ind + group_size1;
-    else
-        cutoff_ind = cutoff_ind + group_size2;
-    end
-    cutoff(b) = sv(cutoff_ind);
-    if b==1
-        b_inds = find( v <= cutoff(b) );
-    else
-        b_inds = find( (v > cutoff(b-1)) & (v <= cutoff(b)) );
-    end
-    qv(b_inds) = b;
-end
-b_inds = find(v > cutoff(binnum-1));
-qv(b_inds) = binnum;
-
-return
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-% This generic function is used to estimate the Mutual Information (MI) between
-% two already QUANTIZED vectors of the same dimension, via the direct
-% estimation method. 
-function [InfoOut] = DirectFindInfo (qv1,qv2,D1,D2,SampleSizesPerc,Trials)
-
-FullSample = length(qv1);
-SampleSizes = ceil(SampleSizesPerc*FullSample);
-N = length(SampleSizes);
-Ivals = zeros(N,max(Trials));
-meanIvals = zeros(N,1);
-
-for n=1:N % Loop over all the different Sample Sizes     
-    tmpSampleSize = SampleSizes(n);    
-    tmpv_leng = tmpSampleSize;            
-    for t=1:Trials(n) % Loop over all the different trials for a specific sample size    
-        if tmpSampleSize < FullSample % Choose a sample of v1 and v2 values
-            perm = randperm(FullSample);             
-            inds = perm(1:tmpSampleSize);            
-            tmpqv1 = qv1(inds);            
-            tmpqv2 = qv2(inds);            
-        else
-            tmpqv1 = qv1; 
-            tmpqv2 = qv2;
-        end            
-        tmpCounts = ConstructCountMatrix (tmpqv1,tmpqv2,D1,D2); % Construct a count matrix out of the quantized vectors                
-        Ivals(n,t) = FindInfoLocalMI (tmpCounts); % Find EMPIRICAL MI for the sample taken from v1 and v2        
-    end % over Trials
-    meanIvals(n) = mean(Ivals(n,1:Trials(n)));
-end % over different Sample Sizes
-
-% The following 4 code lines were commented out as the Matlab function 'fit' is not always available;
-% (the 2 lines below are equivalent without using 'fit')
-% [FitModel,Goodness,Output] = fit (1./SampleSizes',meanIvals,'poly1'); % linear extrapolation
-% Extrap_I = FitModel.p2; % value of the linear fit when x=0 (i.e., where the sample size goes to \infty).
-% InfoOut.FitModel = FitModel;
-% InfoOut.Goodness = Goodness;
-[a,b] = fitSimpleLinearModel (1./SampleSizes',meanIvals);
-Extrap_I = b;
-
-InfoOut.Extrap_I = Extrap_I;
-InfoOut.EstI = mean(Extrap_I);
-[tmp MinSampSizeInd] = min(SampleSizes); % This should usually be equal to 1, but just in case.
-InfoOut.EstI_std = std( Ivals(MinSampSizeInd,:) ); % take the std of the I vals obtained for the minimal sample size.
-
-return
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-function Counts = ConstructCountMatrix (qv1,qv2,D1,D2)
-
-Counts = zeros(D1,D2);        
-for l1=1:D1
-    inds_l1 = find(qv1==l1);
-    for l2=1:D2
-        inds_l1l2 = find(qv2(inds_l1)==l2);
-        Counts(l1,l2) = length(inds_l1l2);
-    end 
-end  
-
-return
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-function MI = FindInfoLocalMI (tmpC)
-
-tmpC = tmpC+eps;
-sumC = sum(sum(tmpC));
-Pxy = tmpC./sumC;
-Px=sum(Pxy');
-Hx=-sum(Px.*LocalLog(Px));
-Py=sum(Pxy);
-Hy=-sum(Py.*LocalLog(Py));
-
-MI = Hx + Hy + sum(sum(Pxy.*LocalLog(Pxy)));
-
-return
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-function out = LocalLog (inp)
-
-out = log2(inp);
-
-return
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-function [a,b] = fitSimpleLinearModel (x, y) 
-
-N = length(x);
-
-sx2 = sum(x.*x);
-sy2 = sum(y.*y);
-sxy = sum(x.*y);
-
-xav = mean(x);
-yav = mean(y);
- 
-ssxx = sx2 - N * xav * xav;
-ssyy = sy2 - N * yav * yav;  
-ssxy = sxy - N * xav * yav;
-
-a = ssxy / ssxx; % slope
-b = yav - a * xav; % shift from 0 on the y axis
-
-return
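
The extrapolation at the heart of DirectFindInfo translates compactly into numpy: estimate MI on random sub-samples of several sizes, average over trials, fit the means linearly against 1/N, and keep the intercept as the infinite-sample limit. A sketch only (mi_fn is any plug-in estimator taking two quantized numpy vectors, e.g. the one after mi.py below):

    import numpy as np

    def extrapolated_mi(qv1, qv2, mi_fn,
                        fracs=(.7, .7875, .9, 1.0), trials=(21, 16, 12, 1)):
        n = len(qv1)
        sizes = np.ceil(np.asarray(fracs) * n).astype(int)
        means = []
        for size, t in zip(sizes, trials):
            vals = []
            for _ in range(t):
                idx = np.random.permutation(n)[:size]
                vals.append(mi_fn(qv1[idx], qv2[idx]))
            means.append(np.mean(vals))
        # linear fit of mean MI against 1/N; the intercept is the N -> inf limit
        slope, intercept = np.polyfit(1.0 / sizes, means, 1)
        return intercept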

File overblown_version_for_reference/examples/__init__.py

-#!/usr/bin/env python
-# encoding: utf-8
-"""
-__init__.py
-
-Created by dan mackinlay on 2010-10-25.
-
-This package is a bunch of MI calculation examples
-"""

File overblown_version_for_reference/examples/mi.py

-import numpy
-
-# If you have an array of integers and the max value isn't too large you can
-# use numpy.bincount (here `data` is your 1-D integer array):
-
-hist = dict((key, val) for key, val in enumerate(numpy.bincount(data)) if val)
-
-#Edit: If you have float data, or data spread over a huge range you can convert it to integers by doing:
-
-bins = numpy.unique(data)
-bincounts = numpy.bincount(numpy.digitize(data, bins) - 1)
-hist = dict(zip(bins, bincounts))
-
-# numpy.histogram(a, bins=2, range=(0,1)):
-
-#see http://projects.scipy.org/scipy/browser/trunk/scipy/ndimage/registration.py?rev=3896
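
Rounding these out, a plug-in MI estimate (in bits) from a 2-D count matrix, mirroring FindInfoLocalMI in Find_MI_relations.m above; a sketch only:

    import numpy

    def mi_from_counts(counts):
        # joint distribution from the contingency table
        pxy = counts / float(counts.sum())
        px = pxy.sum(axis=1)
        py = pxy.sum(axis=0)
        nz = pxy > 0  # skip empty cells rather than adding eps
        return (pxy[nz] * numpy.log2(pxy[nz] / numpy.outer(px, py)[nz])).sum()

Fed with e.g. numpy.histogram2d(x, y, bins=b)[0], this gives the empirical MI that the extrapolation sketch above consumes.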

File overblown_version_for_reference/examples/mst.py

-"""
-Gaël Varoquaux' MST code from http://gael-varoquaux.info/blog/?p=100 and
-http://gael-varoquaux.info/blog/wp-content/uploads/2009/05/mst_py
-
-An example illustrating graph manipulation and display with Mayavi.
-
-Starting from random points positions around a plane, we first use VTK
-to create the Delaunay graph, and also to plot it. We then create a
-minimum spanning tree of the data with Prim's algorithm. We display it by
-adding connections to Mayavi points.
-
-The visualization clearly shows that the minimum spanning tree of the
-points, considering all possible connections, is included in the Delaunay
-graph.
-"""
-# Author: Gary Ruben
-#         Gael Varoquaux <gael dot varoquaux at normalesup dot org>
-# Copyright (c) 2009, Enthought, Inc.
-# License: BSD style.
-
-from enthought.mayavi import mlab
-import numpy as np
-
-def compute_delaunay_edges(xyz, visualize=False):
-    """ Given 3-D points, returns the edges of their 
-        Delaunay triangulation.
-
-        Parameters
-        -----------
-        xyz: ndarray
-            3 rows x N columns of x, y, z coordinates of the points
-
-        Returns
-        ---------  
-        edges: 2D ndarray. 
-            The indices of the edges of the Delaunay triangulation as a 
-            (2, N) array [[pair1_index1, pair2_index1, ...],
-                          [pair1_index2, pair2_index2, .. ]]
-    """
-    x, y, z = xyz
-    if visualize:
-        vtk_source =  mlab.pipeline.scalar_scatter(x, y, z, z,
-                            name='Delaunay graph')
-    else:
-        vtk_source =  mlab.pipeline.scalar_scatter(x, y, z, figure=False)
-    delaunay =  mlab.pipeline.delaunay3d(vtk_source)
-    edges = mlab.pipeline.extract_edges(delaunay)
-    if visualize:
-        mlab.pipeline.surface(mlab.pipeline.tube(edges,
-                                        tube_radius=0.001), 
-                colormap='gist_yarg',
-                vmin=-0.5*z.ptp(),
-                vmax=2*z.ptp(),
-                opacity=0.3)
-    lines = edges.outputs[0].lines.to_array()
-    return np.array([lines[1::3], lines[2::3]])
-
-
-def minimum_spanning_tree(adjacency):
-    """ Form a Minimum Spanning Tree using Prim's algorithm.
-    """
-    edge_list = list()
-    adjacency = adjacency.copy()
-    adjacency[adjacency == 0] = np.inf
-    nb_vertices = adjacency.shape[0]
-    indices = np.arange(nb_vertices)
-    unselected = np.ones(nb_vertices, np.bool)
-    unselected[0] = False
-    parents = np.zeros(nb_vertices, np.int)
-    while np.any(unselected):
-        this_adjacency = adjacency[np.logical_not(unselected)].T[unselected].T
-        u, v = np.lib.index_tricks.unravel_index(np.argmin(this_adjacency), 
-                                          this_adjacency.shape)
-        u = indices[np.logical_not(unselected)][u]
-        v = indices[unselected][v]
-        parents[v] = u
-        unselected[v] = False
-        edge_list.append((u, v))
-
-    return np.array(edge_list), parents
-
-
-def get_generation(parents):
-    """ Given parent indices, return ranks in generation. 
-    """
-    ranks = np.zeros_like(parents)
-    rank = 0
-    descendants = set(range(1, len(parents)))
-    # The great ancestor is the first one
-    this_generation = [0, ]
-    while descendants:
-        next_generation = list()
-        for vertex in descendants.copy():
-            if parents[vertex] in this_generation:
-                ranks[vertex] = rank
-                next_generation.append(vertex)
-                descendants.discard(vertex)
-        rank += 1
-        this_generation = next_generation
-    return ranks
-
-################################################################################
-if __name__ == '__main__':
-    mlab.figure(1, size=(720, 450), bgcolor=(1, 1, 1))
-    mlab.clf()
-
-    # Create a flat set of random points
-    N = 800
-    np.random.seed(10)
-    xyz = np.random.random((3, N))
-    xyz[-1, :] *= 0.1
-
-    # Compute and display their Delaunay tessellation
-    delaunay_edges = compute_delaunay_edges(xyz, visualize=True)
-
-    # Compute a minimum spanning tree
-    x, y, z = xyz
-    distances =   (x - x[:, np.newaxis])**2 \
-                + (y - y[:, np.newaxis])**2 \
-                + (z - z[:, np.newaxis])**2
-    edges, parents = minimum_spanning_tree(distances)
-    # Compute the rank (generation) of each point in the tree structure.
-    # We use it to give color to the tree.
-    ranks = get_generation(parents)
-    
-    # Display the points
-    pts = mlab.points3d(x, y, z, ranks, scale_factor=0.01, scale_mode='none',
-                                          colormap='jet',
-                                          name='Minimum Spanning Tree')
-    # Add the connections and display them with tubes.
-    pts.mlab_source.dataset.lines = edges
-    mlab.pipeline.surface(mlab.pipeline.tube(pts, tube_radius=0.004),
-                                                  colormap='jet')
-
-    mlab.view(180, -30, 0.7) 
-    mlab.show()

File overblown_version_for_reference/examples/perfpy/README.txt

-This directory contains the sources for the Performance Python article
-here:
-
- http://www.scipy.org/PerformancePython
-
-Essentially, a simple Gauss-Seidel/Gauss-Jordan iterative scheme to
-solve Laplace's equation is implemented in a variety of ways.  These
-are compared for sheer speed.  Presently the script compares pure
-Python, pure Python + Psyco, Numeric, weave.blitz, weave.inline,
-fortran (via f2py) and Pyrex.
-
-
-Files:
-^^^^^^
-
- laplace.py -- A script to compare the various options.
-
- setup.py --  Builds the Fortran and Pyrex modules.
-
- src/ -- Contains the Fortran and Pyrex sources.
-
- src/flaplace.f -- Fortran source that is wrapped with f2py.
- 
- src/flaplace90_arrays.f90 -- Fortran90 version using array features. Wrapped with f2py.
-
- src/flaplace95_forall.f95 -- Fortran95 version using forall construct. Wrapped with f2py.
-
- src/pyx_lap.pyx -- Pyrex version.
-
- src/pyx_lap1.pyx -- Alternative Pyrex version contributed by Francesc
-                     Alted that works with Numeric and Numarray and
-                     presents a different way to access the array
-                     data.  This is not compiled by default but is
-                     there for reference.
-
- src/laplace.cxx -- Pure C++ version - just for kicks.  To compile it
-                    do something like this:
-                       g++ -O3 laplace.cxx -o lap
-
-
-Requirements:
-^^^^^^^^^^^^^
-
- Python, NumPy, SciPy (for weave), Pyrex and optionally Psyco.
-
-Usage:
-^^^^^^
-
- First run this:
-
-  python setup.py build_ext --inplace
-
- Then simply run this:
-
-  python laplace.py
-
- The C++ example in src/laplace.cxx can be run like so:
-
-  $ ./lap 
-  Enter nx n_iter eps --> 500 100 1e-16 [Return]
-  [...]
-
-
-Prabhu Ramachandran <prabhu_r at users dot sf dot net>
-Fortran90 and 95 versions contributed by Ramon Crehuet 
-<ramon.crehuet at iqac dot csic dot es>

File overblown_version_for_reference/examples/perfpy/laplace.log

-Doing 100 iterations on a 1000x1000 grid
-numeric took 6.11 seconds
-blitz took 4.73 seconds
-inline took 4.03 seconds
-fastinline took 3.55 seconds
-fortran77 took 3.55 seconds
-fortran90-arrays took 1.64 seconds
-fortran95-forall took 1.66 seconds
-slow (1 iteration) took 9.18 seconds
-100 iterations should take about 918.000000 seconds
-You don't have Psyco installed!

File overblown_version_for_reference/examples/perfpy/laplace.py

-#!/usr/bin/env python
-
-"""
-This script compares different ways of implementing an iterative
-procedure to solve Laplace's equation.  These provide a general
-guideline to using Python for high-performance computing and also
-provide a simple means to compare the computational time taken by the
-different approaches.  The script compares functions implemented in
-pure Python, Numeric, weave.blitz, weave.inline, fortran (via f2py)
-and Pyrex.  The function main() additionally accelerates the pure
-Python version using Psyco and provides some numbers on how well that
-works.  To compare all the options you need to have Numeric, weave,
-f2py, Pyrex and Psyco installed.  If Psyco is not installed the script
-will print a warning but will perform all other tests.
-
-The fortran and pyrex modules are compiled using the setup.py script
-that is provided with this file.  You can build them like so:
-
-  python setup.py build_ext --inplace
-
-
-Author: Prabhu Ramachandran <prabhu_r at users dot sf dot net>
-License: BSD
-Last modified: Sep. 18, 2004
-"""
-
-import numpy
-from scipy import weave
-from scipy.weave import converters
-import sys, time
-
-msg = """**************************************************
-Please build the fortran and Pyrex modules like so:
-
-  python setup.py build_ext --inplace
-
-You will require f2py and Pyrex.
-**************************************************
-"""
-build = 0
-try:
-    import flaplace
-    import flaplace90_arrays
-    import flaplace95_forall
-except ImportError:
-    build = 1
-try:
-    import pyx_lap
-except ImportError:
-    build = 1
-if build:
-    print msg
-
-
-class Grid:
-    
-    """A simple grid class that stores the details and solution of the
-    computational grid."""
-    
-    def __init__(self, nx=10, ny=10, xmin=0.0, xmax=1.0,
-                 ymin=0.0, ymax=1.0):
-        self.xmin, self.xmax, self.ymin, self.ymax = xmin, xmax, ymin, ymax
-        self.dx = float(xmax-xmin)/(nx-1)
-        self.dy = float(ymax-ymin)/(ny-1)
-        self.u = numpy.zeros((nx, ny), 'd')
-        # used to compute the change in solution in some of the methods.
-        self.old_u = self.u.copy()        
-
-    def setBC(self, l, r, b, t):        
-        """Sets the boundary condition given the left, right, bottom
-        and top values (or arrays)"""        
-        self.u[0, :] = l
-        self.u[-1, :] = r
-        self.u[:, 0] = b
-        self.u[:,-1] = t
-        self.old_u = self.u.copy()
-
-    def setBCFunc(self, func):
-        """Sets the BC given a function of two variables."""
-        xmin, ymin = self.xmin, self.ymin
-        xmax, ymax = self.xmax, self.ymax
-        x = numpy.arange(xmin, xmax + self.dx*0.5, self.dx)
-        y = numpy.arange(ymin, ymax + self.dy*0.5, self.dy)
-        self.u[0 ,:] = func(xmin,y)
-        self.u[-1,:] = func(xmax,y)
-        self.u[:, 0] = func(x,ymin)
-        self.u[:,-1] = func(x,ymax)
-
-    def computeError(self):        
-        """Computes absolute error using an L2 norm for the solution.
-        This requires that self.u and self.old_u must be appropriately
-        setup."""        
-        v = (self.u - self.old_u).flat
-        return numpy.sqrt(numpy.dot(v,v))
-    
-
-class LaplaceSolver:
-    
-    """A simple Laplacian solver that can use different schemes to
-    solve the problem."""
-    
-    def __init__(self, grid, stepper='numeric'):
-        self.grid = grid
-        self.setTimeStepper(stepper)
-
-    def slowTimeStep(self, dt=0.0):
-        """Takes a time step using straight forward Python loops."""
-        g = self.grid
-        nx, ny = g.u.shape        
-        dx2, dy2 = g.dx**2, g.dy**2
-        dnr_inv = 0.5/(dx2 + dy2)
-        u = g.u
-
-        err = 0.0
-        for i in range(1, nx-1):
-            for j in range(1, ny-1):
-                tmp = u[i,j]
-                u[i,j] = ((u[i-1, j] + u[i+1, j])*dy2 +
-                          (u[i, j-1] + u[i, j+1])*dx2)*dnr_inv
-                diff = u[i,j] - tmp
-                err += diff*diff
-
-        return numpy.sqrt(err)
-        
-    def numericTimeStep(self, dt=0.0):
-        """Takes a time step using a numeric expressions."""
-        g = self.grid
-        dx2, dy2 = g.dx**2, g.dy**2
-        dnr_inv = 0.5/(dx2 + dy2)
-        u = g.u
-        g.old_u = u.copy()
-
-        # The actual iteration
-        u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 + 
-                         (u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv
-        
-        return g.computeError()
-
-    def blitzTimeStep(self, dt=0.0):        
-        """Takes a time step using a numeric expression that has been
-        blitzed using weave."""        
-        g = self.grid
-        dx2, dy2 = g.dx**2, g.dy**2
-        dnr_inv = 0.5/(dx2 + dy2)
-        u = g.u
-        g.old_u = u.copy()
-
-        # The actual iteration
-        expr = "u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 + "\
-               "(u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv"
-        weave.blitz(expr, check_size=0)
-
-        return g.computeError()
-
-    def inlineTimeStep(self, dt=0.0):        
-        """Takes a time step using inlined C code -- this version uses
-        blitz arrays."""        
-        g = self.grid
-        nx, ny = g.u.shape
-        dx2, dy2 = g.dx**2, g.dy**2
-        dnr_inv = 0.5/(dx2 + dy2)
-        u = g.u
-        
-        code = """
-               #line 120 "laplace.py"
-               double tmp, err, diff;
-               err = 0.0;
-               for (int i=1; i<nx-1; ++i) {
-                   for (int j=1; j<ny-1; ++j) {
-                       tmp = u(i,j);
-                       u(i,j) = ((u(i-1,j) + u(i+1,j))*dy2 +
-                                 (u(i,j-1) + u(i,j+1))*dx2)*dnr_inv;
-                       diff = u(i,j) - tmp;
-                       err += diff*diff;
-                   }
-               }
-               return_val = sqrt(err);
-               """
-        # compiler keyword only needed on windows with MSVC installed
-        err = weave.inline(code,
-                           ['u', 'dx2', 'dy2', 'dnr_inv', 'nx','ny'],
-                           type_converters = converters.blitz,
-                           compiler = 'gcc')
-        return err
-
-    def fastInlineTimeStep(self, dt=0.0):
-        """Takes a time step using inlined C code -- this version is
-        faster, dirtier and manipulates the numeric array in C.  This
-        code was contributed by Eric Jones.  """
-        g = self.grid
-        nx, ny = g.u.shape
-        dx2, dy2 = g.dx**2, g.dy**2
-        dnr_inv = 0.5/(dx2 + dy2)
-        u = g.u
-        
-        code = """
-               #line 151 "laplace.py"
-               double tmp, err, diff;
-               double *uc, *uu, *ud, *ul, *ur;
-               err = 0.0;
-               for (int i=1; i<nx-1; ++i) {
-                   uc = u+i*ny+1;
-                   ur = u+i*ny+2;     ul = u+i*ny;
-                   ud = u+(i+1)*ny+1; uu = u+(i-1)*ny+1;
-                   for (int j=1; j<ny-1; ++j) {
-                       tmp = *uc;
-                       *uc = ((*ul + *ur)*dy2 +
-                              (*uu + *ud)*dx2)*dnr_inv;
-                       diff = *uc - tmp;
-                       err += diff*diff;
-                       uc++;ur++;ul++;ud++;uu++;
-                   }
-               }
-               return_val = sqrt(err);
-               """
-        # compiler keyword only needed on windows with MSVC installed
-        err = weave.inline(code,
-                           ['u', 'dx2', 'dy2', 'dnr_inv', 'nx','ny'],
-                           compiler='gcc')
-        return err
-
-    def fortran77TimeStep(self, dt=0.0):
-        """Takes a time step using a simple fortran module that
-        implements the loop in fortran77.  Use f2py to compile
-        flaplace.f like so: f2py -c flaplace.f -m flaplace.  You need
-        the latest f2py version for this to work.  This Fortran
-        example was contributed by Pearu Peterson. """
-        g = self.grid
-        g.u, err = flaplace.timestep(g.u, g.dx, g.dy)
-        return err
-
-    def fortran90TimeStep(self, dt=0.0):
-        """Takes a time step using a simple fortran module that
-        implements the loop in fortran90 arrays.  Use
-        f2py to compile flaplace_arrays.f90 like so: f2py -c
-        flaplace_arrays.f90 -m flaplace90_arrays.  You need
-        the latest f2py version for this to work.  This Fortran
-        example was contributed by Ramon Crehuet. """
-        g = self.grid
-        g.u, err = flaplace90_arrays.timestep(g.u, g.dx, g.dy)
-        return err
-
-    def fortran95TimeStep(self, dt=0.0):
-        """Takes a time step using a simple fortran module that
-        implements the loop in fortran95 forall construct.  Use
-        f2py to compile flaplace_forall.f95 like so: f2py -c
-        flaplace_forall.f95 -m flaplace95_forall.  You need
-        the latest f2py version for this to work.  This Fortran
-        example was contributed by Ramon Crehuet. """
-        g = self.grid
-        g.u, err = flaplace95_forall.timestep(g.u, g.dx, g.dy)
-        return err
-
-    def pyrexTimeStep(self, dt=0.0):
-        """Takes a time step using a function written in Pyrex.  Use
-        the given setup.py to build the extension using the command
-        python setup.py build_ext --inplace.  You will need Pyrex
-        installed to run this."""        
-        g = self.grid
-        err = pyx_lap.pyrexTimeStep(g.u, g.dx, g.dy)
-        return err
-
-    def setTimeStepper(self, stepper='numeric'):        
-        """Sets the time step scheme to be used while solving given a
-        string which should be one of ['slow', 'numeric', 'blitz',
-        'inline', 'fastinline', 'fortran77', 'fortran90-arrays',
-        'fortran95-forall', 'pyrex']."""
-        if stepper == 'slow':
-            self.timeStep = self.slowTimeStep
-        elif stepper == 'numeric':
-            self.timeStep = self.numericTimeStep
-        elif stepper == 'blitz':
-            self.timeStep = self.blitzTimeStep
-        elif stepper == 'inline':
-            self.timeStep = self.inlineTimeStep
-        elif stepper.lower() == 'fastinline':
-            self.timeStep = self.fastInlineTimeStep
-        elif stepper == 'fortran77':
-            self.timeStep = self.fortran77TimeStep
-        elif stepper == 'fortran90-arrays':
-            self.timeStep = self.fortran90TimeStep
-        elif stepper == 'fortran95-forall':
-            self.timeStep = self.fortran95TimeStep        
-        elif stepper == 'pyrex':
-            self.timeStep = self.pyrexTimeStep
-        else:
-            self.timeStep = self.numericTimeStep            
-                
-    def solve(self, n_iter=0, eps=1.0e-16):        
-        """Solves the equation given an error precision -- eps.  If
-        n_iter=0 the solving is stopped only on the eps condition.  If
-        n_iter is finite then solution stops in that many iterations
-        or when the error is less than eps, whichever is earlier.
-        Returns the error if the loop breaks on the n_iter condition
-        and returns the iterations if the loop breaks on the error
-        condition."""        
-        err = self.timeStep()
-        count = 1
-
-        while err > eps:
-            if n_iter and count >= n_iter:
-                return err
-            err = self.timeStep()
-            count = count + 1
-
-        return count
-
-
-def BC(x, y):    
-    """Used to set the boundary condition for the grid of points.
-    Change this as you feel fit."""    
-    return (x**2 - y**2)
-
-
-def test(nmin=5, nmax=30, dn=5, eps=1.0e-16, n_iter=0, stepper='numeric'):
-    iters = []
-    n_grd = numpy.arange(nmin, nmax, dn)
-    times = []
-    for i in n_grd:
-        g = Grid(nx=i, ny=i)
-        g.setBCFunc(BC)
-        s = LaplaceSolver(g, stepper)
-        t1 = time.clock()
-        iters.append(s.solve(n_iter=n_iter, eps=eps))
-        dt = time.clock() - t1
-        times.append(dt)
-        print "Solution for nx = ny = %d, took %f seconds"%(i, dt)
-    return (n_grd**2, iters, times)
-
-
-def time_test(nx=500, ny=500, eps=1.0e-16, n_iter=100, stepper='numeric'):
-    g = Grid(nx, ny)
-    g.setBCFunc(BC)
-    s = LaplaceSolver(g, stepper)
-    t = time.clock()
-    s.solve(n_iter=n_iter, eps=eps)
-    return time.clock() - t
-    
-
-def main(n=1000, n_iter=100):
-    print "Doing %d iterations on a %dx%d grid"%(n_iter, n, n)
-    methods = ['numeric',  'blitz', 'inline', 'fastinline', 'pyrex',
-               'fortran77', 'fortran90-arrays', 'fortran95-forall']
-    for i in methods:
-        print i,
-        sys.stdout.flush()
-        print "took", time_test(n, n, stepper=i, n_iter=n_iter), "seconds"
-
-    print "slow (1 iteration)",
-    sys.stdout.flush()
-    s = time_test(n, n, stepper='slow', n_iter=1)
-    print "took", s, "seconds"
-    print "%d iterations should take about %f seconds"%(n_iter, s*n_iter)
-
-    try:
-        import psyco
-    except ImportError:
-        print "You don't have Psyco installed!"
-    else:
-        psyco.bind(LaplaceSolver)
-        psyco.bind(Grid)
-        print "slow with Psyco (1 iteration)",
-        sys.stdout.flush()
-        s = time_test(n, n, stepper='slow', n_iter=1)
-        print "took", s, "seconds"
-        print "%d iterations should take about %f seconds"%\
-              (n_iter, s*n_iter)
-
-
-if __name__ == "__main__":
-    main()

File overblown_version_for_reference/examples/perfpy/setup.log

-running build
-running config_cc
-unifing config_cc, config, build_clib, build_ext, build commands --compiler options
-running config_fc
-unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
-running build_src
-building extension "flaplace" sources
-f2py options: []
-f2py:> /tmp/tmpKurbwz/src.linux-x86_64-2.6/flaplacemodule.c
-creating /tmp/tmpKurbwz
-creating /tmp/tmpKurbwz/src.linux-x86_64-2.6
-Reading fortran codes...
-	Reading file 'src/flaplace.f' (format:fix,strict)
-Post-processing...
-	Block: flaplace
-			Block: timestep
-Post-processing (stage 2)...
-Building modules...
-	Building module "flaplace"...
-		Constructing wrapper function "timestep"...
-		  u,error = timestep(u,dx,dy)
-	Wrote C/API module "flaplace" to file "/tmp/tmpKurbwz/src.linux-x86_64-2.6/flaplacemodule.c"
-  adding '/tmp/tmpKurbwz/src.linux-x86_64-2.6/fortranobject.c' to sources.
-  adding '/tmp/tmpKurbwz/src.linux-x86_64-2.6' to include_dirs.
-copying /usr/lib64/python2.6/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmpKurbwz/src.linux-x86_64-2.6
-copying /usr/lib64/python2.6/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmpKurbwz/src.linux-x86_64-2.6
-running build_ext
-customize UnixCCompiler
-customize UnixCCompiler using build_ext
-customize GnuFCompiler
-Could not locate executable g77
-Could not locate executable f77
-customize IntelFCompiler
-Could not locate executable ifort
-Could not locate executable ifc
-customize LaheyFCompiler
-Could not locate executable lf95
-customize PGroupFCompiler
-Could not locate executable pgf90
-Could not locate executable pgf77
-customize AbsoftFCompiler
-Could not locate executable f90
-customize NAGFCompiler
-Could not locate executable f95
-customize VastFCompiler
-customize GnuFCompiler
-customize CompaqFCompiler
-Could not locate executable fort
-customize IntelItaniumFCompiler
-Could not locate executable efort
-Could not locate executable efc
-customize IntelEM64TFCompiler
-customize Gnu95FCompiler
-Found executable /usr/bin/gfortran
-customize Gnu95FCompiler
-customize Gnu95FCompiler using build_ext
-building 'flaplace' extension
-compiling C sources
-C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -fwrapv -fPIC
-
-creating /tmp/tmpKurbwz/tmp
-creating /tmp/tmpKurbwz/tmp/tmpKurbwz
-creating /tmp/tmpKurbwz/tmp/tmpKurbwz/src.linux-x86_64-2.6
-compile options: '-I/tmp/tmpKurbwz/src.linux-x86_64-2.6 -I/usr/lib64/python2.6/site-packages/numpy/core/include -I/usr/include/python2.6 -c'
-gcc: /tmp/tmpKurbwz/src.linux-x86_64-2.6/fortranobject.c
-gcc: /tmp/tmpKurbwz/src.linux-x86_64-2.6/flaplacemodule.c
-compiling Fortran sources
-Fortran f77 compiler: /usr/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops
-Fortran f90 compiler: /usr/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops
-Fortran fix compiler: /usr/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops
-creating /tmp/tmpKurbwz/src
-compile options: '-I/tmp/tmpKurbwz/src.linux-x86_64-2.6 -I/usr/lib64/python2.6/site-packages/numpy/core/include -I/usr/include/python2.6 -c'
-gfortran:f77: src/flaplace.f
-/usr/bin/gfortran -Wall -Wall -shared /tmp/tmpKurbwz/tmp/tmpKurbwz/src.linux-x86_64-2.6/flaplacemodule.o /tmp/tmpKurbwz/tmp/tmpKurbwz/src.linux-x86_64-2.6/fortranobject.o /tmp/tmpKurbwz/src/flaplace.o -L/usr/lib64 -lpython2.6 -lgfortran -o ./flaplace.so
-running scons
-Removing build directory /tmp/tmpKurbwz
[... the build log then repeats for the "flaplace90_arrays" extension: the same f2py wrapper construction (u,error = timestep(u,dx,dy)), the same compiler probes and gcc/gfortran invocations, this time in build directory /tmp/tmpdxKfB1, with src/flaplace90_arrays.f90 compiled as free-form f90 ...]
[... and once more, verbatim, for the "flaplace95_forall" extension, in build directory /tmp/tmpLcFpVt, with src/flaplace95_forall.f95 compiled as free-form f90 ...]
-running build_ext
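
For reference, the wrapper constructed in the log above exposes the signature
``u,error = timestep(u,dx,dy)``, so the built module would be driven from
Python roughly as follows. This is a minimal sketch: the grid size, boundary
values, iteration cap and tolerance are illustrative, not taken from the
deleted sources::

    import numpy as np
    import flaplace  # the extension module built by the f2py run logged above

    # Illustrative 50x50 grid with a hot top edge; dx = dy = 0.1.
    # Fortran ordering lets f2py pass the array to the wrapper without copying.
    u = np.zeros((50, 50), order='F')
    u[0, :] = 1.0
    dx = dy = 0.1

    # Iterate the wrapped timestep until the reported error is small.
    for _ in range(1000):
        u, error = flaplace.timestep(u, dx, dy)
        if error < 1e-6:
            break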

File overblown_version_for_reference/examples/perfpy/setup.py

-# Build script for the perfpy Laplace benchmarks: wrap the three Fortran
-# variants with f2py, then build the Pyrex variant through distutils.
-from distutils.core import setup, Extension
-from Pyrex.Distutils import build_ext
-import os
-import numpy
-
-# Shell out to f2py to compile each Fortran source into an extension module.
-os.system('f2py -c src/flaplace.f -m flaplace')
-os.system('f2py -c src/flaplace90_arrays.f90 -m flaplace90_arrays')
-os.system('f2py -c src/flaplace95_forall.f95 -m flaplace95_forall')
-
-# Build the Pyrex extension; numpy's headers are needed for array access.
-setup(
-    name='Laplace',
-    ext_modules=[Extension("pyx_lap",
-                           ["src/pyx_lap.pyx"],
-                           include_dirs=[numpy.get_include()])],
-    cmdclass={'build_ext': build_ext},
-    )
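
The ``os.system`` calls above work, but they ignore f2py failures and bypass
distutils entirely. numpy.distutils (bundled with the numpy of this era) can
build the same Fortran modules through ``setup()`` itself, since it runs f2py
automatically for Fortran sources. A sketch, assuming nothing depends on the
exact f2py command line; the Pyrex module would still need the separate
``distutils``/``Pyrex`` pass shown above::

    # Alternative build: numpy.distutils invokes f2py for Fortran sources.
    from numpy.distutils.core import setup, Extension

    setup(
        name='Laplace',
        ext_modules=[
            Extension('flaplace', sources=['src/flaplace.f']),
            Extension('flaplace90_arrays', sources=['src/flaplace90_arrays.f90']),
            Extension('flaplace95_forall', sources=['src/flaplace95_forall.f95']),
        ],
    )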

File overblown_version_for_reference/examples/perfpy/src/flaplace.f

-c File flaplace.f
-      subroutine timestep(u,n,m,dx,dy,error)