Issue #68

Parallel mappings of data-generation

Matthew Turk
repo owner created an issue

== Imported Ticket ==

* Summary: Parallel mappings of data-generation
* Component: yt
* Milestone: 1.5
* Reporter: mturk
* Owner: mturk
* Resolution: fixed
* Status: closed
* Created: 1197342245000000
* Description: Generating fields on large data objects can be difficult and time-consuming. We should have a mapping in place such that, from an interactive IPython session, we can farm out the generation of a field to some number of processors and be handed back only that field. This will involve some domain decomposition, which I believe we can do very simply by generating an extracted region of equal size for each processor. (The extracted-region indices are roughly sorted by grid, so we would end up with a fairly even distribution of task sizes across processors.)

IPython will almost certainly be our target platform for this, but NetWorkSpaces should also be considered.

This should be done in conjunction with Ticket #31, which will be done slightly differently. Additionally, it will require a 'dispatch' class, and more separation between data-identification and data-processing.
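As a rough illustration of the proposed workflow, the sketch below splits a flat array of cell indices into equal chunks, generates a derived field on each chunk in a separate worker process, and hands back only the reassembled field. This is only a sketch under stated assumptions: `generate_field`, `decompose`, and `parallel_field` are hypothetical names, and `concurrent.futures` stands in here for IPython engines or NetWorkSpaces; none of this is the yt API described in the ticket.

```python
# Hypothetical sketch of parallel field generation via simple domain
# decomposition; not the yt implementation referenced in this ticket.
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def generate_field(indices, density, temperature):
    """Compute a derived field on one chunk of cells (illustrative)."""
    return density[indices] * temperature[indices]


def decompose(n_cells, n_procs):
    """Split cell indices into roughly equal, index-ordered chunks."""
    return np.array_split(np.arange(n_cells), n_procs)


def parallel_field(density, temperature, n_procs=4):
    """Farm out field generation and return only the assembled field."""
    chunks = decompose(density.size, n_procs)
    with ProcessPoolExecutor(max_workers=n_procs) as pool:
        pieces = pool.map(generate_field, chunks,
                          [density] * n_procs, [temperature] * n_procs)
    return np.concatenate(list(pieces))


if __name__ == "__main__":
    rho = np.random.random(1_000_000)
    T = np.random.random(1_000_000)
    print(parallel_field(rho, T).shape)
```

Splitting in index order mirrors the rough grid ordering mentioned above, so each worker touches a mostly contiguous set of grids and the per-processor task sizes stay comparable.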

=== Update to Ticket === * Author: mturk * Changetime: 1216142392000000 * Field: status * Oldvalue: new * Newvalue: assigned

=== Update to Ticket === * Author: mturk * Changetime: 1216142392000000 * Field: milestone * Oldvalue: 2.0 * Newvalue: 1.5

=== Update to Ticket === * Author: mturk * Changetime: 1216142392000000 * Field: comment * Oldvalue: 1 * Newvalue:

=== Update to Ticket === * Author: mturk * Changetime: 1221660482000000 * Field: status * Oldvalue: assigned * Newvalue: closed

=== Update to Ticket === * Author: mturk * Changetime: 1221660482000000 * Field: resolution * Oldvalue: * Newvalue: fixed

=== Update to Ticket === * Author: mturk * Changetime: 1221660482000000 * Field: comment * Oldvalue: 2 * Newvalue: Done in r782, with support for grid-by-grid and 2D domain decomposition. It's included in the ParallelAnalysisInterface mixin.
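For reference, a 2D domain decomposition of the kind mentioned in the closing comment simply cuts the (x, y) extent of the domain into a grid of rectangular patches, one per processor. The sketch below is only illustrative and does not reproduce the actual ParallelAnalysisInterface code from r782; `decompose_2d` and its arguments are hypothetical.

```python
# Hypothetical sketch of a 2D domain decomposition: cut the (x, y) extent
# into an nx-by-ny grid of patches, one per processor.
import numpy as np


def decompose_2d(left_edge, right_edge, n_procs):
    """Return one (sub_left, sub_right) bounding box per processor."""
    # Factor n_procs into the most square nx * ny layout available.
    nx = int(np.sqrt(n_procs))
    while n_procs % nx:
        nx -= 1
    ny = n_procs // nx
    xs = np.linspace(left_edge[0], right_edge[0], nx + 1)
    ys = np.linspace(left_edge[1], right_edge[1], ny + 1)
    boxes = []
    for i in range(nx):
        for j in range(ny):
            boxes.append(((xs[i], ys[j]), (xs[i + 1], ys[j + 1])))
    return boxes


if __name__ == "__main__":
    for box in decompose_2d((0.0, 0.0), (1.0, 1.0), 6):
        print(box)
```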
