TimeSeriesHDF5() fails for more than 8185 steps

Issue #181 resolved
Nico Schlömer created an issue

When writing out data with TimeSeriesHDF5(), I inexplicably get error messages from HDF5, and the entire file ends up corrupted. On my machine, the following code

from dolfin import *

mesh = UnitSquareMesh(10, 10, 'crossed')
V = FunctionSpace(mesh, 'CG', 1)
f = Expression('sin(pi*x[0]) * sin(pi*x[1])')
u = project(f, V)

u_file = TimeSeriesHDF5('/tmp/lossless_u')

t = 0.0
dt = 1.0e-2
u_file.store(mesh, t)            # store the mesh once
for k in range(8185):            # the HDF5 error below appears at this step count
    u_file.store(u.vector(), t)  # store the solution vector at each time step
    t += dt

gives

HDF5-DIAG: Error detected in HDF5 (1.8.11) MPI-process 0:
  #000: ../../../src/H5Dio.c line 234 in H5Dwrite(): can't prepare for writing data
    major: Dataset
    minor: Write failed
  #001: ../../../src/H5Dio.c line 266 in H5D__pre_write(): not a dataset
    major: Invalid arguments to routine
    minor: Inappropriate type
[...]

Curiously, I found the magic number 8185 to be the limit on two different machines; it is also independent of the mesh size.

A related issue is that HDF5 errors are merely printed to stderr; no actual exception is raised.
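Until that is fixed, one hypothetical user-side workaround (not part of the DOLFIN API; check_hdf5_errors below is an illustration only) is to capture file descriptor 2 around the call and promote any HDF5 diagnostics to a Python exception, since HDF5 writes its error stack to the C-level stderr where a plain try/except cannot see it:

import os
import sys
import tempfile

def check_hdf5_errors(fn, *args, **kwargs):
    """Call fn and raise RuntimeError if HDF5 diagnostics appear on stderr."""
    captured = tempfile.TemporaryFile()
    saved = os.dup(2)                  # keep the real stderr
    os.dup2(captured.fileno(), 2)      # route fd 2 into a temp file
    try:
        result = fn(*args, **kwargs)
    finally:
        os.dup2(saved, 2)              # restore stderr
        os.close(saved)
    captured.seek(0)
    text = captured.read().decode(errors='replace')
    captured.close()
    sys.stderr.write(text)             # replay whatever was printed
    if 'HDF5-DIAG' in text:
        raise RuntimeError('HDF5 reported an error; see diagnostics above')
    return result

# e.g. check_hdf5_errors(u_file.store, u.vector(), t)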

Comments (10)

  1. Nico Schlömer reporter

    So it is just the vector of time steps that creates the problem here? It certainly smells like it: presumably the time values are kept as a single attribute, and 8185 doubles (65 480 bytes) sit just under HDF5's 64 KiB cap on attributes stored compactly in the object header. The link you mention describes how to get past this limit by using Dense Attribute Storage (see the first sketch after this thread). Any chance of plugging this into DOLFIN? After all, 8185 isn't an absurdly large number of steps.

  2. Chris Richardson

    It probably requires an extension to the low-level HDF5 interface to allow datasets that can expand (an "unlimited dimension"; see the second sketch below). Another approach would be to tag each individual dataset of the TimeSeries with a time value.
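Comment 1's diagnosis is easy to reproduce outside DOLFIN. A minimal sketch, assuming (as the thread suggests) that the vector of time steps is kept as a single HDF5 attribute, and using h5py rather than DOLFIN's C++ HDF5 layer: with the default file format, an attribute larger than 64 KiB cannot be stored, while opening the file with libver='latest' switches to the 1.8 file format, whose dense attribute storage lifts that cap.

import numpy as np
import h5py

times = np.arange(8200, dtype=np.float64)     # > 8185 steps, > 64 KiB of data

with h5py.File('/tmp/compact.h5', 'w') as f:  # default: old file format
    try:
        f.attrs['times'] = times              # compact storage, 64 KiB cap
    except (RuntimeError, OSError) as e:
        print('compact attribute storage failed:', e)

with h5py.File('/tmp/dense.h5', 'w', libver='latest') as f:
    f.attrs['times'] = times                  # dense attribute storage: fine
    print('stored', f.attrs['times'].shape[0], 'time values')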

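And a minimal sketch of the expandable-dataset approach from comment 2, again in h5py rather than DOLFIN's low-level HDF5 wrapper: a dataset created with an unlimited maxshape can be resized one step at a time, so the number of stored time values is no longer bounded by the object-header size.

import h5py

with h5py.File('/tmp/extendable.h5', 'w') as f:
    # Unlimited first dimension; chunking is required for resizable datasets.
    dset = f.create_dataset('times', shape=(0,), maxshape=(None,),
                            chunks=(1024,), dtype='f8')
    t, dt = 0.0, 1.0e-2
    for k in range(10000):        # well past 8185
        dset.resize((k + 1,))     # grow the dataset by one entry
        dset[k] = t
        t += dt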