Support for 3d arrays (or ndarrays if possible)
It would be great if Blaze could support image analysis in multiple dimensions, like blitz++ does. It's already very well suited for 2D image analysis, but nowadays 3D is essential. In the long run, 4D would be great (3D multi-channel images or 3D time series), but this could possibly be supported via a vector of 3D arrays.
I've already seen https://bitbucket.org/blazelib/blaze/issues/34/supportforndarrays, but it's not sufficient for my use case: I would like to implement filters (among other things) on ND arrays.
Comments (13)


reporter I was a long-term blitz++ user, so I'm used to having some basic algebra on N-dimensional arrays and vectors. In principle, 3D is sufficient for my image analysis tasks. This might generally be the case, because higher dimensions usually represent time, color, viewing angles or the like, and often need to be handled separately anyway. Of course, in the long run, ND would be great :)
For my use case, I only require basic linear algebra (addition, subtraction, elementwise multiplication) involving two or more dense arrays, or a dense array and one or more constants.
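To make the requested semantics concrete, here is a minimal sketch of the operations the reporter lists, on a hypothetical `Array3D` class. This is not Blaze's API; the class name, row-major layout, and operator set are assumptions for illustration only.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical minimal dense 3D array -- NOT a Blaze class, just a sketch of
// the requested semantics: add, subtract, elementwise multiply, and scaling.
class Array3D {
 public:
   Array3D( std::size_t nx, std::size_t ny, std::size_t nz, double init = 0.0 )
      : nx_(nx), ny_(ny), nz_(nz), data_( nx*ny*nz, init ) {}

   double& operator()( std::size_t i, std::size_t j, std::size_t k ) {
      return data_[ (i*ny_ + j)*nz_ + k ];  // row-major linearization
   }
   double operator()( std::size_t i, std::size_t j, std::size_t k ) const {
      return data_[ (i*ny_ + j)*nz_ + k ];
   }

   // Elementwise operations; all arrays are assumed to have equal shape.
   friend Array3D operator+( Array3D a, const Array3D& b ) {
      for( std::size_t n=0; n<a.data_.size(); ++n ) a.data_[n] += b.data_[n];
      return a;
   }
   friend Array3D operator-( Array3D a, const Array3D& b ) {
      for( std::size_t n=0; n<a.data_.size(); ++n ) a.data_[n] -= b.data_[n];
      return a;
   }
   friend Array3D operator*( Array3D a, const Array3D& b ) {  // Schur product
      for( std::size_t n=0; n<a.data_.size(); ++n ) a.data_[n] *= b.data_[n];
      return a;
   }
   friend Array3D operator*( Array3D a, double s ) {          // scaling
      for( double& v : a.data_ ) v *= s;
      return a;
   }

 private:
   std::size_t nx_, ny_, nz_;
   std::vector<double> data_;
};
```

With expression templates, as Blaze does for vectors and matrices, such chained operations could be evaluated in a single pass instead of producing temporaries as this naive sketch does.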

Hi, to state it explicitly so that it is written down: nD arrays would also be useful for multilinear algebra, e.g. tensor arithmetic.
Best regards, Fabien

The basic operations as in Eigen3's unsupported tensor module and Armadillo, for example contraction and so on, plus Einstein notation. I can help more if you need it.
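As an illustration of the contraction operation mentioned here, below is a plain-loop sketch of a single-index contraction, C(i,k) = sum_j A(i,j) * B(j,k), which in Einstein notation is simply A_ij B_jk. The flat row-major `std::vector` storage and the `contract` helper are assumptions of this sketch, not part of Blaze or Eigen.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Single-index tensor contraction over the shared index j:
// C(i,k) = sum_j A(i,j) * B(j,k), i.e. A_ij B_jk in Einstein notation.
// A is I x J, B is J x K, both stored flat in row-major order.
std::vector<double> contract( const std::vector<double>& A,
                              const std::vector<double>& B,
                              std::size_t I, std::size_t J, std::size_t K )
{
   std::vector<double> C( I*K, 0.0 );
   for( std::size_t i=0; i<I; ++i )
      for( std::size_t j=0; j<J; ++j )      // contracted (summed) index
         for( std::size_t k=0; k<K; ++k )
            C[i*K+k] += A[i*J+j] * B[j*K+k];
   return C;
}
```

Higher-order contractions (e.g. a mode-1 product of a 3D tensor with a matrix) follow the same pattern with one additional free index.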

Some proposals:
A presentation: "Towards a High-Performance Tensor Algebra Package for Accelerators".
An interface like this:
```cpp
// from mshadow: https://github.com/dmlc/mshadow/blob/master/mshadow/tensor.h
struct cpu {
   /*! \brief whether this device is CPU or not */
   static const bool kDevCPU = true;
   /*! \brief device flag number, identifies this device */
   static const int kDevMask = 1 << 0;
};
/*! \brief device name GPU */
struct gpu {
   /*! \brief whether this device is CPU or not */
   static const bool kDevCPU = false;
   /*! \brief device flag number, identifies this device */
   static const int kDevMask = 1 << 1;
};

template< size_t dimension >
struct Shape {
   /*! \brief dimension of current shape */
   static const size_t kDimension = dimension;
   /*! \brief dimension of current shape minus one */
   static const size_t kSubdim = dimension - 1;
   /*! \brief storing the dimension information */
   size_t shape_[kDimension];
   /*! \brief default constructor, do nothing */
   BLAZE_ALWAYS_INLINE Shape(void) {}
   /*! \brief copy constructor */
   BLAZE_ALWAYS_INLINE Shape(const Shape<kDimension>& s) {
      #pragma unroll
      for (size_t i = 0; i < kDimension; ++i) {
         this->shape_[i] = s.shape_[i];
      }
   }
   BLAZE_ALWAYS_INLINE size_t& operator[](size_t idx) { return shape_[idx]; }
};

template< typename TT       // Tensor type
        , size_t DIM        // Tensor dimension
        , typename Device >
struct Tensor {
   using TensorType = TT;
   Shape<DIM> shape;
   BLAZE_ALWAYS_INLINE TensorType& operator~() noexcept {
      return *static_cast<TensorType*>( this );
   }
   BLAZE_ALWAYS_INLINE const TensorType& operator~() const noexcept {
      return *static_cast<const TensorType*>( this );
   }
};

template< typename MT   // Type of the matrix
        , bool SO >     // Storage order
struct Matrix : Tensor<MT,2,cpu> {};
```
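The `operator~` in the proposal above is the CRTP (curiously recurring template parameter) idiom that Blaze already uses for its `Vector` and `Matrix` base classes: the base downcasts itself to the concrete type with no virtual dispatch. Here is a small self-contained sketch of how it is used; the class and function names are simplified stand-ins, not the actual proposed interface.

```cpp
#include <cassert>
#include <cstddef>

// CRTP base: the template parameter TT is the concrete tensor type, and
// operator~ downcasts to it at compile time (no virtual functions needed).
template< typename TT >
struct TensorBase {
   TT&       operator~()       noexcept { return *static_cast<TT*>(this); }
   const TT& operator~() const noexcept { return *static_cast<const TT*>(this); }
};

// A hypothetical concrete tensor deriving from the CRTP base.
struct DenseTensor3 : TensorBase<DenseTensor3> {
   std::size_t size() const { return 27UL; }  // e.g. a fixed 3x3x3 tensor
};

// A generic function that accepts any tensor via the base class and uses
// operator~ to reach the concrete interface -- resolved entirely at compile time.
template< typename TT >
std::size_t elements( const TensorBase<TT>& t ) {
   return (~t).size();
}
```

This is why the proposal mirrors Blaze's existing design: generic algorithms can be written once against the abstract base while still compiling down to direct calls on the concrete type.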

reporter Hi Klaus, Blaze is evolving nicely, great work. I'm just checking in again: is there any progress on the 3D (or nD) support? How difficult would it be to add 3D support, in your eyes? All the best, Mario

Hi Mario!
Unfortunately there is no progress yet, but the topic is pretty high on our priority list. The problem is not the difficulty of the task, but rather the amount of work associated with it (new data structures, new algorithms, new views, ...). I cannot give any ETA, but I can promise that we will address this issue within the next couple of releases.
Best regards,
Klaus!

reporter Great, thanks for your very nice work!

Hi Klaus, Love your amazing work. I have one thing to suggest.
One of my colleagues uses n-dim (n > 2) arrays a lot for his signal processing research. A thing to note, though, is that instead of generic n-dim arrays in the numpy/ArrayFire sense, he uses n-dim arrays as 'batches' in order to process N concurrent matrix multiplications and the like. I propose that something like batch<DynamicVector> and batch<DynamicMatrix> would signify obviously parallelizable types and operations. Generic n-dim arrays would obviously be a lot more useful, but something like the batch type would have clear semantics and, hopefully, require less work. What do you think?
Your work is much appreciated, Ray
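Ray's batch idea can be sketched as a loop over N independent matrix products: because no iteration depends on another, the outer loop is trivially parallelizable (one thread, SIMD lane, or GPU block per batch entry). The `Matrix` alias and `batchMultiply` function below are illustrative assumptions, not an actual Blaze interface.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of batch semantics: C[n] = A[n] * B[n] for n = 0..N-1.
// The outer loop carries no cross-iteration dependency, which is what makes
// a hypothetical batch<DynamicMatrix> type "obviously parallelizable".
using Matrix = std::vector<std::vector<double>>;

std::vector<Matrix> batchMultiply( const std::vector<Matrix>& As,
                                   const std::vector<Matrix>& Bs )
{
   std::vector<Matrix> Cs;
   for( std::size_t n=0; n<As.size(); ++n ) {   // independent per-batch work
      const Matrix& A = As[n];
      const Matrix& B = Bs[n];
      Matrix C( A.size(), std::vector<double>( B[0].size(), 0.0 ) );
      for( std::size_t i=0; i<A.size(); ++i )
         for( std::size_t k=0; k<B.size(); ++k )
            for( std::size_t j=0; j<B[0].size(); ++j )
               C[i][j] += A[i][k] * B[k][j];
      Cs.push_back( C );
   }
   return Cs;
}
```

This is the same access pattern that batched BLAS routines (e.g. batched GEMM in GPU libraries) exploit, which is presumably why the colleague's workloads map onto it well.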

Hi Ray!
Thanks for sharing the idea and for creating this issue. We believe that your idea of batch processing is orthogonal to the idea of n-dimensional data structures. Therefore we have created issue #185 to preserve the idea and to make it possible to track the progress. Thanks again for sharing,
Best regards,
Klaus!

FWIW, we have started to work on implementing 3D data structures here: https://github.com/STEllAR-GROUP/blaze_tensor. This work is in its early stages at this point but will definitely be carried through to maturity. We would also be happy to contribute everything back to Blaze (given sufficient interest).
Regards Hartmut

Thanks Hartmut for this great addition to Blaze. Since coordinating our efforts will definitely take some time, I will initially add this as a Blaze project to the main page to make it known to as many Blaze users as possible. Thanks a lot!

reporter Dear Hartmut, we are still quite interested, but it's currently unclear whether we have spare resources. In the meantime I'll keep an eye on your work and send you lots of good karma :)
Hi Mario!
Thanks a lot for the suggestion. We agree that this is an important feature and therefore will address the issue in the near future.
Since you have a specific application in mind, could you please sketch which operations you would require for 3D arrays?
Best regards,
Klaus!