Hi all --
I am trying to improve the quality of the stereo output from our MultiSense SL/S7. We have flashed the most recent firmware and installed the latest ROS drivers. As you can see from the attached image, the depth uncertainty is very large even for well-textured scenes at a distance of only ~3 m from the surface. From the side view you can see large waves in the cloud, and the results get even worse for small objects.
I have some questions:
1. Is SGM used for the matching?
2. Why do the results get significantly worse when we increase the resolution?
3. Are the results in the attached images reasonable, or is something wrong with our setup?
4. Is there any way to improve the matching quality (even if it does not run in real time)?
- I have also attached a point cloud captured in the 2048x544_128 mode at 10 fps.
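For reference, here is the back-of-envelope calculation I used to sanity-check the depth quantization floor (the numbers are assumptions, not values from our calibration: baseline ~0.07 m, focal length ~1000 px, 1/16 px subpixel disparity step). The waves we see are much larger than this predicts, which is part of why I suspect something is off:

```python
def depth_error(z_m, f_px=1000.0, baseline_m=0.07, disp_step_px=1.0 / 16.0):
    """Approximate depth uncertainty from disparity quantization.

    From Z = f*B/d, a disparity step dd maps to a depth step
    dZ ~= Z^2 * dd / (f * B), so the error grows quadratically with range.
    All default parameters are assumed placeholders, not measured values.
    """
    return z_m * z_m * disp_step_px / (f_px * baseline_m)

for z in (1.0, 2.0, 3.0):
    print(f"Z = {z:.0f} m -> quantization floor ~ {depth_error(z) * 100:.2f} cm")
```

With these assumed numbers the floor at 3 m is under a centimeter, so the multi-centimeter waves in our clouds should not be explained by quantization alone.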
Thanks a lot!