Point Cloud quality and waves

Issue #60 new
Dimitrios Kanoulas created an issue

Hi all --

I am trying to improve the quality of the stereo vision on the MultiSense SL/S7. We have flashed the most recent firmware and ROS drivers. As you can see from the attached image, the uncertainty is very large even for textured scenes at a distance of ~3 m from the surfaces. From the side view you can see large waves in the cloud, and when the objects are small the results get even worse.
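
For context, here is a rough first-order error-propagation sketch of why we expect the depth noise to grow roughly with the square of the distance. The baseline, focal length, and disparity-noise numbers below are assumptions I picked for illustration, not S7 specifications:

```python
# Back-of-the-envelope stereo depth error: Z = f*B/d, so dZ ~= Z^2 / (f*B) * dd.
# Baseline, focal length, and disparity noise are illustrative assumptions,
# not MultiSense S7 specifications.
baseline_m = 0.07          # assumed stereo baseline in metres
focal_px = 1000.0          # assumed focal length in pixels at this resolution
disparity_noise_px = 0.25  # assumed sub-pixel matching noise

for depth_m in (1.0, 2.0, 3.0):
    disparity_px = focal_px * baseline_m / depth_m
    depth_err_m = depth_m ** 2 * disparity_noise_px / (focal_px * baseline_m)
    print("Z = %.1f m: disparity %.1f px, ~%.0f mm error for %.2f px of noise"
          % (depth_m, disparity_px, depth_err_m * 1000.0, disparity_noise_px))
```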

I have some questions:
1. Is SGM used for the matching?
2. Why do the results become much worse when we increase the resolution?
3. Are the results you see in the images reasonable, or is there something wrong with what we are doing?
4. Is there any way to improve the matching (even if it cannot run in anything close to real time)?

  • I have also attached a point cloud in the 2048x544_128 format, at 10 fps.

Thanks a lot!
Dimitrios

Comments (7)

  1. Chris Osterwood

    Dimitrios,

    I'm sorry that you're having quality issues with your S7. There are a few things I would recommend to improve 3D accuracy. But first, let me answer your questions.

    1. Yes, the matching algorithm is SGM. Our FPGA implementation uses a single pass, 4 direction approach which reduces system latency as a whole frame doesn't need to be buffered.
    2. Increased resolution can reduce 3D accuracy in some scenes. SGM propagates results from areas of high certainty (lower matching cost) into areas of lower certainty (higher matching cost) to increase the density of the resulting point cloud. Our implementation uses a fixed range for this propagation, so at higher resolutions the propagation is physically shorter. In some scenes this reduces 3D accuracy, while in others the additional resolution helps to resolve finer texture and increases 3D accuracy (see the offline sketch after this list for one way to observe the effect).
    3. Overall, these results seem reasonable. How far away were you from the panels and pipes?
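
    To make the resolution effect in (2) concrete, here is an offline sketch using OpenCV's StereoSGBM. It is not our FPGA implementation, but it shows the same kind of behaviour when pixel-unit settings are held fixed across resolutions; the file names and parameter values are assumptions, so adjust them for your data:

    ```python
    # Offline sketch only -- OpenCV's StereoSGBM, not the S7's FPGA SGM.
    # Running the same pixel-unit settings at two resolutions shows how the
    # effective (physical) reach of the matcher changes with resolution.
    import cv2
    import numpy as np

    left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

    def sgbm(l, r, num_disp=128, block=5):
        matcher = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=num_disp,   # must be a multiple of 16
            blockSize=block,
            P1=8 * block * block,      # SGM smoothness penalties
            P2=32 * block * block,
            uniquenessRatio=10,
            speckleWindowSize=100,
            speckleRange=2,
        )
        return matcher.compute(l, r).astype(np.float32) / 16.0  # fixed-point -> pixels

    half = lambda img: cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

    disp_full = sgbm(left, right, num_disp=128)
    disp_half = sgbm(half(left), half(right), num_disp=64)

    print("valid pixels at full res:", int(np.count_nonzero(disp_full > 0)))
    print("valid pixels at half res:", int(np.count_nonzero(disp_half > 0)))
    ```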

    The first parameter I would adjust to increase accuracy is the "stereo post filter". If you are using ROS, it is available via dynamic reconfigure. Increasing the filter value removes less certain results from the data stream. You will first see the draped edges of objects removed, and then areas of lower texture and higher matching cost.

    I would also recommend adjusting the overall scene brightness. The reflective tape on the panel/box looks over-exposed, and that is causing a loss of information in that region. Lowering the auto-exposure set point may improve this.

    Also, if the stereo head is moving, I would experiment with increasing the imager gain and reducing the maximum exposure time that the auto-exposure control loop is permitted to use. This reduces motion blur, which increases both accuracy and density.
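
    If it is easier to script these changes, they can also be made through dynamic_reconfigure. A minimal sketch follows; the "/multisense" server namespace and the parameter names in it are assumptions based on a typical multisense_ros install, so please confirm the exact names with rqt_reconfigure for your firmware/driver version:

    ```python
    #!/usr/bin/env python
    # Minimal dynamic_reconfigure sketch. The "/multisense" namespace and the
    # parameter names below are assumptions -- check rqt_reconfigure (or
    # `rosrun dynamic_reconfigure dynparam list`) for the exact names in your
    # multisense_ros version before relying on them.
    import rospy
    import dynamic_reconfigure.client

    rospy.init_node("multisense_tuning", anonymous=True)
    client = dynamic_reconfigure.client.Client("/multisense", timeout=10)

    client.update_configuration({
        "stereo_post_filtering": 0.8,    # raise to drop less-certain disparities
        "gain": 2.0,                     # higher gain permits shorter exposures
        "auto_exposure_max_time": 0.01,  # cap exposure time to limit motion blur
        "auto_exposure_thresh": 0.75,    # auto-exposure set point
    })
    ```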

    There is more information about these parameters in our documentation.

    Can you upload a bag file with the following topics?

    • /multisense/calibration/device_info
    • /multisense/calibration/raw_cam_cal
    • /multisense/calibration/left/disparity
    • /multisense/calibration/left/image_rect_color

    Only a few frames are needed. It would help us take a closer look at your scene and this S7.
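
    If it helps, a small Python sketch like the one below can grab a handful of messages per topic instead of running rosbag record by hand. The topic names are copied from the list above; please double-check them against rostopic list, since they can differ between driver versions:

    ```python
    #!/usr/bin/env python
    # Sketch: record a few messages from each requested topic into a bag using
    # rospy.AnyMsg, so no MultiSense message definitions are needed locally.
    # Topic names are copied from the list above; confirm them with `rostopic list`.
    import rospy
    import rosbag

    TOPICS = [
        "/multisense/calibration/device_info",
        "/multisense/calibration/raw_cam_cal",
        "/multisense/calibration/left/disparity",
        "/multisense/calibration/left/image_rect_color",
    ]
    MAX_PER_TOPIC = 10  # only a few frames are needed

    def main():
        rospy.init_node("multisense_snapshot", anonymous=True)
        bag = rosbag.Bag("multisense_snapshot.bag", "w")
        counts = {t: 0 for t in TOPICS}

        def make_callback(topic):
            def callback(msg):
                if counts[topic] < MAX_PER_TOPIC:
                    bag.write(topic, msg)  # AnyMsg preserves the raw serialization
                    counts[topic] += 1
            return callback

        subs = [rospy.Subscriber(t, rospy.AnyMsg, make_callback(t)) for t in TOPICS]
        rate = rospy.Rate(2)
        while not rospy.is_shutdown() and any(c < MAX_PER_TOPIC for c in counts.values()):
            rate.sleep()

        for sub in subs:
            sub.unregister()
        bag.close()

    if __name__ == "__main__":
        main()
    ```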

    Please let me know if you have any other questions.

    Best regards,

    Chris Osterwood

  2. Dimitrios Kanoulas reporter

    Hi Chris and thanks for the answer.

    The 'stereo post filter' did not seem to remove the waves in the cloud, and playing with the other parameters did not help either. Note that we want to keep the system as generic as possible with respect to sunlight, shadows, etc. in the scene.

    Have you ever tried any disparity post filtering such as: http://docs.opencv.org/3.1.0/d3/d14/tutorial_ximgproc_disparity_filtering.html#gsc.tab=0

    Even though it is only available in OpenCV 3.x, I was wondering whether you have ever tried it, in terms of both quality and time complexity.
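
    For reference, this is roughly how we would drive that filter offline on a rectified pair (it needs the OpenCV contrib build for cv2.ximgproc; the file names and the lambda/sigma values are just placeholders):

    ```python
    # Offline sketch of the ximgproc WLS disparity filter from the linked tutorial.
    # File names and the lambda/sigma values are placeholders, not tuned settings.
    import cv2
    import numpy as np

    left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

    left_matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)

    wls = cv2.ximgproc.createDisparityWLSFilter(left_matcher)
    wls.setLambda(8000.0)   # regularisation strength
    wls.setSigmaColor(1.5)  # sensitivity to edges in the guide image

    disp_left = left_matcher.compute(left, right)
    disp_right = right_matcher.compute(right, left)
    filtered = wls.filter(disp_left, left, None, disp_right)  # left image as guide

    # SGBM output is fixed-point (scaled by 16); normalise for a quick visual check
    vis = cv2.normalize(filtered.astype(np.float32) / 16.0, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("disparity_wls.png", vis.astype(np.uint8))
    ```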

    Please find attached the .bag with the data you requested.
