RealSense2 vs. RGBD VGA camera

Issue #295 resolved
Martin Dlouhy created an issue


Do you know what the motivation is for downgrading sensors with a wider field of view to the existing simulated RGBD VGA camera (60 deg FOV)? Note that this affects not only the robotika model but also other already-merged PRs:

Note that our motivation was to have sensors similar to the RealSense2 used in the System Track (Team Robotika, robots MOBOS, Maria, and Hermina).

Comments (12)

  1. Angela Maio

    Any sensors with new parameters have to be added to the documentation and become considerations as teams choose their robots. When evaluating sensor parameter changes, we considered whether the parameters could be supported by a real sensor datasheet, and whether the new sensor provided significant added value for competitors compared to existing sensors. Some sensor parameters were altered to match the existing sensors, to avoid expanding the number of sensor parameter considerations that competitors have to sort through, unless the new sensor provided significant added capability.

    Can you provide a link to the datasheet for the RealSense model you use and explain the added capability of the RGBD camera you submitted vs. the existing VGA RGBD?

  2. Martin Dlouhy reporter

    Yes, it was part of the pull request, and that is why I am surprised it was re-opened:

    or directly:

    Depth Field of View (FOV):
    87°±3° x 58°±1° x 95°±3°
    Depth Output Resolution & Frame Rate:
    Up to 1280 x 720 active stereo depth resolution. Up to 90 fps.

    Note that we used half resolution (640 x 360) to better match the existing 640 x 480 and to avoid simulation overhead.
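    As a sanity check on the quoted datasheet numbers, here is a minimal pinhole-model sketch (my own illustration, not part of the PR) showing that the diagonal FOV follows from the horizontal and vertical ones:

    ```python
    import math

    def diagonal_fov_deg(hfov_deg, vfov_deg):
        """Diagonal FOV of an ideal pinhole camera: the tangents of the
        half-angles add in quadrature across the rectangular image plane."""
        th = math.tan(math.radians(hfov_deg) / 2)
        tv = math.tan(math.radians(vfov_deg) / 2)
        return 2 * math.degrees(math.atan(math.hypot(th, tv)))

    # D435i depth datasheet values: 87 x 58 deg horizontal x vertical
    print(round(diagonal_fov_deg(87, 58), 1))  # prints 95.4
    ```

    The result (about 95.4 deg) is consistent with the quoted 95° ± 3° diagonal, so the three FOV values describe one coherent pinhole model.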

  3. Angela Maio

    I appreciate the link. Looks like the RealSense camera has two different fields of view for depth (87 deg) and RGB (69 deg). We have to consider which FoV to use for the RGBD model since it only supports a single FoV.

    Can you also explain the added capability provided by the new RGBD camera model vs. the existing VGA RGBD?

  4. Martin Dlouhy reporter

    These parameters correspond to the real camera we have on our robots. We would like to maintain the usefulness of the simulated environment for the development of real robots.

  5. Zbyněk Winkler

    We (robotika) would like to converge our System and Virtual solutions in the future (or at least keep them as close as possible). Having the same sensor set (real and simulated) is a necessary step. I believe that is the motivation for the Virtual Track, isn’t it? Having it as close to the System Track as possible?

    Also, the Intel RealSense D435i camera is a widely used sensor among the System Track participants - walking among the robots in Pittsburgh revealed it on many of them. It is fairly cheap (around $200) and, for its price, a very capable sensor.

    Having a real sensor behind the definition of the simulated one has its own benefits as well. We can readily answer questions like the ones we recently had about the parameters of the IMUs. If there is a question about some aspect of the sensor's performance, it can either be looked up in the datasheet or confirmed with a real-world test to establish what the correct behavior should be in the simulation. The same holds for RPLidars, Velodyne Pucks, etc. IMHO it would be great to approach RGBD cameras from the same angle.

  6. Angela Maio

    Thank you for the explanation. We recognize the value of modeling the real sensor as closely as possible and will accept the RealSense D435i as a new sensor parameter set. The sensor will be modeled using the 69-degree FoV from the RealSense D435i datasheet because it must apply to both RGB and depth data. I will make this change in PR #338.

  7. Martin Dlouhy reporter

    In other words, if we get the bigger limits we can restrict the RGB ourselves, but if we get the smaller ones, there is no workaround.
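    To make the "restrict RGB ourselves" option concrete, here is a small sketch (my own illustration, not code from either track) of how a wider image could be center-cropped down to a narrower FOV under an ideal pinhole model with no distortion:

    ```python
    import math

    def crop_width_for_fov(full_width_px, full_hfov_deg, target_hfov_deg):
        """Width in pixels of a centered crop that reduces an ideal pinhole
        image from full_hfov_deg down to target_hfov_deg."""
        # focal length in pixels implied by the full image width and its FOV
        fx = (full_width_px / 2) / math.tan(math.radians(full_hfov_deg) / 2)
        return round(2 * fx * math.tan(math.radians(target_hfov_deg) / 2))

    # Restricting a 640 px wide, 87 deg image to the 69 deg RGB FOV
    print(crop_width_for_fov(640, 87, 69))  # prints 464
    ```

    So a team given the wider 87 deg stream could recover the 69 deg view with a simple centered crop, whereas the reverse (widening a 69 deg stream) is impossible.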

  8. Angela Maio

    The lesser FoV was chosen to avoid a sensor model providing more data than its hardware counterpart. Wider depth FoVs are available with the lidar sensors, and wider camera FoVs are available with the new wide HD cameras.

  9. Zbyněk Winkler

    We primarily use the depth data from the D435i camera. Having a smaller FOV on the depth data makes it different from the real sensor and precludes us from using it the same way. If you must, a preferred solution would be to disable the RGB image and provide only the depth data, but with characteristics matching the real sensor. As you noted, the RGB image can be replaced by a separate RGB camera.

    Besides, lidar sensors are completely different beasts. They are expensive (which is reflected in the budget) and have low vertical resolution. They thus need to be handled very differently and are not a direct replacement for dense depth data.
