Create a scene generator

Issue #19 (new)
Adam Conkey created an issue

We want to be able to construct simulation scenes in Gazebo for arbitrary object meshes (YCB meshes to start with) and the equipment we have in our lab (robot, tables, cameras). This will support:

  1. Using pose data from a tracker or calibration on the real robot to recreate the same configuration in Gazebo. This allows testing different motions/strategies repeatedly on the scene before going live on the real robot. It also helps when training data collected in simulation will later be used during execution on the real robot.
  2. Random object configuration generation, e.g. placing a collection of objects randomly in clutter on the table; this will help automate data collection (a minimal sketch follows this list).
  3. If teleoperation via virtual environments is ever revisited, simulation scene generation from sensor data is precisely what is needed.
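
A rough sketch of what item 2 could look like in Python, using rejection sampling to keep sampled objects apart. The table bounds, surface height, and separation threshold are illustrative assumptions, not measured lab values:

```python
import math
import random

# Assumed tabletop bounds and height (meters) -- placeholders, not lab values.
TABLE_X = (-0.4, 0.4)
TABLE_Y = (-0.6, 0.6)
TABLE_Z = 0.76          # height of the table surface
MIN_SEPARATION = 0.10   # reject samples closer than this to placed objects

def sample_clutter_poses(num_objects, max_tries=1000):
    """Rejection-sample non-overlapping (x, y, z, yaw) poses on the tabletop."""
    poses = []
    tries = 0
    while len(poses) < num_objects and tries < max_tries:
        tries += 1
        x = random.uniform(*TABLE_X)
        y = random.uniform(*TABLE_Y)
        yaw = random.uniform(-math.pi, math.pi)
        # Keep the sample only if it is far enough from every placed object.
        if all(math.hypot(x - px, y - py) >= MIN_SEPARATION
               for px, py, _, _ in poses):
            poses.append((x, y, TABLE_Z, yaw))
    return poses
```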

Comments (1)

  1. Adam Conkey (reporter)

    We should use YAML files to store calibration data, as is done here. This will help integrate with existing scene description code, and YAML has the advantage that it can be easily loaded in both C++ and Python through an API, as well as through the ROS parameter server using the load functionality offered by launch files.

    I think it would be good to abstract this a bit from Gazebo so that we can also build rviz scenes if desired (good for paper visualizations). I would say either make a new package or re-purpose the scene description package (and maybe rename it to ll4ma_scene_generator).

    The package should handle mesh/pose data association and random pose generation generically, and then have a Gazebo-specific wrapper that uses the spawn capability in the Gazebo-ROS API to actually load the object models into a running simulation. It also needs to support API deletion of objects so the scene can be reset arbitrarily.
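
    A hedged sketch of what such a scene description YAML could look like. The schema, field names, and mesh path below are illustrative assumptions, not an existing format:

    ```yaml
    # Hypothetical scene description -- schema and paths are illustrative only.
    scene:
      objects:
        - name: cracker_box
          mesh: package://ycb_meshes/003_cracker_box/textured.dae
          pose:
            position: {x: 0.45, y: -0.10, z: 0.76}
            orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}
      cameras:
        - name: camera_1
          pose:
            position: {x: 1.20, y: 0.00, z: 1.50}
            orientation: {x: 0.0, y: 0.383, z: 0.0, w: 0.924}
    ```

    And a minimal sketch of the Gazebo-specific wrapper in Python, using the spawn/delete services that gazebo_ros exposes (/gazebo/spawn_sdf_model and /gazebo/delete_model); the function names here are placeholders:

    ```python
    import rospy
    from gazebo_msgs.srv import SpawnModel, DeleteModel
    from geometry_msgs.msg import Pose, Point, Quaternion

    def spawn_object(name, sdf_xml, pose):
        """Spawn an SDF model into the running Gazebo world."""
        rospy.wait_for_service('/gazebo/spawn_sdf_model')
        spawn = rospy.ServiceProxy('/gazebo/spawn_sdf_model', SpawnModel)
        spawn(model_name=name, model_xml=sdf_xml, robot_namespace='',
              initial_pose=pose, reference_frame='world')

    def delete_object(name):
        """Delete a spawned model so the scene can be reset."""
        rospy.wait_for_service('/gazebo/delete_model')
        delete = rospy.ServiceProxy('/gazebo/delete_model', DeleteModel)
        delete(model_name=name)

    # Example usage with a pose taken from the YAML description above:
    # pose = Pose(position=Point(0.45, -0.10, 0.76),
    #             orientation=Quaternion(0.0, 0.0, 0.0, 1.0))
    # spawn_object('cracker_box', open('cracker_box.sdf').read(), pose)
    ```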
