Significant delay (lag) when resolving point data into the odom frame using ground-truth odometry transform information

Issue #7 resolved
Bart Gallet created an issue


I am not sure if the title correctly summarizes the issue I am experiencing with the VLP-16 simulator, nor am I sure this issue really pertains to the VLP-16 per se.

I have a simulation of a differential drive robotic vehicle in Gazebo using the standard Gazebo ROS diff drive plugin. The diff drive plugin publishes a tf transform relating base_footprint to the odom frame, using the "world" setting (i.e. it does not use a simulated wheel encoder, but instead takes the ground-truth pose directly).
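For reference, a minimal sketch of the relevant plugin configuration (element names as in the ROS 1 gazebo_ros_diff_drive plugin; exact tags can vary between versions, and the frame names here are assumptions matching the setup described above):

```xml
<gazebo>
  <plugin name="diff_drive" filename="libgazebo_ros_diff_drive.so">
    <!-- "world": publish odom->base_footprint from the ground-truth
         world pose rather than integrating wheel joint states -->
    <odometrySource>world</odometrySource>
    <odometryFrame>odom</odometryFrame>
    <robotBaseFrame>base_footprint</robotBaseFrame>
    <leftJoint>left_wheel_joint</leftJoint>
    <rightJoint>right_wheel_joint</rightJoint>
  </plugin>
</gazebo>
```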

My simulated robot description has a VLP-16 attached on top of it (in an upright position), which is linked to the VLP-16 Gazebo simulator plugin.

When I visualize the point cloud in RViz with the reference frame set to odom and move the robot around (translating and rotating), I see a lot of motion in the point cloud, whereas I would expect the points not to move at all, since I am using the ground-truth odom-to-base_footprint transform. It is almost as if the data is lagging, or as if it takes a very long time to resolve the point cloud into the odom frame.

Even when I significantly thin out the data (5 Hz, 8 lasers, 100 samples), it makes no apparent difference.

When I replace the VLP-16 with a Hokuyo 2D laser scanner and leave everything else as is, I don't see any lag or similar issues. If I set the visualization decay time of the points to, say, 1000 seconds, I see no registration error at all - like perfect SLAM (which is to be expected when using ground-truth odometry data).

I have eliminated everything else, and this behavior occurs only when I use the VLP-16 simulator.

A follow-up question, maybe related: when looking at the tf tree, I notice that the velodyne_base_link and velodyne transforms are updated at 1000 Hz. My joint publisher runs at 50 Hz, so I am not sure where this high rate comes from - and I wonder whether this might be the culprit of my problem above?

Thanks in advance and looking forward to your response.


Comments (9)

  1. Max Schwarz


    I also encountered this bug. It is caused by a wrong timestamp: Gazebo's RaySensor generates a correctly stamped message that includes the scans. The ROS plugin, however (and this is also the case for the ROS block laser plugin), takes its timestamp from that message ("LastUpdateTime") but generates new data itself, so the data is newer than the stamp.
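    The effect of such a mismatched stamp can be illustrated with a tiny standalone calculation (hypothetical numbers and simplified 1-D geometry; this is not the plugin's actual code): if the stamp lags the data, the points are transformed with a stale robot pose, which is exactly the apparent motion seen in RViz.

    ```cpp
    #include <cstdio>

    // Simplified 1-D illustration (assumption: toy model, not Gazebo code).
    // A robot drives along x at constant speed. A lidar return at range
    // `range` to a static wall is resolved into the odom frame using the
    // robot's ground-truth pose at the message stamp.
    double resolveToOdom(double range, double stampTime, double speed) {
      double robotX = speed * stampTime;  // truth pose at the stamped time
      return robotX + range;              // point position in the odom frame
    }

    int main() {
      const double speed = 1.0;       // m/s
      const double range = 5.0;       // m, measured to a static wall at t = 10 s
      const double dataTime = 10.0;   // s, when the scan was actually taken
      const double staleStamp = 9.9;  // s, a lagging "LastUpdateTime"

      // Correct stamp: the wall resolves to its true odom position.
      double correct = resolveToOdom(range, dataTime, speed);
      // Stale stamp: same ranges paired with an older pose, so the point
      // appears shifted in odom -- the "lag" described in this issue.
      double stale = resolveToOdom(range, staleStamp, speed);

      std::printf("correct: %.2f m, stale: %.2f m, error: %.2f m\n",
                  correct, stale, correct - stale);
      return 0;
    }
    ```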

    I have a working fix here, and will prepare a pull request tomorrow.


  2. Kevin Hallenbeck

    Use Gazebo LaserScan message instead of direct LaserShape access

    Previously, we used the timestamp of the last LaserScan message (LastUpdateTime) together with newer data. Now the timestamp and the data are synchronized, since both come from the same LaserScanStamped message.

    This fixes #7.

    → <<cset 6acbf4869084>>

  3. Bart Gallet reporter

    That fixed it for me, thanks!

    I still notice that the TF tree shows the velodyne links being updated at a very high rate (10 kHz) compared to the 45 Hz of my state publisher, as shown in the tf tree below, which seems a little high to me, but I haven't had any issues with it so far.


  4. Kevin Hallenbeck

    I see the same thing in my robot's tf tree. The /tf topic is only published at 45 Hz, so the 10 kHz value appears to be an arbitrary frequency assigned to static transforms. This appears to be normal.
