neck_ry appears miscalibrated

Issue #4 resolved
Stephen Hart created an issue

The neck angle we read from the IHMC controller's joint_states message is offset ~45 degrees from reality. With the real robot looking slightly down (~10 deg), we read ~0.7 radians, resulting in artifacts like this:

neck_wrong.png

Is it possible there is a calibration step we need to perform (or a hard-coded calibration we need to make)?

Comments (17)

  1. DouglasS

    @swhart115 Hey Steve. We use the q_out encoders on the upper body instead of the input encoders (which is the reason for the arm behavior during initialization: we drive to the hard stops to find the clocking offsets). But we haven't yet found a reliable way to push the neck to a hard stop in the same fashion. It looks like we have in fact hard-coded our output-encoder offset for neck_ry, which will definitely differ from robot to robot. What if we exposed this offset as a ROS parameter that you could tune online until things look acceptable? The neck joint is very hard to calibrate using hard stops, mostly because even the mechanical stops are actually soft; we've spun the head around 180 degrees before.
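
    The hard-stop calibration Doug describes can be sketched roughly as follows. This is a hypothetical illustration, not IHMC's actual code; the stop angle, readings, and function names are all assumptions:

    ```python
    # Hypothetical sketch of hard-stop calibration (not IHMC's code):
    # drive the joint against a known mechanical stop, then the clocking
    # offset is the difference between the known stop angle and the raw
    # output-encoder reading at that stop.

    def clocking_offset(known_stop_angle_rad, raw_encoder_at_stop_rad):
        """Offset such that truth = raw + offset."""
        return known_stop_angle_rad - raw_encoder_at_stop_rad

    def corrected_angle(raw_encoder_rad, offset_rad):
        """Map a raw output-encoder reading to the true joint angle."""
        return raw_encoder_rad + offset_rad

    # Example: suppose the arm hard stop is known to be at 1.571 rad, but
    # the raw encoder reads 0.850 rad when pressed against it.
    offset = clocking_offset(1.571, 0.850)
    print(corrected_angle(0.850, offset))  # recovers ~1.571 at the stop
    ```

    The neck is the problem case precisely because its "hard stops" are soft, so `raw_encoder_at_stop_rad` can't be measured reliably; hence the hard-coded offset.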

  2. btshrewsbury NA

    When are you planning on shipping your robot out? We are working on the fix, but it probably won't be ready until tomorrow.

  3. Stephen Hart reporter

    We're packing up tomorrow morning.

    If it helps, the current neck offset that we use is -2.57877 radians (the mapping from neck output-side encoder to "truth"). Any way to just hard-code a version of the code with this in there?
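
    Applying that offset is just a fixed additive bias on the output-side encoder reading. A minimal sketch, assuming the convention truth = raw + offset (the sign convention is an assumption):

    ```python
    # Sketch of applying a per-robot neck offset; the sign convention
    # (truth = raw encoder reading + offset) is assumed for illustration.

    NECK_RY_OFFSET = -2.57877  # radians, the robot-specific value quoted above

    def neck_ry_truth(raw_encoder_rad):
        """Map the neck_ry output-side encoder reading to the true angle."""
        return raw_encoder_rad + NECK_RY_OFFSET

    # A raw reading of 2.75 rad would map to ~0.17 rad (~10 deg, looking
    # slightly down, as in the original report).
    print(round(neck_ry_truth(2.75), 5))  # -> 0.17123
    ```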

  4. Stephen Hart reporter

    @dljsjr We could totally hack the neck offset for now if you told us what your bias is.

  5. Stephen Hart reporter

    Great, we were able to hack these numbers together to get lined-up laser data. Unfortunately, we're now running into out-of-sync timestamps through the full pipeline. Although the control code and the head are now syncing up, the published topics seem to be out of sync with ROS time. This is causing problems in our map/cloud reconstruction because /tf is getting confused with extrapolation errors between joint data, laser data, and camera data. Ideally, all of the ROS topics coming out of the ihmc_msgs nodes would be in sync with each other; then our perception and control code should be happier.
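
    One way to quantify the kind of skew tf complains about is to compare header stamps across topics against a common reference clock. A hedged pure-Python sketch using plain float timestamps (the topic names, values, and function are illustrative assumptions):

    ```python
    # Hypothetical skew check (illustration only): compare message header
    # stamps from several topics against a reference clock and report the
    # largest offset. In a live system the stamps would come from each
    # message's header.stamp and the reference from rospy.Time.now().

    def max_skew(reference_time, header_stamps):
        """Largest absolute offset (seconds) of any stamp from the reference."""
        return max(abs(reference_time - t) for t in header_stamps.values())

    stamps = {
        "/tf": 1000.002,
        "/multisense/lidar_scan": 1000.010,
        "/atlas/outputs/rootPose": 999.998,
    }
    ros_now = 1000.000  # e.g. rospy.Time.now().to_sec()

    print(max_skew(ros_now, stamps))  # largest offset, here ~0.01 s
    ```

    tf extrapolation errors typically indicate gaps much larger than the communication delay alone, which is why a clock-source mismatch was the suspect here.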

  6. btshrewsbury NA

    @swhart115 Can you send us a photo to help us visualize the problem, or a rosbag we can use to reproduce it? The ROS data should be slightly in the past due to communication delays. We tried to set something up, but we couldn't see any large lag between wall time and published message times.

  7. Stephen Hart reporter

    @btshrewsbury It's nothing visual; RViz actually seems to do OK. It's just TF spew in the terminal. Unfortunately, we had to start packing the robot up to ship in the morning, so I might have missed my chance to get a rosbag. Sorry about that :/

  8. DouglasS

    @swhart115 Is there anything we can do to reproduce this? We have our robot for another day or two. We did a pretty rudimentary test by echoing /tf, /atlas/outputs/rootPose, and /multisense/lidar_scan, and all of them seemed to show the same clock source in their header stamps.

  9. Frank Weng

    @dljsjr The problem is that it does not align with ROS (linux) time.

    To see it from Python:

    import rospy

    rospy.init_node('test')
    print(rospy.Time.now())
    

    Or Linux time on the command line:

    $ date '+%s'
    
  10. DouglasS

    @GreatSuccess Are you running multiple roscores? Unless you are, or you have explicitly set the MultiSense's network sync rosparam to false, the head's clock should use the roscore's time for its timestamps. We can't resolve multi-roscore operation on our end; you would need to implement an NTP scheme. Also note that our code requires the head's network sync rosparam to be set to true.

  11. DouglasS

    @swhart115 @GreatSuccess

    Here are a few screenshots that I've taken. I'm still unable to reproduce your anomaly. You'll see, in order from top to bottom, the printout of the timestamps from /atlas/outputs/rootPose/header/stamp, /multisense/lidar_scan/header/stamp, and a test ROS node that is just constantly printing rospy.Time.now(). One screenshot shows network_time_sync set to false, and the other shows network_time_sync set to true.

    Network sync false: timing_net_sync_off.png

    Network sync true: timing_net_sync_on.png

  12. Frank Weng

    We do not explicitly set the network_time_sync param, but see results similar to the first screenshot.

    Too bad our Atlas is already in a box, so we won't be able to do full-system tests anymore.

  13. DouglasS

    @GreatSuccess That's probably what it is, then. I was under the impression that the heads default this parameter to true, but I could be wrong.

  14. DouglasS

    @GreatSuccess @swhart115 I will be closing this ticket for now, since we've clarified the network sync parameters, and since the new Atlas robot no longer has pre- and post-transmission encoders that require a calibration offset (it has switched to an electric neck). If this crops up again, please open a new bug report and we'll take a look.
