Orientation information of objects detected by the logical camera requires rotation correction

Issue #138 resolved
Mukunda Bharatheesha created an issue

Hello, my colleagues and I have been using many of the ARIAC resources while developing an online ROS course as part of the ROSIN project. I have a simplified environment derived from the Gilbreth Project.

I am using a logical camera in our simplified setup. I observe that when I transform the detected object pose into the "world" frame, the position is correct, but the orientation requires an "anti-rotation", i.e. the opposite of the rotation used to attach the logical camera to the world. The orientation returned by tf.transformPose() looks like how the world would have moved after applying all of the rotations to the logical camera, following the Euler angle conventions.

Example 1: If I use rpy=0 1.57 0 to attach my camera to world, then I need a static tf 0 0 0 0 1.57 0 logical_camera_frame correction_frame and apply this correction to the orientation returned from tf.transformPose().

Example 2: If I use rpy=0 1.57 -1.57 to attach my camera to world, then I need a static tf 0 0 0 -1.57 0 -1.57 logical_camera_frame correction_frame and apply this correction to the orientation returned from tf.transformPose().
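
For illustration, a correction of this kind could be applied roughly as follows. This is only a sketch using tf.transformations: the object quaternion below is made up, the angles follow Example 1, and whether the correction is pre- or post-multiplied depends on the convention in use.

    import tf.transformations as tft

    # Orientation returned by tf.transformPose() for the object (illustrative values only)
    q_returned = [0.0, 0.707, 0.0, 0.707]
    # Correction quaternion built from the same RPY used to attach the camera (Example 1)
    q_correction = tft.quaternion_from_euler(0, 1.57, 0)
    # Apply the correction to the returned orientation (multiplication order is convention-dependent)
    q_corrected = tft.quaternion_multiply(q_correction, q_returned)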

For the specific purpose of our course, it is sufficient to use only the position information. But I was wondering if I am making a mistake somewhere. If I understand the documentation of the logical camera correctly, I should simply be able to do tf.transformPose("world", object_pose_from_logical_camera) to get the correct pose (position and orientation) of the object in the "world" reference frame.

I do have a URDF/XACRO where I have the "world" link defined so that TF knows about it.

Any thoughts or ideas about this?

Thanks.

P.S.: I will classify the issue as an enhancement because I am not sure whether this is a bug.

Comments (2)

  1. Deanna Hood

    Your expectation seems reasonable to me. Would you be able to provide a code snippet and its output, and highlight at which point it diverges from what you expect? I am interested in knowing if the transform of the camera is correctly reported, including its orientation (it is, from what I remember). This would help determine whether it's an ARIAC bug or an issue with the TF usage.

    You haven't moved the camera after starting the simulation, have you? At least in the ARIAC setup, while the sensors can be repositioned in Gazebo manually, the TF info won't be updated accordingly; it will keep publishing the original pose of the sensor.

  2. Mukunda Bharatheesha reporter

    @d_hood: Thanks for your quick response.

    Your expectation seems reasonable to me. Would you be able to provide a code snippet and its output, and highlight at which point it diverges from what you expect?

    I first create a stamped pose from the pose reported by the logical camera:

      import rospy
      import geometry_msgs.msg

      # Stamped pose built from the pose reported by the logical camera
      object_pose = geometry_msgs.msg.PoseStamped()
      object_pose.header.stamp = rospy.Time.now()
      object_pose.header.frame_id = "logical_camera_frame"
      object_pose.pose.position.x = 0.979212911586
      object_pose.pose.position.y = 0.0518878349772
      object_pose.pose.position.z = -0.000142648833088
      object_pose.pose.orientation.x = 0.00311230854277
      object_pose.pose.orientation.y = -0.705856878983
      object_pose.pose.orientation.z = 0.00291335850153
      object_pose.pose.orientation.w = 0.708341649397
    

    Then, I wait for transform between world and logical_camera_frame:

    import tf

    tl = tf.TransformListener()
    tl.waitForTransform("logical_camera_frame", "world", rospy.Time.now(), rospy.Duration(1.5))
    

    Then, I transform the object pose to world frame with:

    object_world_pose = tl.transformPose("world", object_pose)
    

    The output of this is:

    object_world_pose
    header:
      seq: 0
      stamp:
        secs: 146
        nsecs: 974000000
      frame_id: "world"
    pose:
      position:
        x: 1.2006371246091319
        y: 1.8518878349772
        z: 1.0207872852961652
      orientation:
        x: 0.004260845703652504
        y: 0.0013588387159619896
        z: -0.00013898242153830917
        w: 0.9999899896674567
    

    Hmmm! As I write this, I realise this is indeed the correct pose of the object in the world reference frame because it does have the right position and orientation.

    My mistake was setting this pose as the target for the robot end effector! :( :( When I reported the issue, I was spawning conveyor objects with a yaw variance of pi/4 rad, so I couldn't immediately interpret the quaternions. Now that I have set the spawn yaw variance to zero, I see an immediately interpretable quaternion which is correct in the world reference frame! Sincere apologies for the noise!
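
    As a side note, a quaternion like the one above becomes much easier to read after converting it to roll/pitch/yaw, e.g. with tf.transformations; a quick (untested) sketch:

      import tf.transformations as tft

      # Orientation of the object in the world frame, as reported above
      q = [0.004260845703652504, 0.0013588387159619896,
           -0.00013898242153830917, 0.9999899896674567]
      # Convert to roll, pitch, yaw in radians for easier interpretation
      roll, pitch, yaw = tft.euler_from_quaternion(q)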

    Just for completeness:

    I am interested in knowing if the transform of the camera is correctly reported, including its orientation (it is, from what I remember).

    Yes, this is very much the case. TF correctly reports the camera transform; running rosrun tf tf_echo world logical_camera_frame gives the following output:

    - Translation: [1.200, 1.800, 2.000]
    - Rotation: in Quaternion [0.000, 0.707, 0.000, 0.707]
                in RPY (radian) [0.000, 1.570, 0.000]
                in RPY (degree) [0.000, 89.954, 0.000]
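
    For completeness, the same transform can also be queried programmatically with the same tl listener used above; a rough sketch:

      (trans, rot) = tl.lookupTransform("world", "logical_camera_frame", rospy.Time(0))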
    

    You haven't moved the camera after starting the simulation, have you? At least in the ARIAC setup, while the sensors can be repositioned in Gazebo manually, the TF info won't be updated accordingly; it will keep publishing the original pose of the sensor.

    No, I haven't moved the camera, but it's good to know that TF will not be updated if I do.

    Thanks once again!
