In the end, we were able to successfully use Baxter to move the HumanBot's arms. However, there are many improvements to be made, as mentioned at the end of the Design section. The final product relied on manual inputs to move the HumanBot's arms, but we were able to find the exact location of the aruco_tag, as shown in the next section.
The video first shows in RVIZ how Baxter's head_camera can target an aruco tag and get its pose; this appears in the bottom left corner of the screen. Using the pose of the tag, Baxter then plans its movement to the HumanBot, and in this demonstration Baxter lifts the HumanBot's arm into a T-pose position. After the movement is finished, the pose of the aruco_tag displayed in RVIZ (defined in Gazebo's world frame) matches the pose calculated in the terminal: the x value is identical, the y value is negated, and the z value differs by 0.93. This z offset arises because Baxter's base frame is raised 0.93 meters in z above the world origin.
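To make the frame discrepancy concrete, here is a minimal Python sketch (not the project's actual code) that applies the observed corrections to map a tag position from Baxter's base frame into Gazebo's world frame. It assumes the terminal pose is expressed in Baxter's base frame; the function name, constant, and example values are hypothetical.

```python
# Minimal sketch: map an aruco_tag position from Baxter's base frame into the
# Gazebo world frame using the offsets observed in the demo -- x unchanged,
# y negated, and z shifted by 0.93 m because Baxter's base frame sits 0.93 m
# above the world origin. (Illustrative only; not the project's actual code.)

BASE_Z_OFFSET = 0.93  # meters: height of Baxter's base frame above the world origin (assumed)


def base_to_world(x_base, y_base, z_base):
    """Convert a position from Baxter's base frame to the Gazebo world frame."""
    x_world = x_base                   # x axes agree between the two frames
    y_world = -y_base                  # y axis is flipped, per the observed negation
    z_world = z_base + BASE_Z_OFFSET   # account for the raised base frame
    return x_world, y_world, z_world


if __name__ == "__main__":
    # Hypothetical tag position calculated in the terminal (Baxter base frame)
    print(base_to_world(0.80, 0.25, -0.10))  # -> (0.80, -0.25, 0.83)
```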