The overall goal of the project was to correct a person's body position using Baxter. More specifically, our initial project proposal was to move someone's arms into the YMCA poses, but as we worked through the project we had to scale this back considerably. As a result, our goal became getting Baxter to move someone's arms into a T-Pose, as shown in the figure.
In order to complete our project we used Gazebo simulation software to model Baxter and the human participant. Because of the COVID-19 pandemic we were unable to complete any of this work in person. We created a poseable mannequin as a substitute for a human participant.
Accomplishing our project goals required two main components: estimating the mannequin's pose and planning how to move the mannequin's arms. To acquire the mannequin's pose, we used Baxter's head camera and ArUco markers. We chose ArUco markers because they make it possible to recover the three-dimensional pose of a tag from a single camera, they are fairly well documented, and libraries already exist to detect them. While ArUco tags may be a bit cumbersome for a user in a real-life scenario, they offer a simple, well-tested way to locate objects in 3D space using just one camera. In future iterations it may make sense to use a RealSense camera or IMUs for pose estimation.
Each aspect of the simulation/project design is further discussed below in the Project Details section.
There are several ways to get the pose of a person; a Kinect, IMUs, or other sensors might work. We decided to use ArUco tags from the OpenCV library. These are pre-defined markers that allow a single camera to find the frame of the tag relative to the camera in 3D space. Conveniently, there is already a ROS package for detecting ArUco tags, and in this design we used the aruco_detect package. By subscribing to the fiducial_transforms topic published by aruco_detect, we were able to get the frame of the ArUco tag on the mannequin's arm in the head_camera frame. This pose was then transformed into the base frame using a tf2 buffer.
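The snippet below is a minimal sketch of how that subscription and transform might look. The node name and logging are made up for illustration; the fiducial_transforms topic and FiducialTransformArray message come from aruco_detect, and 'base' is Baxter's root frame.

```python
#!/usr/bin/env python
# Sketch: read tag detections from aruco_detect and re-express them in
# Baxter's base frame. Node name and exact handling are illustrative.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PoseStamped with the tf2 buffer
from geometry_msgs.msg import PoseStamped
from fiducial_msgs.msg import FiducialTransformArray

class ArmPoseListener(object):
    def __init__(self):
        self.tf_buffer = tf2_ros.Buffer()
        self.tf_listener = tf2_ros.TransformListener(self.tf_buffer)
        rospy.Subscriber('fiducial_transforms', FiducialTransformArray,
                         self.callback)

    def callback(self, msg):
        for ft in msg.transforms:
            # Wrap the detected tag transform as a pose in the camera frame.
            pose = PoseStamped()
            pose.header = msg.header  # frame_id is the camera frame
            pose.pose.position.x = ft.transform.translation.x
            pose.pose.position.y = ft.transform.translation.y
            pose.pose.position.z = ft.transform.translation.z
            pose.pose.orientation = ft.transform.rotation
            try:
                # Re-express the tag pose in Baxter's base frame.
                pose_in_base = self.tf_buffer.transform(
                    pose, 'base', rospy.Duration(1.0))
                rospy.loginfo('Tag %d in base frame: %s',
                              ft.fiducial_id, pose_in_base.pose.position)
            except (tf2_ros.LookupException,
                    tf2_ros.ConnectivityException,
                    tf2_ros.ExtrapolationException):
                rospy.logwarn('TF lookup failed for tag %d', ft.fiducial_id)

if __name__ == '__main__':
    rospy.init_node('arm_pose_listener')
    ArmPoseListener()
    rospy.spin()
```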
HumanBot is the name that we gave the mannequin. HumanBot is defined by a urdf.xacro file. This file defines his links, joints, and physics.
HumanBot's arms are defined to move up and down so that they can be repositioned by Baxter. Additionally, as can be seen in the picture to the right, there are two ArUco tags on the mannequin's arms. These are modeled as fixed joints on the arms and allow the head_camera to detect each arm.
Finally, with this new pose from the aruco_detect package, we used the MoveIt package to handle path planning and collision detection. The main reason for using MoveIt rather than a plain inverse kinematics solver was that we had initially planned to add resistance to HumanBot's arm, and a path planner provides a degree of control over the motion as the robot continues to move the arm.
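The arm motion itself can then be commanded through MoveIt's Python interface. The sketch below shows roughly how a target pose could be sent to Baxter's arm; the left_arm planning group name comes from Baxter's standard MoveIt configuration, and the example pose values are placeholders rather than our actual T-Pose targets.

```python
#!/usr/bin/env python
# Rough sketch: ask MoveIt to plan and execute a collision-aware motion of
# Baxter's left arm to a target gripper pose expressed in the base frame.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

def move_left_arm_to(target_pose):
    group = moveit_commander.MoveGroupCommander('left_arm')
    group.set_pose_target(target_pose)  # desired end-effector pose
    group.go(wait=True)                  # plan (with collision checking) and execute
    group.stop()
    group.clear_pose_targets()

if __name__ == '__main__':
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('tpose_planner')
    # Placeholder target: roughly where the tag on the mannequin's arm should end up.
    target = Pose()
    target.position.x, target.position.y, target.position.z = 0.6, 0.4, 0.3
    target.orientation.w = 1.0
    move_left_arm_to(target)
```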
There were a few initial challenges that we faced when defining HumanBot. One issue was that we had difficulty attaching the pattern mesh to the ArUco tags; this was solved once we figured out how to properly point the URDF at the package where the mesh was located. Another problem was that HumanBot's arms would flail when he spawned, with no outside stimulus, and this persisted despite adjusting the standard parameters. It turned out that revolute joints defined in URDF files default to having no damping, so HumanBot's arms were essentially behaving like undamped oscillating springs. Once damping was added, HumanBot's arms behaved as expected.
Currently HumanBot has only a single shoulder joint and is not accurate to real human biomechanics, so it does not really allow Baxter to position the mannequin into many different types of poses. In the future it would be best to add an elbow joint and perhaps wrists.
At the moment the only poses HumanBot can really reach are a T-Pose and a Y-Pose. For future development it would be better to create several configurations that HumanBot could be placed in and let the user input which configuration they would like. This would be more applicable to a real-world setting where someone would like to do different exercises.
The current implementation of HumanBot has somewhat faulty physics as defined by its urdf.xacro file. When Baxter moves HumanBot's arm, the arm keeps its motion for quite some time, and if more joints were added in the future this problem would most likely become harder to deal with. It may be a good idea to start from an existing human model as a template for HumanBot.
MoveIt is a capable path planner because it can reliably find a solution to a target position. For this specific application, however, it would be a bit dangerous if the design were actualized in real life: some of the configurations that the MoveIt path planner comes up with might hit the user as the end-effector reaches its final pose, as evidenced by the mannequin being knocked over during simulation. It would be better to have full control over how the solutions to the end-effector's path plan are found.
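One direction we did not implement, but which would give more control over the approach, is to constrain the end-effector to a straight-line Cartesian path toward the mannequin's arm instead of accepting an arbitrary joint-space plan. A rough sketch using MoveIt's Cartesian path interface is below; it assumes a moveit_commander.MoveGroupCommander for Baxter's arm, and the 1 cm step size and 90% feasibility threshold are illustrative choices.

```python
# Sketch: constrain the gripper to a straight-line approach so it cannot
# swing through the user on the way to its final pose. Values are illustrative.
import copy

def cartesian_approach(group, target_pose):
    # Interpolate a straight line in Cartesian space from the current gripper
    # pose to the target, with 1 cm resolution; 0.0 disables the jump check.
    waypoints = [copy.deepcopy(target_pose)]
    plan, fraction = group.compute_cartesian_path(waypoints, 0.01, 0.0)
    if fraction > 0.9:  # execute only if most of the straight path is feasible
        group.execute(plan, wait=True)
    return fraction
```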