While assistive devices like powered wheelchairs have greatly enhanced the quality of life for individuals with disabilities, many still rely on caregivers for physical manipulation tasks like meal preparation and personal hygiene. Assistive robotic arms hold great promise in these domains.
However, control of a robotic arm nominally exists in a higher-dimensional space (6D) than that of a powered wheelchair (2D), which makes the control problem more complex. Teleoperating such a complex machine is already a challenge with standard 2- and 3-axis joysticks. With more limited control interfaces, like a switch-based head array or sip-and-puff, direct teleoperation becomes untenable. Our goal in this work is to make assistive robotic arms easily operable through such limited control interfaces, by introducing robotics autonomy.
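One common way a low-dimensional interface can drive a higher-dimensional arm is mode switching: the user controls one dimension at a time and presses a switch to cycle between control modes. The sketch below is purely illustrative of that general idea, not the lab's actual controller; all class and function names are hypothetical.

```python
# Hypothetical sketch of mode-switched teleoperation: a 1D input
# (e.g. sip-and-puff) drives a 6D arm velocity, one axis at a time.

MODES = [
    ("x", 0), ("y", 1), ("z", 2),          # translation axes
    ("roll", 3), ("pitch", 4), ("yaw", 5),  # rotation axes
]

class ModeSwitchedTeleop:
    def __init__(self):
        self.mode = 0  # index into MODES

    def switch_mode(self):
        """A single switch press advances to the next control dimension."""
        self.mode = (self.mode + 1) % len(MODES)

    def command(self, user_input):
        """Map a 1D input in [-1, 1] to a 6D velocity command."""
        cmd = [0.0] * 6
        _, axis = MODES[self.mode]
        cmd[axis] = user_input
        return cmd

teleop = ModeSwitchedTeleop()
teleop.switch_mode()        # now controlling y translation
print(teleop.command(0.5))  # [0.0, 0.5, 0.0, 0.0, 0.0, 0.0]
```

The cost of this scheme is exactly the burden motivating autonomy: reaching a single pose can require many mode switches, each an extra effortful action for the user.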
For this platform, the bulk of our research investigates how the autonomy should share control with the human and provide assistance. We have also developed perception algorithms that segment objects and identify human-like grasp locations.
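One widely used formulation of shared control in the assistive-robotics literature is to linearly blend the human's command with an autonomously generated assistance command. The sketch below shows that general scheme under assumed 6D velocity commands; it is not a description of the lab's specific method, and the example commands and blending weight are hypothetical.

```python
# Hypothetical sketch of linear command blending for shared control.

def blend(user_cmd, autonomy_cmd, alpha):
    """Blend two velocity commands element-wise.

    alpha = 0.0 gives full human control; alpha = 1.0 gives full autonomy.
    """
    assert len(user_cmd) == len(autonomy_cmd)
    return [(1 - alpha) * u + alpha * a
            for u, a in zip(user_cmd, autonomy_cmd)]

user = [0.2, 0.0, 0.0, 0.0, 0.0, 0.0]    # human's teleoperation command
auto = [0.1, 0.1, -0.05, 0.0, 0.0, 0.0]  # autonomy's reach-to-grasp command
print(blend(user, auto, 0.5))            # midpoint of the two commands
```

In practice the weight is often not fixed: it can grow with the autonomy's confidence in its prediction of the user's intent, so assistance ramps up only as the goal becomes clear.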
© argallab 2016