The Future of Robotics
Written by André Schiele (Guest Author)
January 26, 2015
The future of robotics is distributed. Any complex robot is a distributed set of modules and systems, some autonomous, some semi-autonomous, some human-controlled, all operating closely together as a single cohesive system of interoperating parts. In telerobotics we seek to teleoperate a robot while giving the operator a human sense of being where the robot is deployed, by feeding video and interaction forces back to the human operator.
For ESA this means in space or on a hostile planetary surface. The human operator may be on a space station or even back on Earth!
To do this we enable telepresence by combining haptic force feedback to the operator with augmented-reality video.
Providing both sight and touch feedback to the user enables telepresence. The more realistic and informative the feedback, the stronger the operator's sense of telepresence, and the better the remote robot can be operated.
The critical challenge for ESA in space teleoperation is the communication link. A robot contains many real-time control loops, but the one between the human and the remote robot in a teleoperation system is the most problematic. Several control loops run over a packet-switched network link that exhibits the very worst communication behaviors: it can disconnect at any time, latency is often measured in seconds and varies wildly, and packet loss is a regular occurrence. This is just as true for communication over the Internet as it is across space. Yet we have to send all the video over this link, time-correlated with robot control status and with control commands at both ends. The link status therefore has to become an integral part of the control loop, with enough information available at both ends for the operator to react appropriately and for the remote robot to keep itself in a safe and stable state. Link state has to become an integral part of system state.
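The idea of folding link state into the control loop can be sketched as a watchdog. This is a minimal toy illustration in plain Python, not ESA's actual code: each command carries a timestamp, and if the gap between updates exceeds a deadline, the remote side falls back to a safe hold state.

```python
import time

class LinkAwareController:
    """Toy sketch: the remote robot treats link health as part of its
    own state, holding position when commands stop arriving in time."""

    def __init__(self, deadline_s=0.5):
        self.deadline_s = deadline_s  # max tolerated gap between commands
        self.last_rx = None           # time the last command was received
        self.mode = "SAFE_HOLD"       # start safe until the link is proven

    def on_command(self, setpoint, now=None):
        """Called whenever a command arrives over the (lossy) link."""
        self.last_rx = now if now is not None else time.monotonic()
        self.mode = "TRACKING"
        return setpoint               # in a real system: feed the servo loop

    def tick(self, now=None):
        """Periodic local check: has the link gone quiet past the deadline?"""
        now = now if now is not None else time.monotonic()
        if self.last_rx is None or now - self.last_rx > self.deadline_s:
            self.mode = "SAFE_HOLD"   # freeze in place; link is considered down
        return self.mode
```

In a DDS system this pattern does not have to be hand-rolled: the standard Deadline and Liveliness QoS policies notify both ends when expected samples stop arriving, which is one reason DDS fits this problem so well.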
ESA’s development team is made up of mechatronics engineers, computer vision specialists, control theorists and a few software engineers. It was clear to us early on that we needed a common language for discussing system needs and interfaces, especially when delivering complex distributed systems spanning multiple computers and varying hardware and software platforms; we settled on data. We think about data as a means of understanding system state, of representing video frames and of expressing control theory. It therefore made sense to approach our development environment and system design data-centrically. Coupled with the critical need to understand link status as part of teleoperation, this led us inexorably to evaluate DDS (Data Distribution Service). NASA was already a heavy user of DDS in its space communication systems, but could DDS also enable teleoperation over such a challenged link, enabling real-time control over what is surely the most challenging of connections? This is new, unexplored scientific territory, and all the more exciting because of that. Initial indications are extremely encouraging.
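To make the data-centric view concrete, here is a hypothetical, much-simplified data model in plain Python. The field names are illustrative only, not ESA's actual types; in a DDS system these would be IDL-defined types published on topics. The point is that link health lives inside the system state, so control logic can react to it directly:

```python
from dataclasses import dataclass

@dataclass
class LinkStatus:
    round_trip_s: float     # measured round-trip delay, seconds
    loss_ratio: float       # fraction of samples lost in the last window
    connected: bool

@dataclass
class RobotState:
    joint_angles: tuple     # current joint positions, radians
    contact_force_n: float  # measured tool force, newtons
    link: LinkStatus        # link health is part of system state

def feedback_gain(state: RobotState) -> float:
    """Scale haptic feedback down as the link degrades, a common
    strategy for keeping a delayed force loop stable."""
    if not state.link.connected:
        return 0.0
    # attenuate with delay: full gain at 0.1 s, fading to zero by 2 s
    return max(0.0, min(1.0, (2.0 - state.link.round_trip_s) / 1.9))
```

Because everything the system knows, including the link itself, is expressed as data, the same publish/subscribe machinery that carries commands and video can also carry link health to every node that needs it.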
In this live demonstration at TEDx RheinMain RocketMinds, May 2014, I used our EXO-1 exoskeleton arm prototype to remotely control a robot arm physically located in our ESA Telerobotics & Haptics lab over 500 km away. My task was to pick up a metal pin and place it in a hole with a gap tolerance of less than 0.5 mm. The entire demo was run over an Internet-based 2G phone WAN connection because the planned 3G connection was unexpectedly unavailable! In the closed room, packed with people with phones in their pockets, bandwidth was severely limited, and the connection in fact behaved like a link in space, or actually much worse. It can't really get worse than that: teleoperating over a link with delays in the 15-second range and significant data loss. Still, DDS running over the 2G link managed to handle all the communication for the haptic feedback, the robot arm control loop and the video of the robot arm in the lab (streamed to both a tablet on my arm and cloned locally for display on the main TEDx screen).
It may not be in space yet, but precision telerobotic control over great distances and challenging datalinks, with vision and haptic feedback, is literally within our grasp, and we have demonstrated it to a wide audience.
UPDATE: The Haptics-1 experiment was conducted on board the International Space Station by NASA astronaut Barry Wilmore. A first pre-test was performed on 30 December 2014. Haptics-1 is the first ever robotic force-feedback experiment in space. Congratulations!
— ESATelerobotics (@ESATelerobotics) December 30, 2014
If you have a story about using Connext DDS that you'd like to share, email us at firstname.lastname@example.org.