Robotics

Sensing human touch with soft robots

Cornell researchers have created a low-cost method for soft, deformable robots to detect a range of physical interactions

March 14, 2021
The Scitech

Cornell researchers have created a low-cost method for soft, deformable robots to detect a range of physical interactions, from pats to punches to hugs, without relying on touch at all. Instead, a USB camera located inside the robot captures the shadow movements of hand gestures on the robot’s skin and classifies them with machine-learning software. The group’s paper was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. The paper’s lead author is doctoral student Yuhan Hu.

The new ShadowSense technology is the latest project from the Human-Robot Collaboration and Companionship Lab, led by the paper’s senior author, Guy Hoffman, associate professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering. Rather than installing a large number of contact sensors – which would add weight and complex wiring to the robot, and would be difficult to embed in a deforming skin – the team took a counterintuitive approach. In order to gauge touch, they looked to sight. “By placing a camera inside the robot, we can infer how the person is touching it and what the person’s intent is just by looking at the shadow images,” Hu said. “We think there is interesting potential there, because there are lots of social robots that are not able to detect touch gestures.”

The prototype robot, designed by the Collective Embodied Intelligence Lab of Kirstin Petersen, assistant professor of electrical and computer engineering, consists of a soft inflatable bladder of nylon skin stretched around a cylindrical skeleton, roughly four feet tall, that is mounted on a mobile base. Under the robot’s skin is a USB camera, which connects to a laptop. The researchers developed a neural-network-based algorithm that uses previously recorded training data to distinguish between six touch gestures – touching with a palm, punching, touching with two hands, hugging, pointing and not touching at all – with an accuracy of 87.5% to 96%, depending on the lighting.
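To make the classification step concrete, here is a minimal sketch of how a shadow-gesture classifier might look. It is an illustrative assumption, not the network from the paper: the architecture, the 128x128 input size, and the string labels are placeholders, and only the six gesture categories come from the article.

```python
import torch
import torch.nn as nn

# The six gesture categories from the article; these string labels are our own.
GESTURES = ["palm_touch", "punch", "two_hand_touch", "hug", "point", "no_touch"]

class ShadowClassifier(nn.Module):
    """Small CNN mapping one grayscale shadow frame to one of six gestures."""
    def __init__(self, num_classes: int = len(GESTURES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),  # single channel: shadow image
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool down to a 64-dim feature
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Classify a single preprocessed frame from the internal USB camera
# (a random tensor stands in here for a real 128x128 grayscale frame).
model = ShadowClassifier().eval()
frame = torch.rand(1, 1, 128, 128)
with torch.no_grad():
    probs = torch.softmax(model(frame), dim=1)
print("predicted gesture:", GESTURES[int(probs.argmax())])
```

In practice such a network would be trained on the lab’s recorded shadow images with a standard cross-entropy loss; the reported 87.5% to 96% accuracy describes the researchers’ own model and data, not this sketch.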

The robot can be programmed to respond to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker. And the robot’s skin has the potential to be turned into an interactive screen. In addition to providing a simple solution to a complicated technical challenge, and making robots more user-friendly to boot, ShadowSense offers a comfort that is increasingly rare in these high-tech times: privacy. Because the camera sees only the shadows a person casts on the skin, rather than a direct image of the person, the robot can sense interactions without recording identifiable video.
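As a sketch of how detected gestures could drive behavior, a simple dispatch table can pair each predicted label with a response. The gesture names match the classifier sketch above, and the actions are hypothetical stand-ins for the article’s examples of rolling away and speaking through a loudspeaker:

```python
# Hypothetical gesture-to-action mapping; only "roll_away" and the loudspeaker
# response are behaviors mentioned in the article.
def respond(gesture: str) -> str:
    actions = {
        "punch": "roll_away",            # retreat from an aggressive gesture
        "hug": "play_greeting_audio",    # friendly message over the loudspeaker
        "palm_touch": "pulse_skin_display",
        "point": "turn_toward_user",
    }
    return actions.get(gesture, "idle")  # no touch / unknown gesture: do nothing

print(respond("hug"))  # -> play_greeting_audio
```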

(Source: Cornell University news release)