Human Reflexes Help MIT's HERMES Rescue Robot Keep Its Footing

29 May 2019
A sudden, tragic wake-up call: That’s how many roboticists view the Fukushima Daiichi nuclear accident, caused by the massive earthquake and tsunami that struck Japan in 2011. Reviews following the accident described how high levels of radiation foiled workers’ attempts to carry out urgent measures, such as operating pressure valves. It was the perfect mission for a robot, but none in Japan or elsewhere had the capabilities to pull it off. Fukushima forced many of us in the robotics community to realize that we needed to get our technology out of the lab and into the real world.
 
Disaster-response robots have made significant progress since Fukushima. Research groups around the world have demonstrated unmanned ground vehicles that can move over rubble, robotic snakes that can squeeze through narrow gaps, and drones that can map a site from above. Researchers are also building humanoid robots that can survey the damage and perform critical tasks such as accessing instrumentation panels or transporting first-aid equipment.
 
But despite these advances, building robots that have the motor and decision-making skills of emergency workers remains a challenge. Pushing open a heavy door, discharging a fire extinguisher, and other quick but forceful tasks require a level of coordination that robots have yet to master.
 
One way of compensating for this limitation is tele-operation — having a human operator remotely control the robot, either continuously or during specific tasks, to help it accomplish more than it could on its own.
 
Tele-operated robots have long been used in industrial, aerospace, and underwater settings. More recently, researchers have experimented with motion-capture systems to transfer a person’s movements to a humanoid robot in real time: You wave your arms and the robot mimics your gestures. For a fully immersive experience, special goggles can let the operator see what the robot sees through its cameras, and a haptic vest and gloves can provide tactile sensations to the operator’s body.
 
At MIT’s Biomimetic Robotics Lab, our group is pushing this melding of human and machine even further, in hopes of accelerating the development of practical disaster robots. With support from the Defense Advanced Research Projects Agency (DARPA), we are developing a telerobotic system that has two parts: a humanoid capable of nimble, dynamic behaviors, and a new kind of two-way human-machine interface that sends your motions to the robot and the robot’s motions to you. So if the robot steps on debris and starts to lose its balance, the operator feels the same instability and instinctively reacts to avoid falling. We then capture that physical response and send it back to the robot, which helps it avoid falling, too. Through this human-robot link, the robot can harness the operator’s innate motor skills and split-second reflexes to keep its footing.
 
You could say we’re putting a real human brain inside the machine.
 
Future disaster robots will need a great deal of autonomy. Someday, we want to be able to send a robot into a burning building to search for victims all on its own, or place a robot at a damaged industrial facility and have it figure out which valve it needs to shut off. We’re nowhere near that level of capability. Hence the growing interest in tele-operation.
 
The DARPA Robotics Challenge in the United States and Japan’s ImPACT Tough Robotics Challenge are among the recent efforts that have demonstrated the possibilities of tele-operation. One reason to have humans in the loop is the unknown nature of a disaster scene. Navigating these chaotic environments demands a degree of adaptability that current artificial-intelligence algorithms can’t yet achieve.
 
For example, if an autonomous robot encounters a door handle but can’t find a match in its database of door handles, the mission fails. If the robot gets its arm stuck and doesn’t know how to free itself, the mission fails. Humans, on the other hand, can readily deal with such situations: We can adapt and learn on the fly, and we do that on a daily basis. We can discern variations in the shapes of objects, cope with poor visibility, and even figure out how to use a new tool on the spot.
 
The same goes for our motor skills. Consider running with a heavy backpack. You may run slower or not as far as you would without the extra weight, but you can still carry out the task. Our bodies can adapt to new dynamics with surprising ease.
 
The tele-operation system we are developing is not intended to replace the autonomous controllers that legged robots use to self-balance and perform other tasks. We’re still equipping our robots with as much autonomy as we can. But by coupling the robot to a human, we take advantage of the best of both worlds: robot endurance and strength combined with human versatility and perception.
 
Our lab has long explored how biological systems can inspire the design of better machines. A particular limitation of existing robots is their inability to perform what we call power manipulation—strenuous feats like knocking a chunk of concrete out of the way or swinging an axe into a door. Most robots are designed for more delicate and precise motions and gentle contact.
 
We designed our humanoid robot, called HERMES (for Highly Efficient Robotic Mechanisms and Electromechanical System), specifically for this type of heavy manipulation. The robot is relatively light—weighing in at 45 kilograms—and yet strong and sturdy. Its body is about 90 percent of the size of an average human, which is big enough to allow it to naturally maneuver in human environments.
 
Instead of using regular DC motors, we built custom actuators to power HERMES’s joints, drawing on years of experience with our Cheetah platform, a quadruped robot capable of explosive motions such as sprinting and jumping. The actuators consist of brushless DC motors coupled to planetary gearboxes—so called because their three “planet” gears revolve around a “sun” gear—and they can generate a large amount of torque for their weight. The robot’s shoulders and hips are actuated directly, while its knees and elbows are driven by metal bars connected to the actuators. This makes HERMES less rigid than other humanoids and able to absorb mechanical shocks without its gears shattering to pieces.
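To get a feel for the torque amplification such a gearbox provides, here is a back-of-the-envelope sketch. The tooth counts, motor torque, and efficiency figure are made-up illustrative numbers, not HERMES’s actual specifications.

```python
# Torque amplification through a planetary gearbox (sun gear driven,
# ring gear fixed, planet carrier as output). In that configuration
# the reduction ratio is 1 + ring_teeth / sun_teeth.
# All numbers below are illustrative, not HERMES's actual specs.

def planetary_reduction(sun_teeth: int, ring_teeth: int) -> float:
    """Gear ratio with the sun driven, ring fixed, carrier as output."""
    return 1 + ring_teeth / sun_teeth

def output_torque(motor_torque_nm: float, sun_teeth: int, ring_teeth: int,
                  efficiency: float = 0.97) -> float:
    """Torque at the carrier, allowing for gear-mesh losses."""
    ratio = planetary_reduction(sun_teeth, ring_teeth)
    return motor_torque_nm * ratio * efficiency

# Example: a motor producing 2 N·m through a 5:1 planetary stage.
print(output_torque(2.0, sun_teeth=20, ring_teeth=80))  # ~9.7 N·m
```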
 
The first time we turned HERMES on, it was still just a pair of legs. The robot couldn’t even stand on its own, so we suspended it from a harness. As a simple test, we set its left leg to kick. We grabbed the first thing we found lying around the lab—a plastic trash can—and placed it in front of the robot. It was satisfying to see HERMES kick the trash can across the room.
 
The human-machine interface we built for controlling HERMES is different from conventional ones in that it relies on the operator’s reflexes to improve the robot’s stability. We call it the balance-feedback interface, or BFI.
 
The BFI took months and multiple iterations to develop. The initial concept bore some resemblance to the full-body virtual-reality suits featured in the 2018 Steven Spielberg movie Ready Player One. That design never left the drawing board. We realized that physically tracking and moving a person’s body—with its more than 200 bones and 600 muscles—isn’t a straightforward task, so we decided to start with a simpler system.
 
To work with HERMES, the operator stands on a square platform, about 90 centimeters on a side. Load cells measure the forces on the platform’s surface, so we know where the operator’s feet are pushing down. A set of linkages attaches to the operator’s limbs and waist (the human body’s center of mass, basically) and uses rotary encoders to accurately measure displacements to within less than a centimeter. But some of the linkages aren’t just for sensing: They also have motors in them, to apply forces and torques to the operator’s torso. If you strap yourself to the BFI, those linkages can apply up to 80 newtons of force to your body, which is sufficient to give you a good shove.
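As a rough sketch of how the platform’s load cells can be turned into a useful signal, the code below estimates the operator’s center of pressure from four corner load cells on the 90-centimeter platform. The corner layout and function names are assumptions for illustration, not the actual BFI software.

```python
# Estimating the operator's center of pressure (CoP) on a 0.9 m square
# platform from four corner load cells. The CoP is the force-weighted
# average of the corner positions. Layout and names are illustrative.

PLATFORM_SIDE = 0.9  # meters

# Corner positions (x, y) relative to the platform center, in meters.
CORNERS = [(-0.45, -0.45), (0.45, -0.45), (0.45, 0.45), (-0.45, 0.45)]

def center_of_pressure(forces):
    """Return (x, y) of the CoP given four vertical force readings."""
    total = sum(forces)
    if total <= 0:
        return None  # operator is not on the platform
    x = sum(f * cx for f, (cx, _) in zip(forces, CORNERS)) / total
    y = sum(f * cy for f, (_, cy) in zip(forces, CORNERS)) / total
    return (x, y)
```

Feeding the CoP trajectory to the controller tells it where the operator’s weight is shifting from instant to instant.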
 
We set up two separate computers for controlling HERMES and the BFI. Each computer runs its own control loop, but the two sides regularly exchange data. At the start of each loop, HERMES collects data about its posture and compares it with the data received from the BFI about the operator’s posture. Based on the difference, the robot adjusts its actuators and then immediately sends its new posture data to the BFI. The BFI then carries out a similar control loop to adjust the operator’s posture. This process repeats 1,000 times per second.
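The exchange can be sketched as a simple proportional correction running symmetrically on each side. The posture fields, the feedback law, and the gain below are assumptions for illustration; the real controllers are far more sophisticated.

```python
# Sketch of one iteration of the bilateral loop: either side compares
# its own posture with the one received from the other side and nudges
# itself toward it. Fields and gain are illustrative assumptions.

DT = 0.001   # loop period: 1,000 iterations per second
GAIN = 0.5   # illustrative proportional feedback gain

def loop_step(own_posture, received_posture):
    """One control-loop iteration on either side of the link.

    Returns the corrected posture, which is then immediately sent to
    the other side for its mirror-image loop.
    """
    return {key: own + GAIN * (received_posture[key] - own)
            for key, own in own_posture.items()}
```

Because the same step runs on both computers, a disturbance felt by either the robot or the operator propagates to the other side within a millisecond.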
 
To allow the two sides to operate at such fast rates, we had to condense the information they exchange. For example, rather than sending a detailed representation of the operator’s posture, the BFI sends only the position of the person’s center of mass and the relative position of each hand and foot. The robot’s computer then scales these measurements to the proportions of HERMES, which reproduces that reference posture. As in any other two-way teleoperation loop, this coupling can cause oscillation or instability. We minimized the problem by fine-tuning the scaling factors that map the posture of the human to that of the robot.
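A minimal sketch of that proportional scaling, assuming the condensed message is a dictionary of 3D points and using the robot’s roughly 90 percent human size as the scale factor:

```python
# Scaling the condensed posture message (center of mass plus the
# relative position of each hand and foot) from operator to robot
# dimensions. The 0.9 factor follows from HERMES being about 90
# percent of human size; the message layout is an assumption.

SCALE = 0.9

def scale_posture(operator_msg: dict) -> dict:
    """Proportionally scale every 3D point in the posture message."""
    return {key: tuple(SCALE * coord for coord in point)
            for key, point in operator_msg.items()}
```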
 
To test the BFI, one of us (Ramos) volunteered to be the user. After all, if you’ve created the core parts of the system, you’re probably best equipped to debug it.
 
In one of the first tests, we ran an early balancing algorithm for HERMES to see how human and robot would behave when coupled together. In the test, one of the researchers used a rubber mallet to strike HERMES on its upper body. With every blow, the BFI exerted a similar jolt on Ramos, who reflexively shifted his body to regain balance, causing the robot to catch itself as well.
 
Up to this point, HERMES was still just a pair of legs and a torso, but we eventually completed the rest of its body. We built arms that use the same actuators as the legs, and hands made out of 3D-printed parts reinforced with carbon fiber. The head features a stereo camera for streaming video to a headset worn by the operator. We also added a hard hat, just because.
 
In another round of experiments, we had HERMES punch through drywall, swing an axe against a board, and, with oversight from the local fire department, put out a controlled blaze using a fire extinguisher. Disaster robots will need more than just brute force, though, so HERMES and Ramos also performed tasks that demand more dexterity, like pouring water from a jug into a cup.
 
In each case, as the operator simulated performing the task while strapped to the BFI, we observed how well the robot mirrored those actions. We also looked at the scenarios in which the operator’s reactions could help the robot the most. When HERMES punched the drywall, for instance, its torso rebounded backward. Almost immediately, a corresponding force pushed the operator, who reflexively leaned forward, helping HERMES adjust its posture.
 
We were ready for more tests, but we knew that HERMES is too big and powerful for many of the experiments we wanted to do. Although a human-scale machine lets you carry out realistic tasks, it is also time-consuming to operate, and it demands lots of safety precautions — it’s wielding an axe! Attempting more dynamic behaviors, or even walking, proved difficult. We decided HERMES needed a little sibling.
 
Little HERMES is a scaled-down version of HERMES. Like its big brother, it uses custom high-torque actuators, which are mounted close to the body rather than on the legs. This arrangement allows the legs to swing much faster. For a more compact design, we cut the number of axes of motion—or degrees of freedom, in robotic parlance—from six to three per limb, and we replaced the original two-toed feet with simple rubber spheres, each with a three-axis force sensor tucked inside.
 
Connecting the BFI to Little HERMES required some changes. There’s a big difference in scale between a human adult and this smaller robot, and when we tried to link their movements directly—mapping the position of the human’s knees to the robot’s knees, and so forth—the result was jerky motion. We needed a different mathematical model to mediate between the two systems. The model we came up with tracks parameters such as ground contact forces and the operator’s center of mass. It captures a sort of “outline” of the operator’s intended motion, which Little HERMES is able to execute.
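One simple way to picture such an “outline” mapping is to normalize the operator’s motion by body size and weight and rescale it to the robot. The masses, heights, and the linear scaling below are illustrative assumptions, not the actual model.

```python
# Illustrative "outline" mapping between operator and robot: instead
# of copying joint angles, scale center-of-mass motion by the ratio of
# nominal CoM heights, and scale ground-reaction forces by the weight
# ratio. All numbers are made up for illustration.

HUMAN = {"mass_kg": 70.0, "com_height_m": 1.0}
ROBOT = {"mass_kg": 9.0, "com_height_m": 0.5}

def map_com(human_com):
    """Scale the operator's CoM displacement to robot proportions."""
    k = ROBOT["com_height_m"] / HUMAN["com_height_m"]
    return tuple(k * v for v in human_com)

def map_contact_force(human_force_n):
    """Scale a ground-reaction force by the weight ratio."""
    return human_force_n * ROBOT["mass_kg"] / HUMAN["mass_kg"]
```

Because the mapping works in these normalized quantities rather than joint angles, a human step and a robot step can look quite different in detail while expressing the same balancing intent.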
 
In one experiment, we had the operator step in place, slowly at first and then faster. We were happy to see Little HERMES marching in just the same way. When the operator hopped, Little HERMES jumped too.
 
In a sequence of photos we took, you can see both human and robot in midair for a brief instant. We also placed pieces of wood underneath the robot’s feet as obstacles, and the robot’s controller was able to keep the robot from sliding.
 
Much of this was still preliminary work, and Little HERMES wasn’t freely standing or able to walk around. A supporting pole attached to its back prevented it from tipping forward. At some point, we’d like to develop the robot further and set it loose to amble around the lab and perhaps even outdoors, as we’ve done with Cheetah and Mini Cheetah (yes, it too has a little sibling).
 
Our next steps involve addressing a host of challenges. One of them is the mental fatigue an operator experiences after using the BFI for extended periods or for tasks that require a lot of concentration. Our experiments suggest that when you have to command not only your own body but also a machine’s, your brain tires quickly. The effect is especially pronounced for fine-manipulation tasks, such as pouring water into a cup. After repeating the experiment three times in a row, the operator had to take a break.
 



This article was originally posted on Tronserve.com
