The following article represents one person's view of man's role in the future. Keep in mind that this article was published 4 years ago. That's a long time by today's standards.
Why Robots Still Need Us
by Hope Reese, 10-13-2015
The rise of smart machines is impossible to deny—with driverless cars, self-checkout stations, and drone deliveries, more and more tasks are becoming automated. But while many are troubled by this rapid integration of machines into the workplace, MIT professor David A. Mindell sees it as a largely positive development.
Author of Our Robots, Ourselves: Robotics and the Myths of Autonomy, Mindell draws on more than twenty years of experience with robots to argue that humans will remain in charge of the machines. Mindell is a professor of aeronautics and astronautics at MIT and has worked as an engineer in far corners of the globe, on land and deep undersea, seeing firsthand how robots can gather material and perform tasks that no human can.
But while our tasks may be forced to shift in the future, Mindell sees humans remaining at the helm of operations. Humans, he believes, serve a vital role in analyzing and interpreting data. So instead of worrying about how robots will replace humans, he says, it is the relationship between human and machine that counts.
Mindell now runs a company called Humatics that engineers systems to be coupled with human environments, applying some of the lessons he learned undersea to maritime, aviation, and other settings. TechRepublic spoke with Mindell about how he sees robots threatening our sense of identity, why he doesn't believe in full autonomy, and how humans and robots will work together in the future.
What myths do we have about these machines?
There are three main myths. One is the myth of replacement. That you can have a person standing in an assembly line, put a robot in their place, and the robot does that same job. Then there’s the myth of linear progress. That somehow we are moving from human tasks, to remote tasks, to full autonomy. That it’s the natural progression of the technology. The final myth is that full autonomy is the highest level of the technology. That machines act independently in response to the environment.
You don’t think that robots will ever act autonomously?
There's really no such thing as full autonomy. You can always find a wrapper of human intention and direction around even the most autonomous robots. It's always interacting with the human world in some way or another. The highest expressions of the technologies are the ones that work most deeply and fluidly with human beings. That's the challenge that we're going to be setting our minds to.
Extreme environments have been sites of great progress in our use of robots. How does context matter in innovation?
I started working in the deep ocean, and I don't think there is any fundamental difference in the dynamics there, certainly with respect to human beings and social dynamics. The technologies are very similar to what they would be in your office, or your kitchen, or your automobile. But extreme environments have been forced to adopt robotics 10, 20, 30 years before some of the more ordinary environments, just because they have to. There's no other way to explore the deep ocean. There's no other way to explore Mars right now.
We can look at these extreme environments as laboratories, or little dramas. They are easier to study because they don't have all the mess and busyness of daily life. Out on the Moon you have a few people and a few robots. You can tell a good story based on that. Even on the Moon. Even on Mars. Even with the New Horizons mission to Pluto that was in the news this summer. You can see the human context for these autonomous robots and how the autonomy is shot through with human design and intention. If it's true there, it's going to be just as true in a factory, or on a highway, or in an operating room where there are a lot more people physically around.
You talk about the “cultural significance of the body”—the importance we place on having experiences in a physically tangible way. Can you elaborate?
While these robots don't usually replace people one-for-one, they do move people around, through networks. You can sit in Houston and explore the Moon. You can sit on the surface of the ocean and explore the bottom of the ocean. You can sit in an auditorium somewhere and experience another planet. I don't trivialize that change. It's still an important change, because where your body is matters. Warfare is the classic example. We have remote warriors operating from trailers in the desert outside Las Vegas or in other places, flying missions and, in some cases, shooting and killing from thousands of miles away. Cognitively, they are more engaged in what's going on than an airplane pilot who's flying directly over the battlefield. The body being at risk is something that matters in the cultural construction of warfare. Remote operation really changes these warriors' status in the world in ways that we haven't even figured out yet.
It's also partly generational. People under 40 may not be persuaded that remote experience is inferior to real experience. We've come a long way with remote experience in the last couple of decades, and we are only going further.
People worry that robots will replace them. Will only the very smartest people have jobs?
I don’t think only the smartest people will have jobs. But we need to keep thinking about what people do that’s special. What that is changes from decade to decade. Today people are good at interpreting imagery. They’re certainly good at drawing a lot of different judgments together. They’re good at relating to other people.
We're pretty close to a world where one of the main functions of an airline pilot is to couple the passengers to the system and give them confidence that there's a human there to work through problems. That's a very important job, right? We're going to need people where there are people.
How can we start rethinking our roles at work, in light of this?
It's not so much rethinking as thinking more accurately. You often hear, "Well, people are the biggest problem. All our train crashes are due to human error. All our airplane crashes are due to human error. Let's get rid of the people." That is a misunderstanding of the accident record.
Most of the time when there are human errors, there are also other errors in the system. Often, the system wasn't designed in a way that prevents errors, or the labor system was pushing people to work too many hours. At the same time, you hear that people are the weak link in the system. But that framing makes invisible all of the places where humans are the strong links. The reason they are invisible is that they prevent accidents, so their work often goes unnoticed.
What about the Air France crash? Why should we think of it as more than a case of human error?
The Air France crash was very tragic. The airliner hit a little bit of rough weather, and the tubes that sense airspeed iced up, which is a problem, but not a catastrophic one. The computer flying that airplane had been told that if it couldn't get data from those tubes, it had to check out and fail altogether. So the computer checked out and handed control of the airliner to the crew members in a very challenging situation. They were probably a little fatigued and a little distracted. They had to try to get themselves back into the situation, understand what was going on, and troubleshoot the problem quickly.
Ironically, after about a minute the icing that had caused the problem cleared itself, and the crew was dealing with a perfectly good airliner. But the pilots became confused. They couldn't handle the situation, and the two of them didn't respond in the same way. They ended up losing control of the airplane. It's a story about losing your flying skills by relying on automation. It's a story about a handoff, about how humans and computers need to learn to exchange control.
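Mindell's handoff point can be made concrete with a small sketch. The following Python fragment is purely illustrative, with invented names and no resemblance to any real avionics system; it contrasts an autopilot that simply checks out when a sensor fails with one designed to degrade gracefully and brief the crew before giving up control.

```python
# Illustrative only: invented names, not a real avionics interface.
from enum import Enum, auto

class Mode(Enum):
    FULL_AUTO = auto()      # autopilot flying normally
    DEGRADED = auto()       # sensor lost; hold a safe attitude, keep advising
    HUMAN_CONTROL = auto()  # crew has taken over

class Autopilot:
    def __init__(self) -> None:
        self.mode = Mode.FULL_AUTO

    def on_airspeed_invalid_abrupt(self) -> None:
        # The design Mindell describes: the computer "checks out and fails
        # altogether," leaving the crew to diagnose everything from scratch.
        self.mode = Mode.HUMAN_CONTROL
        print("AUTOPILOT OFF")

    def on_airspeed_invalid_graceful(self) -> None:
        # A handoff-oriented alternative: degrade, explain what is known,
        # and wait for the crew to acknowledge before releasing control.
        self.mode = Mode.DEGRADED
        print("Airspeed unreliable. Holding pitch and thrust.")
        print("Take control when ready; acknowledge to disengage.")

    def crew_acknowledges(self) -> None:
        if self.mode is Mode.DEGRADED:
            self.mode = Mode.HUMAN_CONTROL
```

The point of the second path is not extra cleverness in the machine; it is that the exchange of control is treated as a designed interaction rather than as a failure mode.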
Why don't you see fully automated cars in our future?
If you're sitting in a fully automated car, sleeping, going down the highway at 80 miles per hour, and there's a problem, you've got to wake up, get involved, and take control of the situation very quickly.
How many times have you driven by a stop sign that was knocked over by a truck 20 minutes earlier? You still know there's a stop sign there, so you stop. Even if a traffic light is broken, people tend to be pretty good at getting through the intersection safely, just by looking at each other. There are thousands and thousands of places where people are the glue that holds these things together, because we're so good at compensating for those errors. Machines tend to be more brittle in those ways, less adaptable to those situations.
You want to keep the people involved so that if there are problems—because there will be problems—they are ready to take over control.
You focus on the concept of autonomy instead of artificial intelligence. What’s the difference?
Artificial intelligence covers lots of different things: speech recognition systems, credit card processing, and a lot of things that aren't the kind of thing I'm talking about. I'm mostly talking about robots and things that move around in the physical world.
You can say that autonomy is the artificial intelligence that might live inside a robot. A.I. is kind of a loaded term; there's been a lot of debate around it for a long time. I use "autonomy" because it's the word that people in robotics use these days. And I don't make the argument about intelligence, per se. There's a long history and an interesting debate about intelligence. What is it? Who has it? Who doesn't have it? I'm trying not to take on that whole story. I don't take a position on whether these machines are intelligent or not. What they are not is inhuman.
Are you avoiding questions about whether robots will ever start thinking—and if so, how they will think? Will they ever think beyond the human level?
That's not a topic I'm picking up. If I have a camera on my car that has artificial intelligence, and that camera can recognize pedestrians walking on the road, that's a great accomplishment of technology. I just want it to tell me where those pedestrians are and give me that information so I can work with it, not just push me away. I'll take all the intelligent machines you can make. It's really a matter of how we present the system and how we design the person into the system.
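The design choice Mindell describes here can also be sketched in a few lines. The fragment below is hypothetical (invented function names; no real driver-assistance API is implied); it contrasts a detector that silently acts on its own with one that presents its findings to the driver.

```python
# Hypothetical sketch: the same pedestrian detection used in two ways.
from dataclasses import dataclass

@dataclass
class Detection:
    bearing_deg: float  # direction of the pedestrian relative to the car
    distance_m: float   # estimated distance

def apply_brakes() -> None:             # stand-in for a vehicle actuator
    print("braking")

def show_alert(message: str) -> None:   # stand-in for a dashboard display
    print(message)

def act_alone(d: Detection) -> None:
    # "Pushes me away": the machine acts and the driver learns nothing.
    apply_brakes()

def inform_driver(d: Detection) -> None:
    # Mindell's preference: surface the information so the human can work with it.
    show_alert(f"Pedestrian {d.distance_m:.0f} m ahead, bearing {d.bearing_deg:+.0f} deg")

inform_driver(Detection(bearing_deg=-5.0, distance_m=40.0))
```

Either way, the detection itself is the same engineering accomplishment; the difference is whether the person is designed into the system.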
It can be just as high tech, and I think higher tech, actually, when it's really working well with a person. The full-autonomy problem is an easier problem. It just doesn't work in a social context.
In deep-sea exploration, robots serve as the eyes and gather the data, but it is humans who actually analyze and understand what they are seeing. The robot is very good at mapping and collecting data, and those are tremendous engineering accomplishments. It works best when it presents the landscape, such as on Pluto, to the scientific mind. The scientist then travels through it, begins to feel present in it, and thinks about what they are seeing there.
You’re optimistic that humans will always be essential. Is there anything about where we’re heading with robots that worries you?
Poor design worries me a great deal. You're much more likely to get killed by a robot that's poorly designed than by a robot that is evil. One of the main sources of poor design in robotics is a misunderstanding of, and in some cases a total lack of concern for, the social environment in which robots are placed. The Predator drone, for example, is very stressful for its operators to use. It was designed for something entirely different from what it's being used for, and had to be shoehorned by users into its current application. That poor interface has killed people and destroyed property, just because of the errors it induced.
If you design something for full autonomy, as the Predator drone was designed, it inevitably ends up having that autonomy mediated by people at various points. If you don't design for that, you end up with a band-aided, mashed-up system, which is what the Predator is, as opposed to an elegant, well-thought-out system that really collaborates with people. How are people going to relate? How will machines relate to a social environment?
I worry about a world where robots are out there doing things out of sync with their environments. That’s why we don’t have robots in our home. Nobody has yet come up with a robot that lives with you in a collaborative way. I don’t think that’s impossible, but no one has cracked it yet.