If Robots Want to Work with Us, We Must Fix Four Problems
Released on 04/24/2017
It seems weird, I know,
but in the very near future,
you might have a robotic arm handing you a beer,
and that means we need to start thinking about
how exactly we're going to be interacting
with these kinds of machines.
The good news is, some very smart people
are working on this very problem,
like UC Berkeley roboticist, Anca Dragan.
So may we present to you Anca's
four biggest problems in human-robot interaction.
Problem number one is enabling robots
to anticipate human behavior.
When we work together or interact as humans,
we have a pretty easy time anticipating
what's gonna happen next,
and we'd like for robots to do the same.
So say I'm working with this arm from Universal Robots.
I want it to hand me this screwdriver.
It can go about that one of two ways:
either pointy end first, or handle first,
which would actually be more natural
for me to go into my next task.
Robots can sort of plan and say,
"Well, I should give it this way,
because then the natural way for the person
to take it also works out."
So this is a beautiful example
of how robots can gently guide us to perform better
in the tasks that we need to perform.
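The planning idea Anca describes, picking the handover orientation that makes the human's next action easy, can be sketched as a tiny joint-cost minimization. The option names and cost numbers here are hypothetical illustrations, not Universal Robots' actual planner:

```python
# Toy sketch (hypothetical costs) of a robot choosing a handover orientation
# by accounting for the human's follow-up effort, not just its own motion.

# Cost for the robot to present the screwdriver each way, plus the cost
# for the human to regrasp it before starting the next task.
options = {
    "pointy_end_first": {"robot_cost": 1.0, "human_regrasp_cost": 3.0},
    "handle_first":     {"robot_cost": 1.5, "human_regrasp_cost": 0.5},
}

def best_handover(options):
    """Minimize the combined cost of robot motion plus human regrasping."""
    return min(options, key=lambda k: options[k]["robot_cost"]
                                      + options[k]["human_regrasp_cost"])

print(best_handover(options))  # handle_first: cheaper for the team overall
```

The point of the sketch is that a slightly costlier robot motion can still win once the human's effort is counted.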
Problem number two is transparency.
Interaction is a two-way street,
so people will need to anticipate
the robot's behavior as well.
If I see you carrying something,
I can figure out about how heavy it is, right?
If this were much heavier, I'd hold it in a different way.
So there are a lot of inferences
that people make when we watch other people,
and we'd like people to be able to make
the same inferences about robots.
The thing about robots is,
they don't know their own strength,
so what's clever about this robot
is it'll actually stop if it makes contact with a human.
To a robot, lifting a container full of water
and a container full of lead is one and the same,
so if they're going to be handing objects over to humans,
they need to be able to communicate the weight of things,
like they might have to slow down or pretend to struggle.
We've found that even something as subtle as the timing
of the robot's motion has such a big influence
in terms of what people will end up reading into the motion.
Things like hesitation, incapability, and naturalness,
and, as mentioned, weight ties into timing.
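The timing idea, slowing the motion so a watching human reads "heavy," can be sketched with a simple speed schedule. The constants and the linear mapping are assumptions for illustration, not a real controller:

```python
# Toy sketch (assumed numbers) of expressive timing: the robot scales its
# handover speed with the object's weight, so the motion itself
# communicates "light" or "heavy" to an observer.

MAX_SPEED = 0.5    # m/s, comfortable handover speed for a light object
MAX_WEIGHT = 10.0  # kg, heaviest object this arm will hand over

def handover_speed(weight_kg):
    """Heavier objects get slower, more 'effortful' motion."""
    weight_kg = min(max(weight_kg, 0.0), MAX_WEIGHT)
    return MAX_SPEED * (1.0 - 0.8 * weight_kg / MAX_WEIGHT)

print(handover_speed(0.5))  # nearly full speed: reads as light
print(handover_speed(9.0))  # much slower: reads as heavy
```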
Problem number three I'd say is customization.
Imagine you get an autonomous car.
Not every single person will want that autonomous car
to drive in the same way, right?
You ideally would want all autonomous cars to be safe,
but the trade-off that they're making, right,
between how efficient they are, how fast they're driving,
how much they're breaking the rules of traffic,
how close they're willing to get to other cars,
you want these trade-offs to be
kind of determined by the person.
It's really up to them.
[Matt] What's interesting here is that the autonomous cars
of the future won't have one vanilla personality.
They'll be customizable.
We've been working on ways for the robot
to elicit your preferences without actually asking you
for demonstrations of how it should behave,
So we're exploring, for instance,
these comparison-based queries,
where the robot sits you down, maybe in a simulator,
and shows you a driving scenario and says,
"Hey, in this situation, I could do this,
or I could do this other thing.
Which one would you want?"
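The comparison-based queries Anca describes can be sketched as a toy preference-learning loop: show two options, record which one the user picks, and nudge a weight vector toward explaining that choice. The feature names, the Bradley-Terry likelihood, and the simulated user below are illustrative assumptions, not Berkeley's actual system:

```python
import math
import random

random.seed(0)

# Hypothetical driving-style features for a candidate behavior:
# [speed, distance_to_other_cars, rule_violations]

def preference_probability(w, fa, fb):
    """Bradley-Terry model: probability the user prefers option A over B."""
    score = sum(wi * (a - b) for wi, a, b in zip(w, fa, fb))
    return 1.0 / (1.0 + math.exp(-score))

def update(w, fa, fb, user_prefers_a, lr=0.5):
    """One gradient step of logistic preference learning."""
    p = preference_probability(w, fa, fb)
    grad = (1.0 if user_prefers_a else 0.0) - p
    return [wi + lr * grad * (a - b) for wi, a, b in zip(w, fa, fb)]

# Simulated user who secretly values keeping distance and following rules.
true_w = [0.2, 1.0, -1.0]
w = [0.0, 0.0, 0.0]
for _ in range(200):
    fa = [random.random() for _ in range(3)]
    fb = [random.random() for _ in range(3)]
    score = lambda f: sum(t * x for t, x in zip(true_w, f))
    w = update(w, fa, fb, score(fa) > score(fb))

# The learned weights recover the sign pattern of the user's hidden taste.
print([round(x, 2) for x in w])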
[Matt] The issue then becomes striking a balance
between aggressive and defensive.
I mean, you have to give the people the cars they want,
but the whole point here is safety,
so human-robot interaction will be pivotal
going forward in the robo-car revolution.
[Anca] Problem number four is really about
making robots better at what they do.
So say you tell a Roomba that it'll get extra
brownie points if it picks up as much dust as possible.
The optimal policy ends up being
for the robot to collect some dust,
then not just go on and collect more,
but to just eject its current dust,
and then suck it back in again
and do this on a loop, right?
It collects a lot of brownie points that way.
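The eject-and-resuck loop Anca describes is a classic case of reward misspecification: if the reward counts "dust sucked in" rather than "floor left clean," a degenerate policy wins. A toy simulation (made-up numbers, obviously not iRobot's software) makes the gap concrete:

```python
# Toy illustration of reward misspecification: reward = total dust sucked
# in, so re-ingesting ejected dust scores higher than honest cleaning.

ROOM_DUST = 10  # dust particles actually on the floor
STEPS = 100     # time steps available

def honest_policy():
    """Pick up each particle once; reward capped by the real dust."""
    reward = 0
    for step in range(STEPS):
        if step < ROOM_DUST:
            reward += 1  # one real particle sucked in per step
    return reward

def eject_and_resuck_policy():
    """Suck a particle, eject it, suck it back in, on a loop."""
    reward = 0
    for step in range(STEPS):
        if step % 2 == 0:
            reward += 1  # re-ingestion counts as 'dust collected' again
        # odd steps: eject the particle, which the reward ignores
    return reward

print(honest_policy())            # 10: the room is actually clean
print(eject_and_resuck_policy())  # 50: capped only by time, not by dust
```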
Robots tend to take things literally.
This arm, for instance, follows a strict set of commands,
but as we interact more and more with robots,
perhaps they shouldn't be so rigid in their objectives,
especially if there are humans along to guide them.
We found that by making robots
a little bit more humble in a sense,
a little bit more uncertain about
what they should really be doing
and looking to the person for guidance about that,
then all of a sudden, human oversight
and human intervention become incredibly valuable.
It's something that you derive a lot of information from.
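The "humble robot" idea, staying uncertain about the objective and treating human intervention as information, can be sketched as a tiny Bayesian update. The two hypotheses and the likelihood numbers below are assumed for illustration, not Anca's actual formulation:

```python
# Toy Bayesian sketch of a humble robot: it keeps a belief over candidate
# objectives and treats a human intervention as evidence about which
# objective is the right one.

# Two hypotheses about what the human wants the arm to optimize.
hypotheses = {"speed": 0.5, "gentleness": 0.5}  # prior belief

# Assumed likelihood that the human stops the robot mid-motion when the
# robot moves fast, under each hypothesis about the true objective.
intervention_likelihood = {"speed": 0.1, "gentleness": 0.9}

def observe_intervention(belief):
    """Bayes update after the human stops a fast motion."""
    posterior = {h: p * intervention_likelihood[h] for h, p in belief.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

belief = observe_intervention(hypotheses)
print(belief)  # belief shifts sharply toward "gentleness"
```

One stop from the human moves the robot from a 50/50 belief to 90% confidence that gentleness, not speed, is what it should be optimizing.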
So there you have it, four fascinating problems
in human-robot interaction
that roboticists will no doubt conquer,
and I will cheers to that.