Semi-automated vehicles on our motorways, a mobile robot arm
that can effectively intervene in a disaster area, drones: these are all
examples of robots that exist in our unpredictable environment. These
robots cannot manage without human intervention, however: behind every
successful robot there’s a human being. A strong foundation for
operating robot arms and robot vehicles is haptics: our innate sense of touch and of our own body. The elegant way in which our body intuitively responds
to dynamic environments and uncertainties must be used to operate
(mobile) robot arms, but also to work with intelligent vehicles which –
just like their operators – are never entirely perfect. That’s the view
of David Abbink, professor of Haptic Human-Robot Interaction at the
Department of Cognitive Robotics, who held his inaugural lecture at Delft University of Technology on Friday 22 March.

Actually, the difficult relationship between fallible human beings and
fallible automation is a familiar problem, for example in commercial
aviation. In that sector, we select the best people, we train them and
we pay them; we have air traffic control, checklists and procedures. We
can’t say the same of motorists on our roads… What’s more, the paradigm
behind this is that human beings have to act as a backup system for the
machine, and that therefore it’s either the human or the machine that’s
doing the operating. But people aren’t suitable backup systems at all:
that’s tedious and so instead of paying attention we end up doing other
things. That’s especially the case when we’re not trained or paid to be
doing it.
“So the real problem is: how do we ensure that humans understand what a robot wants, can and will do?”
A
common thought is: increase automation, then we won’t need the fallible
human being anymore. But we haven’t managed to do that yet, not even in
commercial aviation, never mind having robots interact with complex
human behaviour in open environments, such as cities, offices or homes.
And indeed, we’re not only talking about the issue of safety, but also
comfort and pleasure – or hopefully at least the absence of irritation.
So the real problem is: how do we ensure that humans understand what a
robot wants, can and will do? And how do we ensure that robots
understand what humans want and will do?
Humans and robots have to
engage in a reciprocal relationship that leads to mutual understanding
and cooperation, so that they can learn from and support each other. Not
either humans or robots, but humans and robots – together in a
symbiotic relationship. More than enough attention is being devoted to the development of increased robot autonomy, but there is insufficient attention to robot symbiosis, and that needs to change, according to Abbink. In his inaugural lecture, he talked about some common examples
of the awkward way in which robots and humans currently work together,
and about the power of haptics and what is referred to as ‘shared
control’, which he developed with his colleagues: a way in which robots
and humans work together like a rider on a horse. He explained his
approach by way of several inventions from his lab,
examples of how he ultimately made human-robot interaction symbiotic.
These examples show that the behaviour emerging from this interaction is better than that of the human or the robot alone.
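To make the idea concrete, here is a minimal, hypothetical sketch of haptic shared control in the spirit of the rider-and-horse analogy (the guidance law, gains and wheel dynamics below are illustrative assumptions, not the actual controllers developed in Abbink's lab): the automation exerts a gentle, spring-like guidance torque on the steering wheel, the driver feels it and can follow or override it, and the resulting steering angle emerges from both partners acting on the same interface.

# Minimal sketch of haptic shared control on a steering wheel (Python).
# All names, gains and dynamics here are illustrative assumptions, not
# the controllers from Abbink's lab.

def guidance_torque(wheel_angle, target_angle, k_guidance=2.0):
    """The automation nudges the wheel toward its preferred angle with a
    spring-like torque that the driver can feel, follow or override."""
    return k_guidance * (target_angle - wheel_angle)

def step_wheel(wheel_angle, wheel_rate, human_torque, robot_torque,
               inertia=0.1, damping=0.5, dt=0.01):
    """Both torques act on the same physical interface, so the steering
    angle that results emerges from human and automation together."""
    net_torque = human_torque + robot_torque - damping * wheel_rate
    wheel_rate += (net_torque / inertia) * dt
    wheel_angle += wheel_rate * dt
    return wheel_angle, wheel_rate

# One simulated second: the automation steers toward 0.3 rad while the
# driver gently pushes back, e.g. to avoid something the automation missed.
angle, rate = 0.0, 0.0
for _ in range(100):
    tau_robot = guidance_torque(angle, target_angle=0.3)
    tau_human = -0.2
    angle, rate = step_wheel(angle, rate, tau_human, tau_robot)
print(f"steering angle after one second: {angle:.3f} rad")

The wheel settles short of the automation's target because the driver's counter-torque shifts the outcome: neither partner is switched off, and either one can change the result by pushing harder or yielding, which reflects the contrast Abbink draws between 'either the human or the machine' and the two working together.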
He also attempted to look into the future: how will symbiotic
interaction translate into the effective impact of robotics on society,
one of the more serious concerns of our time? ‘While preparing for my
inaugural lecture I read a great deal,’ David Abbink says, ‘and became
fascinated by the “Macy Conferences”. They were a series of
interdisciplinary gatherings between 1946 and 1953, the scale of which was unprecedented: anthropologists, biologists, engineers, neuroscientists, psychiatrists and economists convened to think about
the dynamics of feedback in biological and social systems.
“It’s a shame that we’ve lost sight of this synergy: I never come across anthropologists or economists at conferences”
'What emerged from that is systems theory and systemic thinking, which focus on context and the dynamics of relationships. This had a major
impact on the above-mentioned fields, which grew to become more
specialised. Unfortunately, they consequently became more fragmented as
well. It’s a shame that we’ve lost sight of this synergy: I never come
across anthropologists or economists at conferences. But as a scientist, how am I supposed to learn to think about the impact of my work in the complex, fluid social reality that we are helping to co-create? I believe
this needs to be done differently, because it’s only by trying together
to understand the complex interaction of robotics in biological,
psychological, social, economic and ecological systems that we can
effectively guide its development. We have to improve the way we teach
our students about this as well, and show how inspiring and important it
is to look beyond the confines of their specialisations.
‘That’s
why I’m going to promote the idea of setting up new “Macy Conferences”
for the 21st century. In fact, I already did, during a symposium prior to my inaugural lecture that brought together engineers and social scientists to talk about the future of human-robot interaction.'