Contribution to Guido Herrmann and Ute Leonards (eds.), Humanoid-Human Interaction, in Ambarish Goswami and Prahlad Vadakkepat (eds.), Humanoid Robotics: A Reference (Springer: Dordrecht, 2019), 2421-2435.
Original article available at
https://link.springer.com/referenceworkentry/10.1007%2F978-94-007-6046-2_127
Abstract

The aim of this chapter is to analyze
ethical issues of humanoid-human interaction in contrast to human-human
interplay. In the introduction, the question about the nature of the
artificial
based on the research by Massimo Negrotti lays the ground for the
difference
between humanoids and humans. The first part deals with a historical
overview
of the concept of human intelligence and humanoids in the context of
art and
labour. Looking at humanoids in the context of art does not make it
unreasonable
to predict that humanoids will lose their aura created in movies and
novels and
become part of everyday life in the 21st century. Humanoids in the
context of
labour takes us to seminal texts of Western thought, particularly Aristotle's analysis of the nature of slaves and servants as being replaceable by non-living instruments. A quotation commented on by Karl Marx in the context of his Political Economy becomes an important topic in the present debate
about labour
and robotics in the digital era. The second part of this chapter deals
with
ethical issues of humanoid-human interaction as distinct from the
interplay
between human beings. The ethical task concerning humanoid-human
interaction is
to raise awareness of this ethical difference, learning to see and
evaluate
how far and in which contexts and situations the algorithms guiding the
actions
of humanoids can make sense or not for human agents in general and more
vulnerable patients in particular. This task is explained through
examples derived
from the field of health care. Descriptive and normative ethical issues
of
robotics and humanoids are embedded in cultural traditions of which an
example
is given with regard to the ongoing discussions on robo-ethics in Japan.

Keywords: humanoid-human interaction, human-human interplay, intercultural robo-ethics, contextual integrity, labour, slaves, intelligence.

Introduction
What is a humanoid? The suffix -oid
means 'similar' or 'resembling.' The etymological root of Greek -oeides
means
'having the form of' (eidos), 'like that of, a thing like a ___'. The Latin suffix -id means 'offspring of' or 'descendant of' and is used, for instance, in taxonomic names in biology (1). A humanoid is thus an artificial entity
that
looks like a human being as distinct from hominids, our natural
predecessors. Do
humanoids more closely resemble a man or a woman? This issue was discussed in
the EU
Project ETICA leading to the following conclusion: "The Literature on Robotics is dominated by the
term
"android", i.e. the male version of humanoids and it rarely relates
to "gynoid", the female version of robots. Also, there is much
controversy about producing "gynoids" for sexual purposes, promoted
on a market as a kind of "sex-aid". Similarly to other ICT, the field
of Robotics is assumed to be the one in which gender aspects may be
regarded [as]
unimportant. Robotics designers and producers believe that technology
is gender
neutral. In this way, they neglect the fact that a large segment of
consumers
of these products are females who may have different expectations and
needs
when it comes to the use of Robotics." (2, p. 86) According to
Massimo Negrotti, director of the Laboratory for the Culture of the
Artificial
at the University of Urbino (Italy), artificial technology is a
relation
between an "exemplar" and its "representation" conceived as
a process of selection based on an "observation level" of some
"essential performance" (3 pp. 46-47, 4, 5). This makes a difference
to conventional technology that generates machines "by combining and
recombining components and sub-systems whose raison d'etre
[sic] and
consistency depends only on the design itself and not on the structure
of the
world" (3, p. 34). For conventional technology, nature is something to
be
dominated while artificial technology aims at reproducing a natural
exemplar. Negrotti
writes: “The artificial, in conclusion, is conceived and
present in the first
phases of its existence as a naturoid, an object achieved by
man and
oriented to some natural exemplar as it is seen at a given observation
level.
However, it soon becomes, or reveals itself to be a technoid,
that is to
say, it becomes an object that exhibits characteristics which exceed
those of
the exemplar and either strengthens, reduces or somehow transfigures
some of
these, as if it had to redraw the exemplar not as it is but as it
should be.” (3,
p. 47)

What is a
human being? According to a traditional definition, a human being is a
rational
animal (animal rationale). But what is human rationality or, to
consider
the term commonly used in computer science, human intelligence? What
kind of
rationality or intelligence as a form or eidos of human beings
'in-form'
artificial beings? (6). And what other ‘essential performances’ of
humans can
be selected when creating humanoids? A tentative answer to the first
question
would take us on a long journey through philosophy, psychology, history
of
logic and the analysis of scientific rationality, to mention just a few
fields.
Intelligence is a question for intelligence itself. This is at first sight disappointing. Looking for unquestionably true answers begs the question: it presupposes what it is asking for, namely that intelligence is about unquestionable answers. Last century's philosophical debate on
language,
emotions and rationality ― Ludwig Wittgenstein (1889-1951) being one
prominent
example of this debate ― has shown the limits of this concept of
intelligence
whose origin goes back in the Western tradition to Greek philosophy,
through the
rise of empirical science up to the debate on artificial intelligence,
the
imagining of humanoids in art and literature and "how we became
posthuman" (7). Nothing is probably more controversial than agreeing on
what is specific to human beings, not only with regard to other living
beings but
also to artificial entities among which humanoids are a prominent case.
We must
keep this long-standing debate in mind when dealing with humanoids
resembling
the human body and performing some kind of human-like intelligent
actions and
emotions, giving rise to ethical issues regarding humanoid-human
interaction. We
start with a short historical review on human intelligence and
humanoids and look
at some current ethical issues concerning humanoid-human interaction in
the
second part.

1. Human Intelligence and Humanoids
Human intelligence or knowing
has knowledge or the known as its correlate - noesis and noema in Husserl's
(1859-1938) terminology (8; 9). The origin of this correlation goes
back in the
Western tradition to Greek philosophy. Aristotle (384-322 BC)
distinguishes
between two kinds of knowledge (dianoia): dealing with truth and
opinion,
respectively. Knowledge dealing with truth can be technical knowledge (techne
poietike), scientific knowledge (episteme), practical
knowledge (phronesis),
knowledge of the first principles (sophia), and intellectual
reasoning (nous),
while conjectural knowledge (hypolepsei) and opinion (doxa)
deal
also with the wrong (10, 1139 b 15-18). This typology can be roughly
paralleled
with the customary distinction between know-how, know-why and know-what
(11). An
important difference between Aristotle's typology and our prevailing
view of
truth-seeking knowledge concerns the conjectural or fallibilistic
nature of
science as analyzed, for instance, by Karl Popper (1902-1994) (12).
Conjectural
knowledge undermines the theoretical ambitions to reach unquestionably
true
knowledge. Furthermore, knowledge is embedded in human action.
Aristotle emphasizes
that knowledge does not by itself move anything, neither with regard to
human virtue-oriented
action (eupraxia) nor to the production of material things (poiesis),
unless there is a striving combined with reasoning or a weighing-up of
various
possibilities (10, 1139 a-b). Weighing up possibilities of action
through
deliberating about the best means for achieving the good life (eu zen)
is based
on customary rules and individual character (ethos) that
implicitly or
explicitly pervade social life. Human self-understanding in ancient
Greek
society, as represented in tragedy and reflected in philosophy, has as
its core
the dependence of human knowledge and action on divine actors and
necessity. It
changes throughout history and in different cultures. Note that to
follow
readily and spontaneously (hekon) a rule, in the case of classic
Greek
civilization, differs from what many centuries later became the will of
the
autonomous subject (13, pp. 41-74). In Modernity,
the human subject was conceived as separated from the world and the
other human
beings outside the consciousness of the individual. According to
René Descartes
(1596-1650), we would have no way of finding the difference between an
animal
and an animal-like "self-moving machine" (automates); however,
this would not be the case for machines imitating humans as we have two
tests that
can be used in order to ensure that machines do not act by knowledge (par
connaissance). The first test deals with human communication.
Human-like
automata may be able to use words and even perform some "bodily
actions"
(actions corporelles), but they would be unable to
arrange words
properly in order to answer meaningfully what a human being
says, which
is something that even the most dull persons (hébétés)
can do. The
second test concerns the bodily organs, which could not be artificially shaped in such a way that they would allow a machine to behave adequately, i.e. reasonably, in every situation (14, pp. 164-165). The first argument was contested by
Alan Turing
(1912-1954) in his seminal 1950 article "Computing Machinery and
Intelligence"
(15). It is symptomatic that the Turing Test consists in guessing not
only about
human intelligence but also about the gender difference. This takes us
to the
second Cartesian argument that contradicts the current understanding of
human
intelligence as a social phenomenon as well as intrinsically related to
the
body. Indeed, intelligence is intrinsically related to emotions and
both are
grounded in the body. According to Michael Polanyi (1891-1964), this
bodily
relation generates "tacit knowing" which is the basis of objective or
explicit knowledge (16, 17). Phenomenology and hermeneutics have
stressed the
role of foreknowledge, the original belonging together of feelings and
understanding and how bodily knowledge guides our actions dealing with
tools in
familiar environments. The Cartesian dualism between mind (res
cogitans)
and body (res extensa) is taken over by builders of AI
(Artificial
Intelligence). Hubert Dreyfus (18, 19), Terry Winograd and Fernando
Flores (20),
Don Ihde (21, 22) and Peter-Paul Verbeek (23), to mention just a few,
have further
developed the paths of phenomenology and hermeneutics in relation to
technology.
Human intelligence as embodied intelligence is different from mimicking
human
intelligence in algorithms as well as human emotions in a human-like
robot: what humanoids are depends on the social and cultural context in which they
are
created.
1.1 Humanoids in the Context of Art

Classical Greek and Roman art is an
example of the sculptural and pictorial representation of anthropomorphic
or human-like gods and goddesses and theomorphic or god-like
humans.
Although this is also the case in other cultures and epochs – including
zoomorphic
or animal-like representations of gods and goddesses, for instance, in
ancient
Egypt, pre-Columbian America or India – what makes this tradition
particularly
interesting is that originals representing human-like gods or
god-like
humans were reproduced in a "serial, iterative, [and] portable" way
that made them ubiquitous in the ancient Western world, although no more than 2% of them have survived to the present day (24, p. 68). It is not only the
external shape
of humans that is imitated (mimesis) by Greek art but, more
substantially, human action (praxis) is imitated in Greek drama.
The
history of robots is closely related to the history of puppets and
marionettes
influencing the creation of robots, for instance, in Japan (25). There is a four-hundred-year tradition of humanoid automata during the Arabic-Islamic Renaissance (800-1200), among them al-Jazari's (1136-1206) Elephant
Clock, the
Beaker Water Clock, and an Automaton for Carousals with mechanical slave girls playing flute, harp, and tambourine and a helmsman steering the boat with the rudder. Many of the automata created by al-Jazari were built, according to Ayhan Ayteş, "to the glory of God and to the honour of
Allah's powerful
representative on Earth. Many of his devices, however, had profane uses
that
competed with divine omnipotence. A superficial function of many was
that the
hydraulic automata served the purpose of making the guests at
festivities drunk
as quickly as possible." (26, p. 112). This was
long before Jacques de Vaucanson (1709-1782) created “The Flute
Player”, Pierre
Jacquet-Droz (1721-1790) “The Writer and The Musician” and Wolfgang von
Kempelen (1734-1804) “The Turk” and the speaking machine. In the 19th
century, the French manufacturer, Ferdinand Barbedienne (1810-1892),
invented a
machine to produce miniature bronze replicas of famous antique as well
as
modern statues, particularly by Auguste Rodin (1840-1917), making them
accessible
to the bourgeoisie. With photography and other reproduction
technologies, works
of art lost their "aura" as Walter Benjamin remarked (27); although
this
was already the case with serial reproduction in Antiquity. The theomorphic
transfiguration of human intelligence based on computer technology has
been a
subject of art, science-fiction literature and movies as well as of
scientific
projects throughout the 20th century, including the present idea of singularity,
i.e., the emergence of a technological super-intelligence having
predecessors
in philosophy, myth, and religion (28, 29). It is not unreasonable to
predict
that humanoids will lose their aura created in movies and novels and
become
part of everyday life in the 21st century as happened, for instance,
with radio,
telephones, cars, computers and other advanced technologies. Norbert
Wiener
gives the following brief account of the history of automata: "At every stage of technique since Daedalus or
Hero of Alexandria,
the ability of the artificer to produce a working simulacrum of a
living
organism has always intrigued people. This desire to produce and to
study
automata has always been expressed in terms of the living technique of
the age.
In the days of magic, we have the bizarre and sinister concept of
the
Golem, that figure of clay into which the Rabbi of Prague breathed life
with
the blasphemy of the Ineffable Name of God. In the time of Newton, the automaton becomes the clockwork music box, with the little effigies pirouetting stiffly on top." (30)

1.2 Humanoids in the Context of Labour
What are servants and slaves other
than organoids or tool-like humans, excluded from some
"essential
performances" that belong apparently only to a special group of
society? In
a foundational text of Western thought on the relationship between
masters and slaves
Aristotle writes: "Property is a part of the household (oikias),
and the art
of acquiring property (ktetike) is a part of the art of
managing the household; for no man can live well (eu zen), or
indeed live at all, unless he be provided with necessaries. As
in the arts which have a definite sphere and must have
their own proper instruments for the accomplishment of the
work, this is also the case of the household. Some instruments
(organon) are lifeless (apsycha), others
living (empsycha); in the rudder, the pilot (kybernete)
of a ship
has a lifeless, in the look-out man, a living instrument (for
in the arts the servant (hyperetes) is in the rank of
instrument). Thus, too, the property of such an instrument is for
maintaining life as well as the totality of them; a slave (doulos)
is a living possession, and like an instrument
which takes the place of all other instruments. For if every instrument
could accomplish its own work, following a command or
anticipating it, like the statues of Daedalus, or the tripods of
Hephaestus, which, says the poet, of their own accord
(automatous) entered the assembly of the Gods; if, in like
manner, the shuttle would weave and the plectrum touch
the lyre without a hand to guide them, the master builder
(architektosin) would not need servants (hypereton), nor
masters (despotais) slaves (doulon)." (31, 1253
b 25-39, revised English translation, RC) What is particularly
remarkable in this seminal text from Aristotle's Politics is
the last
sentence in which he considers the possibility of a non-slavery based
society
in case "lifeless" instruments could accomplish the same kind of work
done either by a servant or by a slave. After thorough argumentation, Aristotle comes to the conclusion that, although there is some evidence for the factual distinction between free men and slaves, this distinction is not always based on nature (31, 1255 b 5). The idea of a society free of masters
and (born
or factual) slaves is, indeed, utopian. Being rooted in a mythical context, it is somehow ironic too. Nevertheless, it is a provocative idea. In the Rhetoric,
Aristotle quotes the Sophist Alcidamas: “God has left all men free;
Nature has
made none a slave.” (32, 1383 b 17-18). What, indeed, makes a basic
difference
with regard to the production and use of technical objects in Antiquity
is the
meaning of labour and commercial competitive behaviour since Modernity
as what
binds society (33, p. 319; 34). But a society based solely on a market economy is, as Aristotle conceived it, to the detriment of friendship and use-value.
The impact
of Aristotle's thinking as well as of other great thinkers in Antiquity
and
Modernity on the non-natural status of slavery was weak if we consider
that, for
thousands of years, human societies have continued to be slavery-based
(35). For
the purpose of this analysis, we keep in mind that servants and slaves
― Aristotle
points with some examples to the difference between both ― are
understood as instruments
for special tasks. There are different kinds of relationships between
masters
and servants/slaves, humans and animals, and humans and lifeless
artefacts according
to different contexts. According to Aristotle, the best one is when the
work is
done by humans because masters and servants/slaves share "psychic"
qualities lacking in non-human instruments. What is proper for a slave,
as
distinct from a servant, is to be the property of a master, doing with
the body
the things the master foresees with the mind. Ownership means, in the
case of
slaves, that no part remains outside the master's control (31, 1254 a
10-13). The
ownership of productive "lifeless" or "living" instruments
allows the master to use what the instruments produce, as well as to
purchase
and exchange the products, using money for the sake of profit. To
strive
towards an unlimited accumulation of money and things owned through
money (techne chrematistike) means that a human agent is committed to life but not
to good
life (31, 1257 b 34-42). A master can delegate these tasks in order to dedicate himself to politics and philosophy, i.e. to another form of intellectual practice (31,
1255 b 35-38). A human being (anthropos) is by nature a
"political
animal" (politikon zoon) and the only living being endowed with
speech (logos), allowing him ― Ancient Greece was a
male- (aner)
dominated society ― to reason politically about good and evil, the just
and the
unjust (31, 1253 a 2-18). Both "essential performances" of human
beings were excluded in the social and economic context of Ancient
Greece from
the being of servants and slaves (36).
Slavery was de iure abolished
in the 19th century, but different forms of labour appeared in
industrial
society with the use of machines based on steam-power and electricity
that led
to mass production and new forms of the division of labour under
slave-like
conditions. This was criticized most prominently by Karl Marx
(1818-1883). In
his opus magnum Das Kapital he quotes the seminal text by
Aristotle and
calls him "the greatest thinker of antiquity" (37, p. 278). Marx
argues that neither Aristotle nor other thinkers could comprehend that
machinery
produces "the economic paradox, that the most powerful instrument for
shortening labour-time, becomes the most unfailing means for placing
every
moment of the labourer’s time and that of his family, at the disposal
of the
capitalist for the purpose of expanding the value of his capital." (37,
p.
278). But as Hannah Arendt remarks: "A hundred years after Marx we know
the fallacy of this reasoning; the spare time of the animal laborans
is
never spent in anything but consumption, and the more time left to him,
the
greedier and more craving his appetites" (38, p. 133). Norbert Wiener
echoes both concerns when he writes: "Let us remember that the automatic machine,
whatever we think of
any feelings it may have or may not have, is the precise economic
equivalent of
slave labor. Any labor which competes with slave labor must accept the
economic
conditions of slave labor. It is completely clear that this will
produce an
unemployment situation, in comparison with which the present recession
and even
the depression of the thirties will seem a pleasant joke." (39, p. 162)

A reflection
on the impact of digital technology on human labour from the
perspective
of Political Economy serves as a
corrective
to a one-sided market-oriented capitalistic view of technology that
fails to
address issues of justice as fairness faced by societies in the digital
age
(40, 41).

2. Ethical Issues of Humanoid-Human Interaction
What is interaction? According to
Hannah Arendt, we are a working animal (animal laborans), a
producer of
things (homo faber), and a being capable of action and speech.
She
writes: “Human plurality, the basic condition of both
action and speech, has the
twofold character of equality and distinction. If men were not equal,
they
could neither understand each other and those who came before them nor
plan for
the future and foresee the needs of those who will come after them. If
men were
not distinct, each human being distinguished from any other who is,
was, or
will ever be, they would need neither speech nor action to make
themselves
understood” (38, pp. 175-176). We disclose
ourselves through action and speech. "This disclosure of 'who' in
contradistinction to 'what' somebody is ― his qualities, gifts,
talents, and
shortcomings, which he may display or hide ― is implicit in everything
somebody
says and does" (38, p. 179). The difference between who we are, our
'whoness,'
and what we are, is the ethical difference (42). The Australian
philosopher,
Michael Eldred, coined the concepts of interaction, interplay and whoness.
He
writes with regard to Arendt: “The realm or dimension she is addressing, of
"people... acting and
speaking together" (38, p.198) through which they show to each other
who
they are and perhaps come to "full appearance [in] the shining
brightness
we once called glory" (38, p. 180), is not that of action and reaction,
no
matter (to employ Arendt's own words) how surprising, unexpected,
unpredictable, boundless social interaction may be, but of interplay.
It
is the play that has to be understood, not the action, and it is no
accident
that play is also that which takes place on a stage, for she
understands the
dimension of "acting and speaking" (38, p. 199), revealing and
disclosing
their selves as who they are. On the other hand, interplay
takes place
also in private: in the interplay of love as a groundlessly grounding
way to be
who with another, where speaking easily becomes hollow.” (43, p. 83)

We build our
character (ethos) through social life. The difference between
making
things (poiesis) and building what Arendt calls the "web of human relationships" (38, p. 183) through human action (praxis)
goes
back to Aristotle. Human beings conceal and reveal who they are in a
shared
world. The task of an ethics of the humanoid-human interaction is,
firstly,
learning to see the difference between the human-human interplay
and
humanoid-human interaction. Human-human interplay is about
mutual
acknowledging or disacknowledging who we are individually and socially
on the
basis of shared customs, rules, values and practices. Human interplay
is risky
because human agents face the contingencies and openness of their past,
present
and future actions and interpretations and above all the risks of
ongoing power
play with others. A selection of human performances and moral rules can
be
codified in algorithms building the basis for the humanoid-human
interaction.
An ethics of humanoid-human interaction is, secondly, learning to see
how far
and in which contexts such an algorithmic selection of human interplay
can make
sense for the purpose at issue. This second task can be dealt with
using Helen
Nissenbaum's concept of privacy as contextual integrity. She writes: “Contexts are structured social settings
characterized by canonical
activities, roles, relationships, power structures, norms (or rules),
and
internal values (goals, ends, purposes). Contexts are "essentially
rooted
in specific times and places" that reflect the norms and values of a
given
society.” (44, pp. 132-133)

According to
Nissenbaum, privacy is not a property of a particular kind of
information, but
a second-order attribute ascribed to information in a specific context.
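To make this more tangible for designers, Nissenbaum's parameters of an information flow (the actors' roles, the type of information, and the transmission principle) can be written down as data and checked against the explicit norms of a context. The following minimal Python sketch is only an illustration under assumptions of our own: the roles, information types, and norms are invented for the example and belong neither to Nissenbaum's text nor to any actual system.

```python
# Illustrative sketch: contextual integrity as a check of an
# information flow against the explicit norms of a care context.
# All roles, info types, and norms below are assumed examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str      # role transmitting the information
    recipient: str   # role receiving it
    subject: str     # whom the information is about
    info_type: str   # e.g. "vital_signs", "conversation"
    principle: str   # transmission principle, e.g. "duty_of_care"

# Flows explicitly sanctioned in a (hypothetical) nursing-home context.
NURSING_HOME_NORMS = {
    Flow("care_robot", "nurse", "patient", "vital_signs", "duty_of_care"),
    Flow("care_robot", "physician", "patient", "medication_log", "duty_of_care"),
}

def respects_contextual_integrity(flow: Flow) -> bool:
    """A flow is appropriate only if the context's norms sanction it."""
    return flow in NURSING_HOME_NORMS

# Passing conversation data to an advertiser matches no norm of the
# care context, whatever status the information has elsewhere.
leak = Flow("care_robot", "advertiser", "patient", "conversation", "commercial")
assert not respects_contextual_integrity(leak)
```

On this reading, a privacy violation is not a property of the data itself but of a flow that no norm of the given context sanctions, which is why the same piece of information may circulate legitimately in one context and illegitimately in another.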
Furthermore, human interplay is constituted by situations that
constantly evolve and dissolve based on the free power plays of the
agents. It
is based on trust, not on subordination. Trust relies on implicit or
explicit
norms that can be selected for the humanoid-human interaction, taking
into
account not only the evolution of such norms and customs but also the
difference between implicit and explicit knowledge, the original
belonging-together
of feelings and understanding, and how bodily knowledge guides our
behaviour in
familiar environments (see also 45 for a review of the importance of
trust from
a safety point of view). According to Michael Nagenborg, there is a
difference
between a program and an agent. He writes: “One major difference between a "program" and an "agent" is, that programs are designed as tools to be used by human beings, while "agents" are designed to interact as partners with human beings. […] An AMA [artificial moral agent, RC] is an AA [artificial agent, RC] guided by norms which we as human beings consider to have a moral content. […] Agents may be guided by a set of moral norms, which the agent itself may not change, or they are capable of creating and modifying rules by themselves. […] Thus, there must be questioning about what kind of "morality" will be fostered by AMAs, especially since now norms and values are to be embedded consciously into the "ethical subroutines". Will they be guided by "universal values", or will they be guided by specific Western or African concepts?” (46, pp. 2-3)

But, in
fact, artificial agents need a battery and an algorithm in order to
move by
themselves as agents. Automata as defined by Aristotle are no less tools (organa) than the assistants and slaves they could replace. An
artificial moral agent, a 'what-agent,' is, in turn, not the same as a
'who-agent'
capable of criticizing her own rules of behaviour as well as codes of
morality guiding
humanoid actions. There is a difference between morality as a
set of
customs and norms underlying implicitly or explicitly human interplay
and their
critical ethical reflection that allows human agents to
question moral
and legal rules. Consequently, the notion of autonomy with regard to
humanoid
and human agents must be differentiated. Autonomy with regard to humanoids refers to their capacity to do what their authors/originators want them to do
in a
particular context and in view of a given set of standard
situations.
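This bounded sense of machine autonomy can be stated in a few lines of code. The sketch below is a deliberately crude illustration under our own assumptions (the context, situations, and responses are invented, not taken from any actual robot): the humanoid 'decides' only by looking up author-given responses for foreseen standard situations, and whatever was not foreseen is handed back to a human.

```python
# Illustrative sketch: humanoid 'autonomy' as rule-following within a
# designer-given context and a closed set of standard situations.
# Context, situations, and responses are assumed examples.
CONTEXT = "nursing_home"

STANDARD_RESPONSES = {
    ("nursing_home", "patient_asks_for_water"): "fetch_water",
    ("nursing_home", "patient_fell"): "alert_nurse",
    ("nursing_home", "smoke_detected"): "trigger_alarm",
}

def humanoid_act(situation: str) -> str:
    # The machine cannot look beyond its context: an unforeseen
    # situation is not deliberated about but handed over to a human.
    return STANDARD_RESPONSES.get((CONTEXT, situation), "defer_to_human")

print(humanoid_act("patient_fell"))          # -> alert_nurse
print(humanoid_act("patient_refuses_food"))  # -> defer_to_human
```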
Human autonomy means the capacity to look beyond such contexts and to
be open
to unexpected situations, giving rise to a process of deliberating about self-care, which always means caring for others. What moves humans is the awareness
of the
groundless interplay of their lives and the care for a good life. The
human
self is not an isolated and worldless subject as imagined in Modernity
but a
plurality sharing a common world. The same can be said with regard to
other
notions ascribed to humanoids such as 'learning' or 'making a decision'
that
the author/originator of the humanoid selects from the human
"exemplar" deprived of the historical, political, societal, bodily
and existential dimensions (47; see also 48). In short, humanoid-human interaction is asymmetrical in comparison with human-human interplay.
The
ethical task of designers and users of humanoids is to make this
asymmetry as
transparent as possible for the human agents so that they can be aware
of its
risks and opportunities. In the field of healthcare robots, Aimee van
Wynsberghe
puts this asymmetry thus: “[...] the latest developments in healthcare
robotics are those intended
to assist the nurse in his/her daily tasks. These robots, now referred
to as
care robots, may be used by the care-giver and/or the care-receiver for
a
variety of tasks from bathing, lifting, feeding, to playing games. They
may
have any range of robot capabilities and features and may be used in a
variety
of contexts, i.e. nursing home, hospital or home setting. They are
integrated into the therapeutic relationship between care-giver and
care-receiver with the aim to meet a range of care needs of users.”
(49, p.
2, my emphasis) A key ethical
issue of humanoid-human interaction is the contextual delimitation of
the
purpose and the field of action within which a humanoid or several of
them are
supposed to be assistants in interaction with humans, in accordance
with some
human "essential performances" selected for a specific goal. The more
explicit such contextual delimitation is, the more reliance can be
built up concerning
the contextual integrity of what can and should be done and what not.
Within
such contexts, not only breakdowns in physical performance must be
taken into
account, but also misunderstandings and frustration in humanoid-human
communication. Noel Sharkey writes: “What would happen if a parent were to leave a
child in the safe hands
of a future robot caregiver almost exclusively? The truth is that we do
not
know what the effects of the long-term exposure of infants would be. We
cannot
conduct controlled experiments on children to find out the consequences
of
long-term bonding with a robot, but we can get some indication from
early
psychological work on maternal deprivation and attachment. Studies of
early
development in monkeys have shown that severe social dysfunction occurs
in
infant animals allowed to develop attachments only to inanimate
surrogates.” (50, para. 6)

These issues
were analyzed in the mid-1980s by Terry Winograd and Fernando Flores
with
regard to software systems embedded in human conversations. "In
designing
tools," they wrote, "we are designing ways of being" (20, p. xi).
This is particularly true when it comes to the design of humanoid
assistants
for various tasks embedded in different cultures. There is a variety of
ethical
and legal values and principles that need interpretation in view of
situations
in which they are supposed to govern and/or assist interaction with
humans,
with many possibilities between governing and assisting. The
selection of
human actions may either conform with or contradict customs and legal
rights of
the users in a particular context and cultural tradition (see also e.g.
51, 52). Care-giver humanoid robots can mimic not only human intelligence and human communication but also human emotions (see also 53), an issue that has been an object of research with regard to robots and avatars in several research projects for fifteen years. Barbara Becker (1955-2009) wrote ten years ago: “The personalization of robots and avatars, which is commonly observed particularly among children and is obviously reinforced by even
even
primitive expressions of feelings (see Cassell 2000, Breazeal 2002, and
also Dautenhahn
2004), is highly ambivalent. On the one hand the personalization of
avatars can
frequently lead to more rapid initiation of communication and reinforce
the
feeling of social involvement (Schröder et al. 2002). On the other
hand,
however, "emotionalized" robots or avatars suggest the existence of
an emotional context on the part of the virtual agent or robot, which
is a pure
fiction. This brings us back to the issue of social practice (Gamm
2005): How
do people, e.g. children, deal with the artificial agents that give
them the
impression of having feelings similar to those they have themselves?
The
personalization frequently observed when dealing with agents is
unproblematic as
long as the users can maintain a reflective distance from such
attributions.
Nor does this present a serious conflict for children, as long as the
artificial agents have the status of cuddly toys or dolls, to which a
form of
personality has always been ascribed. Nevertheless, children usually
had a
clear perception of the artificiality of these toys. The future will
show
whether this changes when the artefacts show expressive reactions or
can move.”
(54, p. 42, emphasis added)

This issue becomes particularly problematic today, on the one hand, in the case of children interacting with online toys, exposed to a deregulated digital environment in which they disclose who they are to third parties. Online toys thus become tools that invade children's privacy. On the other hand, humanoids can be useful as teachers' assistants or in autism treatment.

Conclusion & Future Directions
The task of
the ethics of humanoid-human interaction is to reflect on the
possibilities in-between
these two poles in order to give human agents their freedom back as far
as
possible. Humanoid-human interaction selects some essential actions of human-human care, switching from the pole in which such care tends to take the
place of the
other, to the pole that opens up a path for the other to take care of
him- or herself.
This raises the question as to how flexible programming algorithms are
allowed
to be. This question can only be answered with regard to specific
contexts and
foreseeable standard situations of danger for human agents. Emerging
behaviours
from adaptable algorithms should be monitored by the persons
responsible for
the contextual integrity the humanoids are supposed to be subject to, setting a limit to their autonomy. Such monitoring of deteriorating or
ameliorating
possibilities and realities of humanoid-human interaction should take
place
regularly, particularly when human agents are physically,
intellectually and/or
volitionally weak. In these cases, humanoid-human interaction should be
complemented or even supplemented in various degrees by human-human
interplay. Finding
the right balance of this intersection needs patient attention to and
evaluation of changing human needs. In special cases, such as in health
care, a
technical and ethical monitoring could even be legally mandatory.
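How such monitoring might be organized can be suggested, again only as a hedged sketch under assumed names: behaviours proposed by an adaptive algorithm are logged for the responsible person, and anything outside the actions permitted in the context is stopped and escalated rather than executed.

```python
# Illustrative sketch: emerging behaviours of an adaptive policy are
# logged and checked against the actions permitted in the context;
# deviations are escalated to the responsible human supervisor.
# Action names and the whitelist are assumed examples.
PERMITTED_ACTIONS = {"fetch_water", "alert_nurse", "play_music"}

def monitor(proposed_action: str, audit_log: list) -> str:
    audit_log.append(proposed_action)  # record for regular human review
    if proposed_action not in PERMITTED_ACTIONS:
        # The adaptive algorithm has drifted outside the contextual
        # integrity it is subject to: hand control to the supervisor.
        return "escalate_to_supervisor"
    return proposed_action

log: list = []
print(monitor("play_music", log))  # -> play_music
print(monitor("lock_door", log))   # -> escalate_to_supervisor
```

Should a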
robot be invariable or programmed to follow the cultural and ethical
expectations of a specific society? In 2006, the International
Review of
Information Ethics dedicated a special issue to Ethics in Robotics.
The
editors wrote: “Our main values are embedded into all our
technological devices.
Therefore, the question is: which values are we trying to realize
through them?
Artificial creatures are a mirror of shared cultural values. Humans
redefine
themselves in comparison with robots. This redefinition of living
organisms in
technological terms has far-reaching implications. Long-term
reflections need
to be developed and plausible scenarios need to be anticipated.” (55, p. 1)

There is
plenty of theoretical and empirical research to be done on
Intercultural
Roboethics. Cultures are not closed entities, but complex and dynamic
systems. Ethics
likewise needs an intercultural perspective. Naho
Kitano writes: “'Rinri', the Japanese Ethics. When discussing the ethics of using a robot, I have been using the term "Roboethics" generally in my research, but it is used in very particular ways especially at international conferences. The word for "Ethics" in Japanese is Rinri. However, the Japanese concept of ethics differs from the Western concept of ethics, and this can lead to misunderstandings.” (56)

In her book Robotopia Nipponica, Cosima Wagner writes: “"social" robots illustrate the "negotiation
character of
the creation and use of technological artefacts" (Hörning), which,
for
example, includes the rejection of military applications of robot technology in Japan.” (57)

These examples
show the complexity and relevance of research into ethical issues
regarding
humanoid-human interaction. This research should not deal only
descriptively
with differences between humanoid-human interaction and human-human
interplay
in different cultures, but also normatively with issues implicit or
explicit in
both cases, including their intersection. Ethics of humanoid-human
interaction is
an exciting field of research for which this contribution aims at raising awareness for taking further steps based on an ongoing intercultural robo-ethics dialogue between 'East' and 'West' (58).

Acknowledgements

I thank Michael Eldred.

References

1. Dictionary.com Unabridged, -oid
(n.d.)
2. M. Rader (ed.), A. Antener, R. Capurro, M. Nagenborg, L. Stengel, W. Oleksy, E. Just, K. Zapędowska, EU Project ETICA report.
3. M. Negrotti, The Theory of the Artificial (Cromwell Press, Wiltshire, 1999).
4. M. Negrotti, Naturoids: On the Nature of the Artificial (World Scientific, Singapore, 2002).
5. M. Negrotti, The Reality of the Artificial: Nature, Technology and Naturoids (Springer, Heidelberg, 2012).
6. R. Capurro, Past, present and future of the information concept. tripleC 7/2 (2009), 125-141.
7. N. K. Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics (The University of Chicago Press, Chicago, 1999).
9. A. D.
Spear, Edmund Husserl: Intentionality and Intentional Content. The
Internet
Encyclopedia of Philosophy, ISSN 2161-0002, 2016.
10. Aristotle, Ethica Nicomachea (Oxford University Press, London, 1962).
11. R. Capurro, Skeptical Knowledge Management, in Knowledge Management: Libraries and Librarians Taking Up the Challenge, ed. by H.-C. Hobohm (Saur, Munich, 2004).
12. K. R. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge (Routledge & Kegan Paul, London, 1965).
13. J. P. Vernant, P. Vidal-Naquet, Mythe et tragédie en Grèce ancienne (La Découverte, Paris, 2001).
14. R. Descartes, Discours de la méthode, in Oeuvres et lettres, ed. by A. Bridoux (Gallimard, Paris, 1952).
15. A. M. Turing, Computing
Machinery and Intelligence. Mind 59, 433-460 (1950).
16. M. Polanyi, The Tacit Dimension (Doubleday, New York, 1966).
17. F. Adloff, K. Gerund, D. Kaldewey (eds.), Revealing Tacit Knowledge: Embodiment and Explication (transcript, Bielefeld, 2015).
18. H. L. Dreyfus, What Computers Still Can't Do: A Critique of Artificial Reason (The MIT Press, Cambridge, 1992).
19. H. L. Dreyfus, What Computers Can't Do: A Critique of Artificial Reason (Harper & Row, New York, 1972).
20. T. Winograd, F. Flores, Understanding Computers and Cognition: A New Foundation for Design (Ablex, Norwood, NJ, 1986).
21. D. Ihde, Bodies in Technology (University of Minnesota Press, Minneapolis, 2002).
22. D.
Ihde, Expanding Hermeneutics: Visualism in Science (Northwestern University Press, Evanston, Ill., 1998).
23. P. P. Verbeek, Moralizing Technology: Understanding and Designing the Morality of Things (The University of Chicago Press, Chicago, 2011).
24. S. Settis, Supremely Original: Classical Art as Serial, Iterative, Portable, in Serial / Portable Classic, ed. by S. Settis, A. Anguissola, D. Gasparotto, 51-72 (Fondazione Prada, Milan, 2015).
25. R. Capurro, Living with Online Robots (2015). www.capurro.de/onlinerobots.html. Accessed 26 Feb 2017.
26. A. Ayteş, Divine Water Clock: Reading al-Jazarī in the Light of al-Ghazālī's Mechanistic
27. W. Benjamin, The Work of Art in the Age of Mechanical Reproduction.
28. R. Capurro, On Artificiality, ed. by Istituto Metodologico Economico Statistico (IMES) Laboratory for the Culture of the Artificial, Università di Urbino, IMES-LCA WP-15 (1995).
30. N.
Wiener, Cybernetics:
or Control and Communication in the Animal and the Machine (The MIT
Press, Cambridge,
Mass., 1965).
31. Aristotle, The Politics of Aristotle (Oxford University Press, London, 1950).
32. Aristotle, Ars Rhetorica (Oxford University Press, London, 1959).
33. J. P. Vernant, Mythe et pensée chez les Grecs: Études de psychologie historique (La Découverte, Paris, 1996).
34. J. P. Vernant, Travail et esclavage en Grèce ancienne (Complexe, Brussels, 1994).
35. H. Joas, Sind die Menschenrechte westlich? (Kösel, Munich, 2015).
36. W. Kullmann, Aristoteles und die moderne Wissenschaft (Franz Steiner, Stuttgart, 1998).
37. K. Marx, Das Kapital: Kritik der politischen Ökonomie (Anaconda, Cologne, 2009) (Engl. Capital: A Critique of Political Economy).
38. H. Arendt, The Human Condition (The
University of Chicago Press, Chicago, 1998).
39. N. Wiener, The Human Use of Human Beings: Cybernetics and Society (Free Association Books, London, 1989).
40. M. Schneider, A Dialética do Gosto: Informação, música e política (Faperj/Circuito, Rio de Janeiro, 2015).
41. C. Fuchs, Digital Labour and Karl Marx (Routledge, New York, 2014).
42. R. Capurro, M. Eldred, D. Nagel, Digital Whoness: Identity, Privacy and Freedom in the Cyberworld (Ontos, Frankfurt, 2013).
43. M. Eldred, Phenomenology of Whoness: Identity, Privacy, Trust and Freedom, in Digital Whoness: Identity, Privacy and Freedom in the Cyberworld, ed. by R. Capurro, M. Eldred, D. Nagel (Ontos, Frankfurt, 2013).
44. H. Nissenbaum, Privacy in Context:
Technology, Policy, and the Integrity of Social Life (Stanford University Press, Stanford, 2010).
45. D. Araiza-Illan, K. Eder, Safe and Trustworthy Human-Robot Interaction, in Section: Human-Humanoid Interaction, Humanoid Robotics: A Reference (Springer, London).
46. M. Nagenborg, Artificial Moral Agents: An Intercultural Perspective. International Review of Information Ethics 7 (2007).
47. R. Capurro, Toward a Comparative Theory of Agents. AI & Society 27, 4 (2012), 479-488.
48. K. S. Lohan, H. Lehmann, C. Dondrup, F. Broz, H. Kose, Enriching the Human-Robot Interaction Loop with Natural, Semantic and Symbolic Gestures, in Section: Human-Humanoid Interaction, Humanoid Robotics: A Reference (Springer, London).
49. A. van Wynsberghe, Healthcare Robots: Ethics, Design and Implementation (Ashgate, Farnham, 2015).
50. N. Sharkey, The Ethical Frontiers of Robotics. Science, Dec 19 (2008).
51. B. Miller, D. Feil-Seifer, Embodiment, Situatedness and Morphology for Humanoid Interaction, in Section: Human-Humanoid Interaction, Humanoid Robotics: A Reference (Springer, London).
52. F. Ferland, R. Agrigoroaie, A. Tapus, Assistive Humanoid Robots for the Elderly with Mild Cognitive Impairment, in Section: Human-Humanoid Interaction, Humanoid Robotics: A Reference (Springer, London).
53. T. Nomura, Empathy as Signaling Feedback Between Humanoid Robots and Humans, in Section: Human-Humanoid Interaction, Humanoid Robotics: A Reference (Springer, London).
54. B. Becker, Social robots –
emotional agents: Some remarks on naturalizing man-machine interaction.
International Review of Information Ethics 6 (2006), 37-45.
55. R. Capurro, Th. Hausmanninger, K. Weber, F. Weil, Ethics in Robotics. International Review of Information Ethics 6 (2006), 1.
56. N. Kitano, 'Rinri': An Incitement towards the Existence of Robots in Japanese Society. International Review of Information Ethics 6 (2006), 79-82.
57. C. Wagner, Robotopia Nipponica: Recherchen zur Akzeptanz von Robotern in Japan (Tectum, Marburg, 2013).
58. M. Nakada, R. Capurro, An Intercultural Dialogue on Roboethics, in The Quest for Information Ethics and Roboethics in East and West: Research Report on Trends in Information Ethics and Roboethics. http://www.capurro.de/intercultural_roboethics.html. Accessed 27 Feb 2017.