Contribution to the AI, Ethics and Society Conference, University of Alberta, Edmonton (Canada), May 8-10, 2019. In: International Review of Information Ethics (IRIE) July, 2020.
Technical University (TU) Berlin: lecture series (Ringvorlesung) (PowerPoint), Marchstr. 23, Room MAR 1.001, 16:00, February 4, 2020.
Institute of Philosophy and Technology: IPT Talk Series 2022-2023. Athens, October 17, 2022.
Abstract

The paper presents some issues concerning the societal and ethical implications of AI that have arisen since the seventies and in which the author was involved. It is a narrative of how the understanding of AI dealt, firstly, with the question whether machines can think. With the rise of the Internet in the nineties the perception of AI turned, secondly, into the issue of what AI as distributed intelligence means, with an impact on all levels of social life no less than on basic ethical issues of daily life. In a breath-taking use of AI for all kinds of societal goals and contexts the awareness grew, thirdly, that all natural and artificial things might be digitally connected with each other and with human agents. The conclusion mentions some challenges relating to the development and use of artificial intelligences, as well as results of recent research done in academia, scientific associations and political bodies concerning the possibilities of a good life with and without artificial intelligences.

Introduction

What
does Artificial Intelligence (AI) mean in a broad
historical perspective? This is a question that
has not only sociological implications but
addresses the basic understanding of technology as
not being purely instrumental but shaping the
relation between man and world. AI is the spirit
of our time that conditions but does not determine
knowledge and action. The answer to this question
is a long and complex analysis going back to the
roots not only of Western philosophy but also to
other philosophical traditions. My aim in this paper is to recall some facts and discuss some arguments related to the societal and ethical implications of AI, particularly since the seventies, in which I was involved. My narrative about AI deals, firstly, with the question originating from cybernetics ―whether a machine can think― and I do this with reference to
authors such as Alan Turing, Norbert Wiener,
Joseph Weizenbaum, and Hubert Dreyfus. With the
rise of the Internet in the nineties the
perception of AI turned, secondly, into an
issue of what AI as distributed intelligence means
with an impact at all levels of social life. This
broad societal challenge was called cyberspace; it
was commonly perceived as a kind of separate
sphere from the real world. This dualism soon
became untenable. In a breath-taking development
of digital technology for all kinds of societal
goals and contexts, the awareness grew, thirdly,
that all natural and artificial things might be
digitally connected with each other as well as to
human agents into what is being called the
Internet of Things. The interpretation of AI
changed from the original question whether
machines can think into the one of what natural
and artificial things are when they become
intelligent.

I. Can machines think?
At every stage of technique
since Daedalus or Hero of Alexandria, the ability
of the artificer to produce a working simulacrum
of a living organism has always intrigued people.
This desire to produce and to study automata
has always been expressed in terms of the living
technique of the age. In the days of magic, we have
the bizarre and sinister concept of the Golem,
that figure of clay into which the Rabbi of Prague
breathed life with the blasphemy of the Ineffable
Name of God. In the time of Newton, the automaton
becomes the clockwork music box, with the little
effigies pirouetting stiffly on top. In the
nineteenth century, the automaton is a
glorified heat engine, burning some
combustible fuel instead of the glycogen of the
human muscles. Finally,
the present automaton opens doors by means of
photocells, or points guns to the place at which a
radar beam picks up an airplane, or computes the
solution of a differential equation. (Wiener 1965,
39-40)

We can enlarge this history with regard to literature (Karel Čapek: R.U.R. Rossum's Universal Robots, 1921; Stanisław Lem: Golem XIV, 1981; Isaac Asimov's Three Laws of Robotics became famous through the collection I, Robot, 1950) and particularly to film (Fritz Lang: Metropolis, 1927; Stanley Kubrick: 2001: A Space Odyssey, 1968; Aaron Lipstadt: Android, 1982; Ridley Scott: Blade Runner, 1982; Gene Roddenberry: Star Trek: The Next Generation, 1987-1994; Albert Pyun: Cyborg, 1989; Steven Spielberg: A.I. Artificial Intelligence, 2001; Alex Proyas: I, Robot, 2004). The term artificial intelligence was
first used in a scientific context in a workshop
at Dartmouth College in 1956.

The word cybernetics is of Greek origin. A cybernetes
is the pilot of a ship who faces the uncertainty of whether or not to start a voyage in view of the weather, the sea, the robustness of the ship, and the support of the crew. The Greeks called metis the savvy intelligence useful for any kind of risky endeavour. Metis has to do with skills, prudence, wisdom, cunning, and trickery (Detienne and Vernant 1974). In a foundational text of
Western thought, Aristotle writes in his Politics
that in order to live well (eu zen)
lifeless (apsycha) and living (empsycha)
instruments (organon) are needed for the
administration of the household (oikia).
The rudder is such a lifeless instrument for the
pilot of a ship, while a look-out man is a living
one. Similarly, a slave (doulos) is a
living possession which takes the place of all
other instruments. He writes: If every instrument
could accomplish its own work, following a command
or anticipating it, like the statues of Daedalus,
or the tripods of Hephaestus, which, says the
poet, of their own accord (automatous)
entered the assembly of the Gods; if, in like
manner, the shuttle would weave and the plectrum
touch the lyre without a hand to guide them,
the master builder (architektosin) would
not need servants (hypereton), nor
masters (despotais) slaves (doulon).
(Aristotle, Politics, 1253 b 25-39, revised
English translation, RC).

Aristotle ironically addresses a mythical society, a utopia in which work is based not on the use of slaves but on lifeless intelligent automata.
Karl Marx quotes this text in Das Kapital
by saying that neither Aristotle, "the greatest
thinker of antiquity," nor other thinkers could
comprehend [t]he economic paradox, that
the most powerful instrument for shortening
labour-time, becomes the most unfailing means for
placing every moment of the labourer’s time and
that of his family, at the disposal of the
capitalist for the purpose of expanding the value
of his capital. (Marx 1867, 335; Engl. transl.
2015, 278)

The use of machines based on steam power, electricity or digital technology creates new forms of the division of labour under what are, according to Marx and also to Norbert Wiener, new slave-like conditions.
Wiener wrote in 1950: Let us remember that the
automatic machine, whatever we think of any
feelings it may have or may not have, is the
precise economic equivalent of slave labor. Any
labor which competes with slave labor must accept
the economic conditions of slave labor. It is completely clear that
this will produce an unemployment situation, in
comparison with which the present recession and
even the depression of the thirties will seem a
pleasant joke. (Wiener 1989, 162)

It was Joseph Weizenbaum who in his book Computer Power and Human Reason (Weizenbaum 1976) also raised fundamental ethical issues of computer technology. The book was published ten years after his famous ELIZA — A Computer Program for the Study of Natural Language Communication between Man and Machine (Weizenbaum 1966). Weizenbaum's opus magnum was a result of
self-critical thinking. Herbert Simon published
his The Sciences
of the Artificial in 1969. In 1972 Hubert L.
Dreyfus published the influential book What
Computers Can't Do. The Limits of Artificial
Intelligence (Dreyfus 1972). Other studies dealing with AI followed, such as Margaret Boden: Artificial Intelligence and Natural Man (1977); Aaron Sloman: The Computer Revolution in Philosophy (1978); Daniel C. Dennett: Brainstorms (1978); Pamela McCorduck: Machines Who Think (1979); Don Ihde: Technics and Praxis. A Philosophy of Technology (1979); John R. Searle: Minds, Brains, and Programs (1980); Deborah G. Johnson: Computer Ethics (1985); Terry Winograd & Fernando Flores: Understanding Computers and Cognition (1986); P. S. Churchland: Neurophilosophy (1986); and Margaret Boden (ed.): The Philosophy of Artificial Intelligence (1990).

A
number of these scholars were deeply influenced,
as I was, by the traditions of hermeneutics and
phenomenology. Those who stand out include Hubert
Dreyfus ― I had the privilege to meet him in 1992
(Capurro 2018a) ― and Terry Winograd and Fernando
Flores. Winograd and Flores drew my attention to AI in the eighties and early nineties after I
published my post-doctoral thesis Hermeneutics
of Specialized Information
(Capurro 1986). The work analyzed the relationships
between human understanding and interaction with
computer-based information storage and retrieval. In
1987 I was invited by German philosophers Hans
Lenk and Günter Ropohl to write a contribution on
the emerging field of computer ethics for a reader
on Technology and Ethics, published by Reclam, well known for its little yellow paperbacks. The book included contributions
by Theodor W. Adorno, Hans Jonas, Kenneth D.
Alpern and Alois Huning dealing particularly with
ethical issues of engineering (Lenk & Ropohl
1987). Later on, I wrote two papers on Joseph
Weizenbaum whom I had the privilege to meet
several times (Capurro 1987, 1998). In
1987 I made a short presentation on AI at the 14th
German Congress of Philosophy. The argument
was, following Dreyfus, that while AI is based on
explicit formalized rules, human understanding is
incarnate in a body, sharing with others a common
world and related to a situation that it
transcends (Capurro 1987a). Winograd and Flores
addressed this difference with regard to the
design of computer systems that I analyzed in a
paper published in the Informatik-Spektrum,
the German journal of Computer Science (Capurro
1987b, 1992; see also: Capurro 1988, 1988a, 1989,
1990).

In 1988 I participated in the 18th World Congress of Philosophy held in Brighton.

It was the Jesuit theologian Karl Rahner (1904-1984) who in his book Geist in Welt (Spirit in the World) (Rahner 1957) opened my eyes to the
relevance of angelology when dealing with the
nature of human knowledge. According to Rahner, who develops his argument based on a detailed analysis of Thomas Aquinas' quaestio 89 of the Summa Theologiae, human knowing cannot be compared with God's knowledge, as the latter is completely different from ours. A comparison, as a thought experiment, with the angels' knowing lets the differentia specifica of human knowledge shine forth.
Being both creatures, humans and angels have
knowledge in common as tertium comparationis.
But humans are incarnated spirits that need to go
back to sensory experience (conversio ad
phantasmata) after the process of
abstraction (abstractio) of the forms,
which is not the case with angels. The view of
divine perennial substances (aidiai ousiai)
separated from the finite material natural
processes of becoming and decay ―the Latin terminus
technicus being intelligentiae separatae―
goes back to Aristotle's Metaphysics Book Lambda (Aristotle 1973). Aquinas' epistemological reflection on angels and humans is based on Latin translations of Islamic philosophers such as Avicenna, who themselves translated and commented on the Greek classics. In this context of
Greek-Persian-Arabic-Latin tradition, I first
discovered the Greek origin (eidos, idea,
typos, morphe) and the Latin roots of the
term information (informatio) as well as its relation to
the concepts of messenger and message (angelos/angelia)
(Capurro 1978, 2018; Capurro & Holgate 2011).
Aquinas' angelology has its source in the Bible,
according to which angels are immortal but not
eternal creatures. In many cases they are
God's messengers and not just, as in the case of
Aristotle, perennial star movers (Capurro 1993,
2017).

In our secular and technological age, the idea of creating artificial intelligences that would even supersede the human one can be considered in some way parallel to ancient and medieval thinking about divine and human intelligence, the relata now being the human (natural) and the artificial (digital). Artificial intelligences would not only enhance but eventually supersede human intelligence, as the debate on transhumanism and the singularity shows (Eden et al. 2012). In the early nineties I developed a critique of authors like Hans Moravec (1988) and Raymond Kurzweil (1999) on what I called cybergnosis (Capurro & Pingel 2002), an issue that caught my attention as related to the analogy between angels and computers (Capurro 1993; 1995, 78-96).

According
to Blaise Pascal: "L'homme n'est ni ange ni bête,
et le malheur veut que qui veut faire
l'ange fait la bête." (Pascal 1977, 572) The
English translation "Man is neither angel
nor beast, and whoever wants to act as an
angel, acts the beast" does not reflect the double
meaning of the verb "faire" ― to act but also to
make ―, although this second meaning is not the
one addressed by Pascal. Many years later, in
2010, I was invited to participate at an
international conference on Information
Ethics: Future of Humanities, organized by
the Oxford Uehiro Centre for Practical
Ethics, the Uehiro Foundation on Ethics
and Education, and the Carnegie Council for
Ethics in International Affairs, that
was held at St Cross College, Oxford. I presented
a paper with the title Beyond Humanisms.
My argument was that Western humanisms rest on a
fixation of the humanum. They are
metaphysical, although they might radically differ
from each other. I addressed the debate on trans-
and posthumanism as follows: The question about the
humanness of the human and its "beyond" is not any
more concerned with the relationship between the
human and the divine as was the case with the
classical humanisms in Antiquity, Renaissance and
Reformation, nor with the self-introspection of
the subject as in Modernity, but with the
hybridization of the human, particularly through
the digital medium as well as through the
possibilities to change the biological substrate
of the human species. A common buzz-word for these
issues is "human enhancement." (Capurro 2012b,
49-50).

The difference between strong and weak AI was one of the main issues discussed in the nineties (Capurro 1993). It dealt with the question of how far intelligence can be separated from its biological substrate, becoming (or being) a product of programming in the digital medium. The strong
dualistic thesis became more and more problematic
considering that matter matters, that is to say,
that natural intelligence is intrinsically related
to its embodiment and that a bottom-up procedure
must take the issue of the medium seriously or
otherwise consider that what is crucial with the
concept of artificial intelligence is not the
asymptotic and unachievable approach to human
intelligence but the difference created when
working with another medium. When dealing with
artificial intelligence(s) a key issue is to
clarify the concept of artificiality. It was the
Italian sociologist Massimo Negrotti who opened my
eyes on this matter (Negrotti 1999, 2002, 2012;
Capurro 1995a). The dualism between hardware and software underlying the strong AI thesis finds its counterpart in the metaphysical dualism between human and angelic intelligences. This dualism is portrayed by Lewis Carroll in the dialogue between Achilles and the Tortoise.

II. Distributed Intelligence
Social, ethical and legal issues of AI that had mainly been the object of academic discussion exploded in a global context that made manifest the different research agendas, cultural backgrounds and everyday lifeworlds of different societies. As the Internet took root, so
did concerns about an ever-growing schism between those with and those without access to its benefits. This was soon to be known as the digital divide. At the same time, it became evident that this was not only a technical issue: basic questions related to privacy and democracy were at stake. The World Summit on the Information Society (WSIS), organized by the United Nations, took place in Geneva in 2003.

Ethical
issues
of AI were discussed at the beginning of the new
century in two EU projects in which I
participated, namely ETHICBOTS (2005-2008) and
ETICA (2009-2011). The ETHICBOTS (Emerging
Technoethics of Human Interaction with
Communication, Bionic and Robotic Systems)
project took place under the leadership of the
Italian philosopher Guglielmo Tamburrini, University of Naples "Federico II". The project's approach rested on:

1. The triaging categories of imminence, novelty, and social pervasiveness to assess the urgency of and the need for addressing techno-ethical issues.

2. A variety of ethical approaches and perspectives to represent the ethical richness of the European culture and tradition. (Capurro, Tamburrini, Weber 2008, 14)

The
results of the project included a paper on Ethical
Regulations on Robotics in Europe
(Nagenborg, Capurro, Weber, Pingel 2008) as well as a book on Ethics and Robotics (Capurro & Nagenborg 2009). I presented some of these results in a contribution to the workshop L'uomo e la macchina. Passato e presente (Pisa 1967-2007) organized by the Philosophy Department of the University of Pisa.

The
ETICA (Ethical Issues of Emerging ICT
Applications) project (2009-2011), under the
leadership of the German philosopher Bernd Carsten
Stahl, dealt with the following technologies:
affective computing, ambient intelligence,
artificial intelligence, bioelectronics, cloud
computing, future internet, human-machine
symbiosis, neuroelectronics, quantum computing,
robotics, virtual/augmented reality. In the Ethical Evaluation by Michael Nagenborg and myself we summarized the ethical issues of AI as follows:

1. Human Dignity: The visions of "artificial persons" or "artificial (moral) agents" with corresponding rights are to be seen as being in contrast to the emphasis given to human rights in the European Union. This might be even more the case with anthropomorphic robots.

2. Autonomy and responsibility: The question of 'machine autonomy' does give rise to questions about human autonomy and responsibility.

3. Privacy: AI is one of the major building blocks of the surveillance society.

4. Cultural Diversity: Artificial moral agents with a strong bias towards a certain cultural identity might be in contrast to a pluralistic society.

5. Inclusion: AI might contribute to making ICT more accessible to many people, but it might also foster the digital divide.

6. Access to the labour market: AI systems are likely to replace humans in certain contexts.

7. Precautionary Principle: The precautionary principle might be invoked with regard to military applications of AI.

8. Principle of Transparency: The potential (bi-directional) dual use of AI systems calls for paying attention to the funding and future use of R&D in the field.

9. Likelihood of Ethical Issues: High. (Nagenborg & Capurro 2012, 20-21)

In
the recommendations we stated that "[t]he current research on Computer and Information Ethics is very much human-centred, which means that there is little to no research on animals or environmental issues. Therefore, we would like to encourage our colleagues to take some inspiration from the Ethics of the European Institutions and to overcome the bias towards humans." (Nagenborg & Capurro 2012, 75)

In
the Annex to this deliverable Lisa Stengel and Michael Nagenborg analyzed the question of how a technology becomes an ethical issue at the level of the EU, as in the case of the work done by the European Group on Ethics in Science and New Technologies (EGE) for the European Commission as well as by the National Ethics Committees. They remarked that the European Community had moved from a mere economic community to a political "community of values." According
to The Charter of Fundamental Rights of the European Union, such a "community of values" consists of human dignity, freedom, democracy, equality, the rule of law and respect for human rights (Nagenborg & Stengel 2012, 1-2).
The EGE issued an Opinion in 2005 on Ethical Aspects of ICT Implants in the Human Body (EGE 2005), of which Stefano Rodotà and I were the rapporteurs, which raised questions related to AI from the perspective of the European "community of values." We summarized the issues as follows:

“We shall not lay hand upon
thee”. This was the promise made in the Magna
Charta – to respect the body in its entirety: Habeas
Corpus. This promise has survived
technological developments. Each intervention on
the body, each processing operation concerning
individual data is to be regarded as related to
the body as a whole, to an individual that has to
be respected in its physical and mental integrity.
This is a new all-round concept of individual, and
its translation into the real world entails the
right to full respect for a body that is nowadays
both physical and electronic. In this new world,
data protection fulfils the task of ensuring the “habeas
data” required by the changed circumstances
– and thereby becomes an inalienable component of
civilisation, as has been the history for habeas
corpus. At the same time, this is a
permanently unfinished body. It can be manipulated
to restore functions that either were lost or were
never known – only think of maiming, blindness,
deafness; or, it can be stretched beyond its
anthropological normality by enhancing its
functions and/or adding new functions – again, for
the sake of the person’s welfare and/or social
competitiveness, as in the case of enhanced sports
skills or intelligence prostheses. We have to
contend with both repairing and capacity enhancing
technologies, the multiplication of body-friendly
technologies that can expand and modify the
concept of body care and herald the coming of
'cyborgs' – of the posthuman body. “In our
societies, the body tends to become a raw
material that can be modelled according
to environmental circumstances”. The
possibilities of customised configuration
undoubtedly increase, and so do the opportunities
for political measures aimed at controlling the
body by means of technology. The downright reduction of our body to a device does not only enhance the trend ―already pointed out― towards turning it increasingly into a tool to allow continuous surveillance of individuals. Indeed, individuals are dispossessed of their own bodies and thereby of their own autonomy. The body ends up being under others’ control. What can a person expect after being dispossessed of his or her own body? (EGE 2005, 29-30)

Stefano
Rodotà
(1933-2017), a famous Italian jurist and
politician, published the opus magnum Treatise
of Biolaw (Trattato di Biodiritto)
edited together with Paolo Zatti, the first volume
being Field and Sources of Biolaw (Ambito
e Fonti del Biodiritto) edited by
Mariachiara Tallacchini and himself. In a comprehensive contribution on "the new habeas corpus," Rodotà, who as far as I know first coined the concept of "habeas data" used in the 2005 EGE Opinion, analyzes key ethical and legal issues of the digitization of the human body
(Rodotà 2010, 169-230). According to Rodotà, the
basic right to informational self-determination,
overcomes the dualism between habeas corpus,
dealing with the protection of the physical body,
and habeas data, dealing with the
protection of the electronic one. There are not
different subjects to be protected but a common
object, namely "the person in its different
configurations, conditioned little by little in
its relation with the technologies that are not
only the electronic ones." (Rodotà 2010, 229, my
translation). The EGE issued two Opinions dealing
with Ethics of Information and Communication
Technologies (EGE 2012) and with Ethics
of Security and Surveillance Technologies
(EGE 2014).

The development and use of AI devices, particularly robots, accelerated and diversified due to the impact of the Internet on all areas of social life. Turing's question from 1950, "Can machines think?", turned more and more into the practical issue concerning their "intelligent behavior," Turing's formulation from 1948.
Building robots and the reflection upon them
becomes a social or moral issue, that is
to say, it concerns contexts of application with specific values, customs
and rules of behavior (Latin mores) and a
critical reflection thereupon. In 2009 I visited the Center for Cybernetics Research (Cybernics) at the University of Tsukuba. "The real intercultural ethical challenge in Japan is, I think, to ponder how robots become part of Japanese interplay between Japanese minds, which differs from the interplay in the "Far West," ― particularly as it is based on the Buddhist tradition of 'self-lessness' or Mu ― sharing a common Ba [place]" (Nakada & Capurro 2013, 14;
see also Nakada, Capurro, Sato 2017; Capurro
2017a; Tzafestas 2016, 155-175).

The nature of the self is indeed a key issue when dealing with the question of the interaction between artificial intelligences and human beings, given that artificial intelligences might mimic a self but are, in fact, not (so far) a who but a what. In an interdisciplinary project
organized by ACATECH (German Academy for Science
and Engineering), on the question of privacy and
trust on the internet, the ethics group, composed
of the Australian philosopher Michael Eldred, the
German lawyer Daniel Nagel and myself, developed a
view of privacy and trust based on this
difference. In the introduction to the ethics
chapter of the final report we stated: The concept of privacy
cannot be adequately determined without its
counterpart, publicness. Privacy and publicness
are not properties of things, data or persons, but
rather ascriptions dependent upon the specific
social and cultural context. These ascriptions
relate to what a person or a self (it may also be
several selves) divulges about him- or herself. A
self, in turn, is not a worldless, isolated
subject, but a human being who is and understands
herself always already connected with others in a
shared world. The possibility of hiding, of
displaying or showing oneself off as who one is,
no matter in what way and context and to what
purpose, is in this sense, as far as we know,
peculiar to human beings, but precisely not as the
property of a subject, but rather as a form of the
interplay of a human being's life as shared with
others. (Capurro, Eldred, Nagel 2012, 64)

In a special section of this chapter I analyzed intercultural aspects of digitally mediated whoness, privacy and freedom in the Far East, Latin America and Africa.

III. Natural and Artificial Intelligences
Since
then, robots have become increasingly socially
relevant beyond industry applications and an
entirely new conception, the Internet of Things,
has in conjunction with RFID technology become a
window into future possibilities. The term Internet of Things was coined by Kevin Ashton in 1999. Ashton wrote in 2009:

We're physical, and so is
our environment. Our economy, society and
survival aren't based on ideas or
information—they're based on things. You can't eat
bits, burn them to stay warm or put them in your gas
tank. Ideas and
information are important, but things matter much
more. [...] We need to empower computers with
their own means of gathering information, so they
can see, hear and smell the world for themselves,
in all its random
glory. RFID and sensor technology
enable computers to observe, identify and
understand the world—without the limitations of
human-entered data. Ten years on, we've made
a lot of progress, but we in
the RFID community need to understand
what's so important about what our technology
does, and keep advocating for it. It's not just a
"bar code on steroids" or a way to speed up
toll roads, and we must never allow our vision to
shrink to that scale. The Internet of
Things has the potential to change the world, just
as the Internet did. Maybe
even more so. (Ashton 2009)

The Internet was once an idea and is now a thing. The Internet of Things is an idea in the process of becoming a thing. It goes beyond the traditional view of artificially intelligent things called robots, envisaging the transformation of all kinds of natural and artificial things into digitally networked ones, although this might not necessarily be the case. It is, I believe, the powerful IT industry, with its insatiable hunger for digital data from users viewed as customers, which is strongly behind the idea and the reality of the Internet of Things. Intelligent things might also be designed and used as stand-alone devices, but this is bad news for the IT giants. The same is true of humans in so far as they might design their lives as being always online, which is being called onlife. Things and humans would eventually turn into just a bunch of digital data. A
"who" might be confused with a "what" and a "what"
might look like a "who." The difference between
living online and offline does not mean a kind of
Platonic dualism of separate worlds, but concerns the freedom of
choice, life design and the protection of privacy.
George Orwell's Animal Farm (Orwell 1989)
might turn into a digital farm. Without ideas nothing would change in the human world and, what is even more relevant, ideas make it possible for us humans to reflect on the foreseen and unforeseen changes that things bring about for us as well as for the non-human living and non-living, natural and artificial beings with whom we share a common world.

What
are things such as the Internet or the Internet of
Things? They are, prima facie, just tools.
What is a tool? In Heidegger's famous tool
analysis in Being and Time he coined the
terms "readiness-to-hand" and "presence-at-hand."
When tools break down, that is to say, when they lose their readiness-to-hand, the worldly context or structure of references to which they implicitly belong becomes manifest (Heidegger 1987, 102ff; Capurro 1992). In the introduction to Understanding Computers and Cognition, written sixty years after Heidegger's work, Winograd and Flores write: "[...] in designing tools we are designing ways of being" (Winograd & Flores 1986, xi). It is not due to the
Internet that things are embedded in semantic and
pragmatic networks but the other way round. It is because they, that is to say we, are from the start embedded in such networks that we are able to design a digital network and interact with things and among ourselves. This means a paradigm
change with regard to the way modernity conceived
things as objects in the so-called "outside world"
to be the correlate to an encapsulated subject.
Not only humans and natural things but also
artificial things become autonomous and networked
agents in the digital age. But what does autonomy
and action mean in each case? In a contribution to
the panel on Autonomic Computing, Human
Identity and Legal Subjectivity hosted by
Mireille Hildebrandt and Antoinette Rouvroy at the
International Conference: Computers, Privacy &
Data Protection: Data Protection in a Profiled
World that took place in Brussels in 2009 I wrote: At today’s early stage of
these breath-taking developments, it is difficult
to give a typology of the new kinds of digital and
living agents and the theoretical and practical
challenges arising from them. From a broad
perspective, these challenges are related, on the
one hand, to all kinds of robots, starting with
the so-called softbots (digital robots) as well as
to all kinds of physical robots – including the
(still speculative) nanobots based on
nanotechnology – with different degree of
complexity, including all forms of imitation of
human and non-human living beings (bionics). On
the other hand, there are the possibilities
arising from the hybridization of ICT with
non-human as well as human agents. ICT, or
other technologies, can become part of living
organisms, for instance as implants
(EGE 2005), or vice versa. In this case, humans
become (or have already become) "cyborgs" (Hayles
1999). Finally, synthetic biology allows the
artificial construction of new as well as the
genetic modification of living beings (EGE 2009;
Karafyllis 2003). (Capurro 2012a, 484) The
pervasive
use of AI raises the question of the very basic
understanding of technology as not being purely
instrumental but shaping the relation between man
and world. It belongs to what I call digital
ontology, that is to say, the interpretation
of the being of beings as well as of being itself
from a digital perspective as a possible one.
This ontological perspective might turn into a
metaphysical world view or, politically speaking,
into an ideology in case it becomes dogmatic,
immunizing itself from critique (Capurro 2006,
2008, 2017c). The
Finnish information security researcher Kimmo
Halunen recently wrote a contribution with the
title "Even artificial intelligences can be
vulnerable, and there are no perfect artificial
intelligence applications" (Halunen 2018). I asked
him if he was the first one to use the plural noun
"artificial intelligences" but he could not
clarify the issue. In any case, the use of the
plural noun might help to demystify the big noun
AI by paying attention to a diversity of
"artificial intelligence applications" that
differ from other kinds of natural
or artificial intelligences. Halunen writes: Artificial intelligence has
its own special characteristics that also make
other kinds of attacks against these systems
possible. Because an artificial
intelligence usually attempts some kind of
identification and then makes decisions based on it,
the attacker may want to trick the artificial
intelligence. This problem has been encountered in
the fields of pattern and facial recognition in
particular. Last year,
it was reported that Google’s artificial
intelligence algorithm was tricked into
classifying a turtle as a rifle. As for facial
recognition, makeup and hairstyles that fool
facial recognition algorithms have been
developed. Of course, people also make mistakes in
identifying objects or faces, but the methods used
for identification by an artificial intelligence
are very different. This means that the
errors made by an artificial intelligence seem
bizarre to humans, because even small children can
tell a turtle from a rifle, and these camouflage
methods do not work against people. In an automated environment, in which
artificial intelligence makes the decisions, such
deceptions can be successful and may help the
attacker. (Halunen 2018) What
moves artificial intelligences? Energy and human
needs, beliefs and desires reified in digital
algorithms (Capurro 2019). The question is not
primarily whether machines can think or how far
they can equal or even surpass human
intelligence ― other machines and living
beings supersede humans in many regards ― but
how we might be able to live with or without them
in different contexts in the life-world.
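The kind of deception Halunen describes can be illustrated with a toy sketch: a linear classifier whose decision is flipped by a small, systematically chosen perturbation of its input, in the spirit of the "gradient sign" attacks behind the turtle/rifle result. The weights, inputs and labels below are invented for the illustration, not taken from any real system:

```python
def classify(weights, x, bias=0.0):
    """Return 'rifle' if the linear score is positive, else 'turtle'."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "rifle" if score > 0 else "turtle"

def adversarial(weights, x, eps):
    """Shift every feature by +/-eps in the direction that raises the
    score most, i.e. along the sign of the corresponding weight."""
    return [xi + eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.8, -0.5, 0.3, -0.9]   # assumed model parameters
x = [0.1, 0.4, 0.2, 0.3]           # an input the model sees as a 'turtle'

print(classify(weights, x))                                  # turtle
print(classify(weights, adversarial(weights, x, eps=0.2)))   # rifle
```

Each feature moves by only 0.2, a change that looks negligible to a human observer, yet the decision flips; real attacks on deep image classifiers exploit the same principle in a far higher-dimensional space, which is why the resulting errors seem so bizarre to humans.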
Artificial intelligences or, for that matter,
computer programs can break down, as Winograd and
Flores pointed out. In my contribution to the
international conference Artificial Intelligence &
Regulation, organized by LUISS (Libera Università
Internazionale degli Studi Sociali Guido Carli),
I wrote: Algorithms are implicitly or
explicitly designed within the framework of social
customs. They are embedded in cultures
from scratch. According to the phenomenologist Lucas
Introna, creators and users
are "impressed" by algorithms (Introna
2016). The "impressionable subject," however, is not
the modern subject detached from the so-called
outside world, but a plurality of selves sharing a
common world that is algorithmically intertwined.
What is ethically at stake when dealing with
algorithms that become part of human mores? What is
the nature of this entanglement between
human mores and algorithms? To what extent
can it be said that algorithms are, in fact,
cultural? Who is responsible for the decisions taken
by algorithms? To what extent is this
anthropomorphic view of algorithms legitimate for
understanding what algorithms are? These are
some foundational questions for the
ethics of algorithms, which is still in an incipient state
(Mittelstadt et al. 2016). [...] The present casting of
ourselves as homo digitalis (Capurro
2017) opens the possibility of reifying ourselves
algorithmically. The main ethical challenge for
the unfolding digital age consists in unveiling
the ethical difference, particularly when
addressing the nature of algorithms and their
ethical and legal regulation. (Capurro 2019,
forthcoming) The
debate on driverless cars sometimes obfuscates
basic questions on mobility that affect societies
and individuals in the 21st century. At least some
parts of industry seem to be interested in these
issues, though. I received an invitation from the Verband
der Automobilindustrie (VDA) (German
Association of the Automobile Industry) to a
dialogue with the CEO of Continental AG,
Dr. Elmar Degenhart (VDA 2016a). Last
but not least, I would like to mention the current
issue of the International Review of
Information Ethics dealing with "Ethical
Issues of Networked Toys," the guest editors being
Juliet Lodge and Daniel Nagel. In their
introductory remarks they write: Networked toys - Artificial
guardians for little princesses or demonic plastic
princes? Networked toys dominate the shelves in
toy stores at a time when neither their real
benefits nor their potentially latent dangers have
been fully explored. Do hyper-connected toys
transform the relationship between adults, the
child and its environment? Do they shape their
minds and predispose them to seek convenience and
speedy responses rather than rely on their own
autonomous capacities for critical thought? Questions such as who really
is in control arise, both of the toys ―parents,
third parties or even the toddlers themselves― and
of data (including biometrics) that might be
collected for unclear purposes and opaque
destinations. For what specific or linkable
purpose and above all where and to whom is data
transmitted? What ethical considerations should be
addressed? Is there an actual benefit
for the children themselves? Do hyper-connected
devices and robo-toys teach them how to handle
technology or do they erode their capacity for
autonomous reflection as speed and convenience are
prioritised in their on-line and off-line worlds?
Do such toys presage fundamental transformation of
childhood and the imagined and physical worlds?
(Lodge & Nagel 2018)
Conclusion: Enlightening the Digital
Enlightenment What
is the task of information ethics in the era of
artificial intelligences? Answer: enlightening the
digital enlightenment, following the path of
thought of the philosophers Max Horkheimer and
Theodor W. Adorno in their influential collection
of essays, published in a revised edition in 1947
under the title Dialectics of Enlightenment
(Dialektik der Aufklärung) (Horkheimer
& Adorno 1975). A main insight of this book is
the ambivalence of the project(s) of enlightenment
coming from the social revolutions of the
nineteenth century
but going back to the dialectics between mythology
and science that characterizes European
Enlightenment particularly in the eighteenth
century. Enlightenment must take care of this
ambivalence that might revert digital
enlightenment into digital mythology. This
narrative shows that the changing meanings of the
concept of artificial intelligence(s) since the
middle of the last century depend both on the
state of the art of digital technology and on
the different contexts in which it has been
used. Looking back at my personal experiences
since the early seventies and the changing
academic debates in the years that followed, I
dare make no forecast beyond what appear today as
challenges in the near future. The
task of taming the digital chaos through different
kinds of national and international regulations is
still very much in its early stages and
depends on how awareness of these issues
takes root across the globe. Enlightened awareness
addresses several problems such as ecological
issues, sustainability, taxation, state
regulation, fake news, cyber wars, digital
capitalism, digital colonialism, social justice,
surveillance society, digital addiction, the
future of work, and who we are as cybercitizens in
the digital age. Toni Samek and Lynette Schultz
organized an Information Ethics Roundtable on these
issues; see the volume Information Ethics,
Globalization and Citizenship (Samek & Schultz, eds.). Who
should take care of the enlightenment of the
digital enlightenment? Answer: universities,
research institutions, scientific associations,
governments and the media. As a paramount example
of a scientific association leading in the field
of enlightening the digital enlightenment I would
like to mention the Institute of Electrical and
Electronics Engineers Standards Association
(IEEE). It brought about the Global Initiative on
Ethics of Autonomous and Intelligent Systems, under the
leadership of managing director Konstantinos
Karachalios. John Havens served as Executive
Director for the Ethical Considerations in
Artificial Intelligence and Autonomous Systems.
Jared Bielby chaired the Committee for Classical
Ethics. The final report represents the collective
input of several hundred participants from six
continents, thought leaders from
academia, industry, civil society, policy and
government (IEEE 2016). I quote the introduction in
extenso: The task of the Committee
for Classical Ethics in Autonomous and Intelligent
Systems is to apply classical ethics methodologies
to considerations of algorithmic design in
autonomous and intelligent systems (A/IS) where
machine learning may or may not reflect ethical
outcomes that mimic human decision-making. To meet
this goal, the Committee has drawn from classical
ethics theories as well as from the disciplines of
machine ethics, information ethics, and technology
ethics. As direct human control over tools
becomes, on one hand, further removed, but on the
other hand, more influential than ever through the
precise and deliberate design of algorithms in
self-sustained digital systems, creators of
autonomous systems must ask themselves how
cultural and ethical presumptions bias
artificially intelligent creations, and how these
created systems will respond based on such design.
By drawing from over two thousand years’ worth of
classical ethics traditions, the Classical Ethics
in Autonomous and Intelligent Systems Committee
will explore established ethics systems,
addressing both scientific and religious
approaches, including secular philosophical
traditions such as utilitarianism, virtue ethics,
deontological ethics and religious and
culture-based ethical systems arising from
Buddhism, Confucianism, African Ubuntu traditions,
and Japanese Shinto influences toward an address
of human morality in the digital age. In doing so
the Committee will critique assumptions around
concepts such as good and evil, right and wrong,
virtue and vice and attempt to carry these
inquiries into artificial systems decision-making
processes. Through reviewing the philosophical
foundations that define autonomy and ontology, the
Committee will address the potential for
autonomous capacity of artificially intelligent
systems, posing questions of morality in amoral
systems, and asking whether decisions made by
amoral systems can have moral consequences.
Ultimately, it will address notions of
responsibility and accountability for the
decisions made by autonomous systems and other
artificially intelligent technologies. (IEEE 2016) At
the political level I highlight and support the
recent activities of the European Union,
particularly the Communication from the Commission
to the European Parliament, the European Council,
the Council, the European Economic and Social
Committee and the Committee of the Regions:
Artificial Intelligence for Europe (European Union 2018).

References

Aristotle (1973). Metaphysica. Ed. W. Jaeger.

Aristotle (1950a). Physica. Ed. W. D. Ross.

Ashton, K. (2009). The 'Internet of Things' Thing. In the real world, things matter more than ideas. In: RFID Journal.

Boden, M. A. (1977). Artificial Intelligence and Natural Man.

Boden, M. A. (ed.) (1990). The Philosophy of Artificial Intelligence.
Capurro,
R.
(1986). Hermeneutik der Fachinformation
(Hermeneutics of Scientific Information).
Freiburg/München: Alber. Capurro, R. (1987). Zur
Computerethik. Ethische Fragen der
Informationsgesellschaft (Computer Ethics. Ethical
Questions of the Information Society). In: H. Lenk,
G. Ropohl (eds.): Technik und Ethik. Stuttgart:
Reclam, 259-273.

Capurro, R. (1987a). Zur Kritik der künstlichen Vernunft. Oder über die Einheit und Vielheit des Intellekts (A Critique of Artificial Reason. On the Unity and Diversity of the Intellect). In: O. Marquard, P. Probst (eds.): Proceedings of the 14th German Congress of Philosophy,
Capurro, R. (1988). Die
Inszenierung des Denkens. In: Mensch – Natur –
Gesellschaft Jg. 5, Heft 1, 1988, 18-26.

Capurro, R. (1988a). Von der Künstlichen Intelligenz als einem ästhetischen Phänomen. Eine kritische Reflexion in Kantischer Absicht (Artificial Intelligence from an Aesthetic Perspective. A Kantian Critique).

Capurro,
R.
(1989). Stellt die KI-Forschung den Kernbereich der
Informatik dar? (Is AI
Research the Core of Computer Science?). In: J.
Retti & K. Leidlmaier (eds.): 5. Österreichische
Artificial-Intelligence-Tagung: Igls/Tirol, 28.-31.
März 1989, Berlin: Springer 1989, S. 415-421. Capurro, R. (1989a). Der
Kongress. Rafael Capurro über den XVIII. Weltkongress
für Philosophie in Brighton, GB (21.-27. August 1988) (The Congress.
Rafael Capurro on the 18th World Congress of
Philosophy in Capurro,
R.
(1990). Ethik und Informatik (Ethics and
Informatics) In: Informatik-Spektrum 13, 311-320. Capurro, R. (1992)
Informatics and Hermeneutics. In: Ch. Floyd, H. Züllighoven, R.
Budde, R. Keil-Slawik (eds.): Software
Development and Reality Construction. Berlin:
Springer, 363-375. Capurro,
R.
(1993). Ein Grinsen ohne Katze. Von der
Vergleichbarkeit zwischen 'künstlicher Intelligenz'
und 'getrennten Intelligenzen' (A Grin without a
Cat. On the Comparison between AI and Separate
Intelligences). In: Zeitschrift für philosophische
Forschung, 47, 93-102. Capurro,
R.
(1995). Leben im Informationszeitalter (Living in
the Information Age). Berlin: Akademie Verlag. Capurro,
R.
(1995a). On Artificiality.
Working paper published by IMES (Istituto
Metodologico Economico Statistico) Laboratory for
the Culture of the Artificial, Università di
Urbino, IMES-LCA WP.
Capurro, R. (2006). Towards
an Ontological Foundation of Information Ethics.
In: Ethics and Information
Technology 8 (4) 175-186 Capurro, R. (2008). On
Floridi's Metaphysical Foundation of Information
Ecology. In: Ethics and Information
Technology, 10 (2-3) 167-173. Capurro,
R. (2009). Ethics and Robotics. In: R. Capurro, M. Nagenborg
(eds.). Ethics and Robotics. Heidelberg:
Akademische Verlagsgesellschaft, 117-123. Capurro,
R.
(2010). Digital Ethics. In:
The Capurro, R. (2010a). Ethics
and Public Policy in Capurro, R. (2011). The
Quest for Roboethics: A Survey. In: T. Kimura, M.
Nakada, K. Suzuki, Y. Sankai (eds.): Cybernics
Technical Reports. Special Issue on Roboethics.
University of Tsukuba, CYB-2011-001 - CYB-2011-008,
39-59. Capurro, R. (2012). Intercultural Aspects of
Digitally Mediated Whoness, Privacy and Freedom.
In: R. Capurro,
M. Eldred, D. Nagel: 'IT and Privacy from an
Ethical Perspective: Digital Whoness: Identity,
Privacy and Freedom in the Cyberworld'. In:
Johannes Buchmann (ed.) Internet Privacy -
Eine multidisziplinäre Bestandsaufnahme. A
Multidisciplinary Analysis. Acatech
Studie, Berlin, 113-122.

Capurro, R. (2012a). Towards
a Comparative Theory of Agents. In: AI & Society, 27 (4),
479-488. Capurro, R. (2012b). Beyond
Humanisms. In: T. Nishigaki, T. Takenouchi (eds.):
Information Ethics. The Future of the Humanities,
Capurro, R. (2017). Engel,
Menschen und Computer. Zur Relevanz der thomistischen
Engellehre für die philosophische Anthropologie (Angels, Humans and
Computers. On the Relevance of Thomas Aquinas's
Angelology for Philosophical Anthropology).

Capurro,
R. (2017a). Intercultural Roboethics for a Robot
Age. In: M.
Nakada, R. Capurro, K. Sato (eds.) (2017). Critical Review of
Information Ethics and Roboethics in East and
West. Master's and Doctoral Program in
International and Advanced Japanese Studies,
Research Group for "Ethics and Technology in the
Information Era"),
Capurro, R. (2017c). Homo
Digitalis.
Beiträge zur Ontologie, Anthropologie und Ethik der
digitalen Technik. (Homo
Digitalis. Ontological, Anthropological and
Ethical Contributions to Digital Technology). Capurro, R. (2017d).
Citizenship in the Digital Age. In: T. Samek, L.
Schultz (eds.): Information Ethics, Globalization
and Citizenship. Essays on Ideas to Praxis. Capurro,
R.
(2017e). Robotic Natives. Leben mit Robotern im 21.
Jahrhundert. In: R. Capurro: Homo Digitalis.
Beiträge zur Ontologie, Anthropologie und Ethik der
digitalen Technik. Heidelberg: Springer, 109-124. Capurro, R. (2018). Apud
Arabes. Notes on Greek, Latin, Arabic, Persian,
and Hebrew Roots of the Concept of Information. Capurro, R. (2018a). A Long-Standing Encounter.
In: AI & Society, 1-2. Capurro,
R. (2019). Enculturating Algorithms. In:
Nanoethics, 1-17. Capurro,
R. (2019a). Ethical
Issues of Humanoid-Human Interaction. In: A. Goswami, P.
Vadakkepat (eds.): Humanoid Robotics: A Reference.
Capurro, R., Eldred, M.,
Nagel, D. (2012): ‘IT and Privacy from an Ethical
Perspective: Digital Whoness: Identity, Privacy
and Freedom in the Cyberworld. In: J.
Buchmann (ed.) Internet Privacy – Eine
multidisziplinäre Bestandsaufnahme. A
Multidisciplinary Analysis. Acatech Studie,
Berlin, 63-142. Capurro, R., Holgate, J.
(eds.) (2011).
Messages and Messengers. Angeletics as an Approach
to the Phenomenology of Communication. München:
Fink.

Carroll, L. (1965). The Annotated Alice.

Churchland, P.S. (1986). Neurophilosophy: Toward a Unified Science of the Mind-Brain.

Copeland, B. J.,
Proudfoot, D. (1999). Alan Turing's
Forgotten Ideas in Computer Science. In: Scientific
American, April, 99-103.
Capurro, R., Tamburrini, G., Weber, J. (2008). ETHICBOTS, D 5: Techno-ethical Case-Studies in Robotics, Bionics, and Related AI Agent Information Technologies.

Capurro, R., Nagenborg, M. (eds.) (2009). Ethics and Robotics. Heidelberg: Akademische Verlagsgesellschaft.

Dennett, D. C. (1978). Brainstorms: Philosophical
Essays on Mind and Psychology. Detienne, M., Vernant, J.-P.
(1974). Les ruses de l'intelligence. La mètis
des Grecs. (Cunning Intelligence in Greek Culture
and Society). Paris: Flammarion. Dreyfus, H.L. (1972).
What Computers Can't Do. The Limits of Artificial
Intelligence. European Group on Ethics in
Science and New Technologies (EGE) (2005). Opinion
No. 20: Ethical Aspects of ICT Implants in the
Human Body. European Group on Ethics in
Science and New Technologies (EGE) (2009). Opinion
No. 25: Ethics of Synthetic Biology.

European Group on Ethics in
Science and New Technologies (EGE) (2012). Opinion
No. 26: Ethics of information and communication
technologies. European Group on Ethics in
Science and New Technologies (EGE) (2014). Opinion
No. 28: Ethics of Security and Surveillance
Technologies.

European Union (2018). Communication: Artificial Intelligence for Europe. COM(2018) 237 final.

Flores Morador, F. (2015). Encyclopedia of Broken
Technologies - The Humanist as Engineer (October
23, 2015). Broken Technologies. The
Humanist as Engineer. Ver 3.0, Lund University,
Revised 2015. Goswami, A., Vadakkepat, P.
(eds.) (2019). Humanoid Robotics: A Reference.

Halunen, K. (2018). Even artificial intelligences can be vulnerable, and there are no perfect artificial intelligence applications. VTT Technical Research Centre of Finland.

Heidegger, M. (1987). Being and Time. Transl. J. Macquarrie, E. Robinson. Oxford: Blackwell.

Horkheimer, M., Adorno, Th. W. (1975). Dialektik der Aufklärung (Dialectics of Enlightenment). Frankfurt am Main: Fischer.

IEEE Standards Association
(2016). Ethically
Aligned Design: A Vision for Prioritizing Human
Well-being with Autonomous and Intelligent
Systems. Ihde,
Don
(1979). Technics and Praxis: A Philosophy of
Technology. Kurzweil, R.
(1999). The Age of Spiritual Machines. Lenk, H., Ropohl, G. (eds.) (1987). Technik und Ethik (Technics and Ethics). Stuttgart: Reclam. Karafyllis,
N.C.
(2003). Versuch über den Menschen zwischen Artefakt
und Lebewesen. (Essay
on Human Being between Artefact and Living Being).
Lodge, J., Nagel, D. (eds.)
(2018). Ethical Issues of Networked Toys. In:
International Review of Information Ethics, Vol.
27. McCorduck,
P.
(1979). Machines Who Think. A Personal Inquiry
into the History and Prospects of Artificial
Intelligence. McLuhan, M. (1962). The
Gutenberg Galaxy: the making of typographic man. Marx, K. (1867). Das
Kapital. Kritik der politischen Ökonomie.
Marx-Engels-Gesamtausgabe (MEGA), 1. Band. Berlin:
de Gruyter. http://telota.bbaw.de/mega/ (Engl.
quote from Capital. A Critique of Political
Economy, Transl. S. Moore, E. Aveling (2015).
www.marxists.org/archive/marx/works/download/pdf/Capital-Volume-I.pdf.

Nakada, M. (2011). Ethical and Critical Analysis of the Meanings of 'Autonomy' of Robot Technology in the West and Japan.

Nakada,
M.,
Capurro, R. (2013). An Intercultural Dialogue on
Roboethics. In: M.
Nakada, R.l Capurro (eds.): The Quest for
Information Ethics and Roboethics in East and
West. Research Report on trends in
information ethics and roboethics in Japan and the
West. ReGIS (Research Group on the Information Society) and ICIE (International Center for Information Ethics).

Nakada, M., Capurro, R.,
Sato, K. (eds.) (2017). Critical Review of
Information Ethics and Roboethics in East and
West. Master's and Doctoral Program in
International and Advanced Japanese Studies,
Research Group for "Ethics and Technology in the
Information Era"), Nagenborg, M., Capurro, R.,
Weber, J., Pingel, Chr. (2008). Ethical Regulations on Robotics in Europe.

Nagenborg, M., Capurro, R.
(2012). Ethical Evaluation. Part of D.3.2
(Evaluation Report) Negrotti, M. (1999). The
Theory of the Artificial. Negrotti, M. (2012). The
Reality of the Artificial. Nature, Technology and
Naturoids. Pascal, B. (1977). Pensées.
Rahner, K. (1957). Geist in Welt (Spirit in the World). München: Kösel (2nd ed.) Rodotà, S. (2010). Il nuovo
Habeas Corpus: La persona costituzionalizzata e la
sua autodeterminazione (The New Habeas Corpus: The
New Constitutionalised Person and its
self-determination). In: S. Rodotà, M. Tallachini
(eds.): Ambito e fonti del biodiritto. Milano:
Giuffrè, 169-230. Samek, T., Schultz, L.
(eds.): Information Ethics, Globalization and
Citizenship. Essays on Ideas to Praxis. Searle, J.R. (1980). Minds,
Brains, and Programs. In: Behavioral and Brain
Sciences 3 (3), 417-457. Simon, H. (1969). The
Sciences of the Artificial. Sloman, A. (1978). The
Computer Revolution in Philosophy. Philosophy,
Science and Models of Mind. The Stahl, B.C. (2016). Ethics
of European Institutions as Normative Foundation
of Responsible Research and Innovation in ICT. In:
M. Kelly, J. Bielby (eds.): Information Cultures
in the Digital Age. A Festschrift in Honor of
Rafael Capurro. Stengel, L., Nagenborg, M.
(2011). Reconstructing European Ethics. How does a
Technology become an Ethical Issue at the Level of
the EU? ETICA, Deliverable 3.2.2 Turing, A. M. (1950).
Computing Machinery and Intelligence. In: Mind,
Vol. LIX, Issue 235, 433-460 Turing, A. M. (1948).
Intelligent Machinery, report written for the
National Physical Laboratory.

Tzafestas, S.G. (2016). Roboethics. A Navigating Overview. Heidelberg: Springer.

VDA (Verband der Automobilindustrie) (2016a).
Dialogreihe Mobilität von
morgen. Gespräch mit dem
Vorstandsvorsitzenden der Continental AG, Dr.
Elmar Degenhart (Dialogue on Mobility Tomorrow with
the CEO of Continental AG, Dr. Elmar Degenhart). Wallach, W., Allen, C.
(2009): Moral Machines: Teaching Robots Right from
Wrong. Weinberg, A. E. (1963).
Science, Government, and Information. A report of
The President's Science Advisory Committee. Weizenbaum, J. (1966).
ELIZA — A Computer Program for the Study of
Natural Language Communication between Man and
Machine. In: Communications of the
Association for Computing Machinery 9, 36-45. Weizenbaum, J. (1976).
Computer Power and Human Reason: From Judgement To
Calculation. Wiener, N. (1965).
Cybernetics: or Control and Communication in the
Animal and the Machine. Wiener, N. (1989). The Human
Use of Human Beings. Cybernetics and Society.

Winograd, T., Flores, F. (1986). Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ: Ablex.

Last update: February 9, 2022