Computer Science

Emerging and Converging Technologies

 Chapter 12 examines ethical aspects of three key emerging/converging technologies:

 ambient intelligence (AmI),

 nanocomputing,

 autonomous machines (AMs).

 This chapter also examines issues in the emerging field of machine ethics, and it describes a “dynamic” ethical framework for addressing challenges likely to arise from emerging technologies.

Converging Technologies and Technological Convergence

 Before examining specific emerging and converging technologies, we first consider what is meant by the concept of “technological convergence.”

 Howard Rheingold (1992) notes that technological convergence occurs when unrelated technologies or technological paths intersect or “converge unexpectedly” to create an entirely new field.

Technological Convergence (Continued)

 We should note that convergence in the context of cybertechnology is by no means new or even recent, but it has been ongoing since this technology’s inception.

 For example, in Chapter 1 we saw that early network technologies resulted from the convergence of computing and communications technologies in the late 1960s and early 1970s.

 Howard Rheingold notes that virtual-reality (VR) technology (examined in Chapter 11) resulted from the convergence of video technology and computer hardware in the 1980s.

Converging Technologies and Pervasive Computing

 Currently, cybertechnology is converging with non-cybertechnologies at an unprecedented pace.

 For example, cyber-specific technologies are converging with non-cybertechnologies, such as biotechnology and nanotechnology.

 Cybertechnology is also becoming pervasive and ubiquitous as computing devices now permeate both our public and private spaces (in connection with ambient-intelligence-related technologies).

1. Ambient Intelligence (AmI)

 Ambient Intelligence (AmI) is typically defined as a technology that

enables people to live and work in environments that respond to them in “intelligent ways” (Aarts and Marzano, 2003; Brey, 2005; and Weber et al., 2005).

 Review the example in the textbook of the (hypothetical) “intelligent home,” which incorporates key aspects of (and is made possible by) AmI.

 Also review Scenario 1-1 in the textbook, which illustrates an instance of the Internet of Things (IoT) and which is made possible, in large part, by AmI.

AmI (Continued)

 AmI has benefited from, and has been made possible by, developments in the field of artificial intelligence (AI), described in Chap. 11.

 AmI has also benefited from the convergence of three key technological components, which underlie it:

1) pervasive computing,

2) ubiquitous communication,

3) intelligent user interfaces (IUIs).

1.1 Pervasive Computing

 What, exactly, is pervasive computing?

 According to the Centre for Pervasive Computing, this technology is defined as a computing environment where information and communication technology are “everywhere, for everyone, at all times.”

 This technology is already integrated in our everyday environments – i.e., from “toys, milk cartons and desktops, to cars, factories, and whole city areas.”

Pervasive Computing (Continued)

 Pervasive computing is made possible because of the increasing ease with which circuits can be embedded into objects, including wearable, even disposable items.

 Computing has already begun to pervade many dimensions of our lives.

 For example, it now pervades the work sphere, cars, public transportation systems, the health sector, the market, and our homes (Bütschi, et al., 2005).

Pervasive Computing (Continued)

 Pervasive computing is sometimes also referred to as ubiquitous computing (or ubicomp).

 The term “ubiquitous computing” was coined by Mark Weiser (1991), who envisioned “omnipresent computers” that serve people in their everyday lives, both at home and at work.

1.2 Ubiquitous Communication

 For pervasive computing to operate at its full potential, continuous and ubiquitous communication between devices is also needed.

 Ubiquitous communication aims at ensuring flexible and omnipresent communication between interlinked computer devices (Raisinghani et al., 2004) via:

 wireless local area networks (W-LANs),

 wireless personal area networks (W-PANs),

 wireless body area networks (W-BANs),

 Radio Frequency Identification (RFID).

1.3 Intelligent User Interfaces (IUIs)

 Intelligent User Interfaces (or IUIs) have been made possible by developments in AI.

 Brey (2005) notes that IUIs go beyond traditional interfaces such as a keyboard, mouse, and monitor.

IUIs (Continued)

 IUIs improve human interaction with technology by making it more intuitive and more efficient than was previously possible with traditional interfaces.

 With IUIs, computers can “know” and sense far more about a person – including information about that person’s situation, context, or environment – than was possible with traditional interfaces.

IUIs (Continued)

 With IUIs, AmI remains in the background and is virtually invisible to the user.

 Brey notes that with IUIs, people can be surrounded with hundreds of intelligent networked computers that are “aware of their presence, personality, and needs,” yet remain unaware of the existence of these IUIs in their environments.

Ethical and Social Issues Affecting AmI

 We briefly examine three kinds of ethical/social issues affecting AmI:

1. freedom and autonomy;

2. technological dependency;

3. privacy, surveillance, and the “Panopticon.”

1.3.1 Autonomy and Freedom Involving AmI

 We can ask whether human autonomy and freedom will be enhanced or diminished as a result of AmI technology.

 AmI’s supporters suggest humans will gain more control over the environments with which they interact because technology will be more responsive to their needs.

 Brey notes a paradoxical aspect of this claim, pointing out that “greater control” is presumed to be gained through a “delegation of control to machines.”

Autonomy and Freedom (Continued)

 Brey describes three ways in which AmI may make the human environment more controllable, noting that it can:

i. become more responsive to the voluntary actions, intentions, and needs of users;

ii. supply humans with detailed and personal information about their environment;

iii. do what people want without having to engage in any cognitive or physical effort.

Autonomy and Freedom (Continued)

 Brey also describes three ways that AmI can diminish the amount of control that humans have over their environments, where users may lose control because a smart object can:

i. make incorrect inferences about the user, the user’s actions, or the situation;

ii. require corrective actions on the part of the user;

iii. represent the needs and interests of parties other than the user.

1.3.2 Technological Dependency

 Consider how much we have already come to depend on cybertechnology in conducting so many activities in our day-to-day lives.

 In the future, will humans come to depend on the kind of smart objects and smart environments (made possible by AmI technology) in ways that exceed our current dependency on cybertechnology?

Technological Dependency (Continued)

 On the one hand, IUIs could relieve us of having to worry about performing many of our routine day-to-day tasks, which can be considered tedious and boring.

 On the other hand, however, IUIs could also eliminate much of the cognitive effort that has, in the past, enabled us to be fulfilled and to flourish as humans.

Technological Dependency (Continued)

 What would happen to us if we were to lose some of our cognitive capacities because of an increased dependency on cybertechnology?

 Review Scenario 12-2 (in the textbook), based on E. M. Forster’s insights about what could happen to a society that becomes too dependent on machines.

1.3.3 Privacy, Surveillance, and the Panopticon

 Langheinrich (2001) argues that with respect to privacy and surveillance, four features differentiate AmI from other (mostly earlier) kinds of computing applications:

i. ubiquity,

ii. invisibility,

iii. sensing,

iv. memory application.

Privacy, Surveillance, and the Panopticon (Continued)

 Langheinrich notes that because computing devices are ubiquitous or omnipresent in AmI environments, privacy threats are more pervasive in scope.

 He also notes that because computers are virtually invisible in AmI environments, it is unlikely that users will realize that computing devices are present and are being used to collect and disseminate their personal data.

Privacy, Surveillance, and the Panopticon (Continued)

 Langheinrich also believes that AmI poses a more significant threat to privacy than earlier computing technologies because:

a) sensing devices associated with IUIs may become so sophisticated that they will be able to sense (private) human emotions like fear, stress, and excitement;

b) this technology has the potential to create a memory or “life-log” – i.e., a complete record of someone’s past.

Surveillance and the Panopticon

 Čas (2004) notes that in AmI environments, no one can be sure that he or she is not being observed.

 Because of AmI environments, it may be prudent for a person to assume that information about his or her presence (at any location and at any time) is being recorded.

Surveillance and the Panopticon (Continued)

 Čas believes that it is realistic to assume that any activity (or inactivity) about us that is being monitored in an AmI environment may be used in any context in the future.

 So, people in AmI environments are subject to a virtual “panopticon.”

 Review Scenario 12-3 (in the textbook), based on Bentham’s idea of the Panopticon.

 Which AmI-related threats does it anticipate?

Table 12-1 Ambient Intelligence

Technological Components               Ethical and Social Issues Generated
Pervasive Computing                    Freedom and Autonomy
Ubiquitous Communication               Privacy and Surveillance
Intelligent User Interfaces (IUIs)     Technological Dependence

2. Nanotechnology and Nanocomputing

 A number of ethical and social controversies arise at the intersection of two distinct technologies that are now also converging – viz., cybertechnology and nanotechnology.

 Ruth Chadwick and Antonio Marturano (2006) argue that nanotechnology provides the “key” to technological convergence in the 21st century.

Defining Nanotechnology

 Rosalind Berne (2015) defines nanotechnology as “the study, design, and manipulation of natural phenomena, artificial phenomena, and technological phenomena at the nanometer level.”

 K. Eric Drexler, who (many believe) coined the term nanotechnology in the 1980s, describes the field as a branch of engineering dedicated to the development of extremely small electronic circuits and mechanical devices built at the molecular level of matter.

Nanotechnology and Nanocomputing

 Drexler (1991) predicted that developments in nanotechnology will result in computers at the nano-scale, no bigger in size than bacteria, called nanocomputers.

 Nanocomputers can be designed using various types of architectures.

 An electronic nanocomputer would operate in a manner similar to present-day computers, differing primarily in terms of size and scale.

Nanotechnology and Nanocomputers (continued)

 To appreciate the scale of future nanocomputers, imagine a mechanical or electronic device whose dimensions are measured in nanometers (billionths of a meter, or units of 10^-9 meter).

 Merkle (2001) predicts that nano-scale computers will be able to deliver a billion billion instructions per second – i.e., a billion times faster than today’s desktop computers.
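The arithmetic behind these scale claims can be checked in a short sketch. Note that the desktop rate of roughly a billion instructions per second is an assumed figure implied by Merkle's circa-2001 comparison, not a measured benchmark:

```python
# Scale arithmetic for nanometers and Merkle's speed prediction.
NANOMETER = 1e-9      # meters: a billionth of a meter (10^-9 m)

desktop_ips = 1e9     # ~a billion instructions/sec (assumed desktop rate)
nano_ips = 1e18       # "a billion billion" instructions/sec (Merkle's prediction)

speedup = nano_ips / desktop_ips
print(f"speedup: {speedup:.0e}")  # prints "speedup: 1e+09" -> a billion times faster
```

The ratio confirms the slide's claim is internally consistent: a billion billion (10^18) is exactly a billion times a billion (10^9).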

Nanotechnology and Nanocomputing (continued)

 Although nanocomputing is still in its early stages of development, some primitive nanocomputing devices have already been tested.

 At Hewlett Packard, computer memory devices with eight platinum wires that are 40 nanometers wide on a silicon wafer have been developed (Moor and Weckert, 2004).

 Moor and Weckert note that it would take more than one thousand of these chips to be the width of a human hair.
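As a rough sanity check on the hair-width comparison, one can assume a typical human hair is about 60 micrometers wide (an assumed figure; real hairs vary from roughly 20 to 180 micrometers) and take the 40-nanometer wire width from the text as a proxy for the device's width:

```python
# Rough scale comparison for the HP nano-memory claim.
# hair_width is an assumed typical value; wire_width (40 nm) is from the text.
hair_width = 60e-6   # meters (~60 micrometers, assumed)
wire_width = 40e-9   # meters (40 nanometers, per Moor and Weckert)

devices_per_hair = round(hair_width / wire_width)
print(devices_per_hair)  # prints 1500 -> on the order of "more than one thousand"
```

Under these assumptions the result lands in the right ballpark of the "more than one thousand" figure the slide reports.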

Nanoethics: Identifying and Analyzing Ethical Issues in Nanotechnology

 Moor and Weckert (2004) believe that assessing ethical issues that arise at the nano-scale is important because of the kinds of “policy vacuums” that are raised.

 They do not argue that a separate field of applied ethics called nanoethics is necessary.

 But they make a strong case for why an analysis of ethical issues at the nano-level is now critical.

Nanoethics (Continued)

 Moor and Weckert identify three distinct kinds of ethical concerns at the nano-level that warrant analysis:

1. privacy and control;

2. longevity;

3. runaway nanobots.

Ethical Aspects of Nanotechnology: Privacy Issues

 Moor and Weckert note that we will be able to construct nano-scale information-gathering systems that can also track people.

 It will become extremely easy to put a nano- scale transmitter in a room, or onto someone’s clothing.

 Individuals may have no idea that these devices are present or that they are being monitored and tracked by them.

Ethical Aspects of Nanotechnology: Longevity Issues

 Moor and Weckert note that while many see longevity as a good thing, there could also be negative consequences.

 For example, they point out that we could have a population problem if the life expectancy of individuals were to change dramatically.

Ethical Aspects of Nanotechnology: Longevity Issues (Continued)

 Moor and Weckert also point out that if fewer children are born relative to adults, there could be a concern about the lack of new ideas and “new blood.”

 The authors also note that questions could arise about how many “family sets” couples with significantly extended lives would be allowed to have during their expanded lifetimes.

Ethical Aspects of Nanotechnology: Runaway Nanobots

 Moor and Weckert further point out that when nanobots work to our benefit, they build what we desire.

 But when nanobots work incorrectly, they can build what we don’t want.

 Some critics worry that the (unintended) replication of these bots could get out of hand.

 Many now refer to the replication problem (of nanobots) as the “grey goo scenario.”

Should Research/Development in Nanocomputing Be Allowed to Continue?

 Joseph Weizenbaum (1976) argued that computer science research that can have “irreversible and not entirely foreseeable side effects” should not be undertaken.

 Bill Joy (2000) has argued that because developments in nanocomputing are threatening to make us an “endangered species,” the only realistic alternative is to limit its development.

 If Joy and others are correct about the dangers of nanotechnology, we must seriously consider whether research in this area should be limited.

Should Nanotechnology Research/Development Be Prohibited?
