
 Ralph Merkle (2001) would disagree with Joy and others on limiting nano-level research.

 Merkle argues that if research in nanotechnology is prohibited, or even restricted, it will be done “underground.”

 If this happens, nano research would not be regulated by governments and by professional agencies concerned with social responsibility.

Should We Presume in Favor of Continued Nano Research?

 Weckert (2006) argues that potential disadvantages that could result from research in a particular field are not in themselves sufficient grounds for halting research.

 He suggests that there should be a presumption in favor of freedom in research.

 But Weckert also argues that it should be permissible to restrict or even forbid research where it can be clearly shown that harm is more likely than not to result from that research.

Assessing Nanotechnology Risks: Applying the Precautionary Principle

 Questions about how best to proceed in scientific research when there are concerns about harm to the public good are often examined via the Precautionary Principle.

 Weckert and Moor (2004) interpret the precautionary principle to mean the following:

If some action has a possibility of causing harm, then that action should not be undertaken, or some measure should be put in place to minimize or eliminate the potential harms.

Nanotechnology, Risk, and the Precautionary Principle (Continued)

 Weckert and Moor believe that when the precautionary principle is applied to questions about nanotechnology research and development, it needs to be analyzed in terms of three different “categories of harm”:

1) direct harm,

2) harm by misuse,

3) harm by mistake or accident.

 The kinds of risks for each category differ significantly.
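
To make the structure of this interpretation concrete, the following Python fragment sketches the precautionary principle as a simple conditional policy over the three categories of harm. It is purely illustrative: the enum labels, the function name, and the mitigation_available flag are assumptions introduced here, not part of Weckert and Moor's account.

```python
from enum import Enum

class HarmCategory(Enum):
    """Weckert and Moor's three categories of harm (labels paraphrased)."""
    DIRECT = "direct harm"                    # e.g., nanoparticles damaging to health
    MISUSE = "harm by misuse"                 # e.g., nano-electronics eroding privacy
    MISTAKE = "harm by mistake or accident"   # e.g., runaway self-replicating nanobots

def precautionary_decision(may_cause_harm: bool, mitigation_available: bool) -> str:
    """Toy rendering of the principle: if an action may cause harm, either do not
    undertake it or put some measure in place to minimize/eliminate the harm."""
    if not may_cause_harm:
        return "proceed"
    return "proceed with mitigating measures" if mitigation_available else "do not undertake"

print(precautionary_decision(may_cause_harm=True, mitigation_available=False))
```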

Nanotechnology, Risk, and the Precautionary Principle (Continued)

 With respect to direct harm, Weckert and Moor analyze a scenario in which the use of nanoparticles in products could be damaging to the health of some people.

 They also note that the kinds of risks involved in direct harm are very different from those arising in the example they use to illustrate harm by misuse – i.e., developments in nano-electronics that could endanger personal privacy.

 Regarding harm by mistake or accident, Weckert and Moor describe a scenario in which nanotechnology could lead to the development of self-replicating, and thus “runaway,” nanobots. (This kind of harm will occur only if mistakes are made or accidents occur).

Nanotechnology, Risk, and the Precautionary Principle (Continued)

 Weckert and Moor argue that when assessing the risks of nanotechnology via the precautionary principle, we need to look at not only potential harms per se, but also at the relationship between “the initial action and the potential harm.”

 In their example involving direct harm, the relationship is fairly clear and straightforward: we simply need to know more about the scientific evidence for nanoparticles causing harm.

 In their case involving potential misuse of nanotechnology, e.g., in endangering personal privacy, the relationship is less clear.

 In the case of the third kind of harm, Weckert and Moor claim that we need evidence regarding the “propensity of humans to make mistakes or the propensity of accidents to happen.”

Nanotechnology, Risk, and the Precautionary Principle (Continued)

 Weckert offers the following solution or strategy: If a prima facie case can be made that some research will likely cause harm…then the burden of proof should be on those who want the research carried out to show that it is safe.

 He also believes that there should be: …a presumption in favour of freedom until such time as a prima facie case is made that the research is dangerous. The burden of proof then shifts from those opposing the research to those supporting it. At that stage the research should not begin or be continued until a good case can be made that it is safe.

3. Autonomous Machines (AMs)

 Autonomous machines (AMs) are yet another example of an emerging technology that can have a significant ethical impact.

 AMs include any computerized system/agent/robot that is capable of acting and making decisions independently of human oversight.

 An AM can also interact with and adapt to (changes in) its environment, and it can learn as it functions.

AMs (Continued)

 The expression “autonomous machine” includes three conceptually distinct, but sometimes overlapping, autonomous technologies:

1) (autonomous) artificial agents,

2) autonomous systems,

3) (autonomous as opposed to “tele”) robots.

 The key attribute that links together these otherwise distinct (software) programs, systems, and entities is their ability to act autonomously, or at least act independently of human intervention.

AMs (Continued): Some Examples and Applications

 An influential 2009 report by the UK’s Royal Academy of Engineering identifies various kinds of devices, entities, and systems that also fit nicely under our category of AM, including:

• driverless transport systems (in commerce);

• unmanned vehicles in military/defense applications (e.g., “drones”);

• robots on the battlefield;

• autonomous robotic surgery devices;

• personal care support systems.

AMs (Continued): Some Examples and Applications

 Patrick Lin (2012) identifies a wide range of sectors in which AMs (or what he calls “robots”) now operate, six of which are:

• labor and service,

• military and security,

• research and education,

• entertainment,

• medical and healthcare,

• personal care and companionship.

Can an AM Be an (Artificial) Moral Agent?

 Luciano Floridi (2011) believes that AMs can be moral agents because they

(a) are “sources of moral action”;

(b) can cause moral harm or moral good.

 In Chapter 11, we saw that Floridi distinguished between “moral patients” (as receivers of moral action) and moral agents (as sources of moral action).

 All information entities, in Floridi’s view, deserve consideration (minimally at least) as moral patients, even if they are unable to qualify as moral agents.

AMs as Moral Agents (Continued)

 Floridi also believes that autonomous AMs would qualify as moral agents because of their (moral) efficacy.

 Deborah Johnson (2006), who also believes that AMs have moral efficacy, argues that AMs qualify only as “moral entities” and not moral agents because AMs lack freedom.

 Himma (2009) argues that because these entities lack consciousness and intentionality, they cannot satisfy the conditions for moral agency.

AMs as Moral Agents: Moor’s Model

 James Moor (2006) takes a different tack in analyzing this question by focusing on various kinds of “moral impacts” that AMs can have.

 First, he notes that computers can be viewed as normative (non-moral) agents – independently of the question whether they are also moral agents – because of the “normative impacts” their actions have (irrespective of any moral impacts).

Moor’s Model (Continued)

 Moor also points out that because computers are designed for specific purposes, they can be evaluated in terms of how well, or how poorly, they perform in accomplishing the tasks they are programmed to carry out.

 For example, he considers the case of a computer program designed to play chess (such as Deep Blue) that can be evaluated normatively (independent of ethics).

Moor’s Model (Continued)

 Moor notes that some normative impacts made possible by computers can also be moral or ethical in nature.

 He argues that the consequences, and potential consequences, of (what he calls) “ethical agents” can be analyzed in terms of four levels:

1. Ethical Impact Agents,

2. Implicit Ethical Agents,

3. Explicit Ethical Agents,

4. Full Ethical Agents.

Moor’s Model (Continued)

 In Moor’s scheme:

• ethical-impact agents (i.e., the weakest sense of moral agent) are agents whose acts have (at least some) ethical consequences;

• implicit ethical agents have some ethical considerations built into their design and “will employ some automatic ethical actions for fixed situations”;

• explicit ethical agents will have, or at least act as if they have, “more general principles or rules of ethical conduct that are adjusted and interpreted to fit various kinds of situations”;

• full ethical agents “can make ethical judgments about a wide variety of situations” and in many cases can “provide some justification for them.”
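
Moor's four levels form an ordered taxonomy, which can be summarized in a small data structure. The sketch below is only a mnemonic for the distinctions above; the class name and numeric ordering are assumptions added for illustration.

```python
from enum import IntEnum

class EthicalAgentLevel(IntEnum):
    """Moor's four levels of 'ethical agent', ordered from weakest to strongest."""
    ETHICAL_IMPACT = 1  # acts have (at least some) ethical consequences
    IMPLICIT = 2        # ethical considerations built into the design (fixed situations)
    EXPLICIT = 3        # applies general ethical rules, adjusted to varied situations
    FULL = 4            # makes and can justify ethical judgments (consciousness, free will)

# The ordering captures only that each level is a stronger sense of moral agency
# than the one before it, e.g.:
assert EthicalAgentLevel.IMPLICIT < EthicalAgentLevel.EXPLICIT
```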

Moor’s Model (Continued)

 Moor provides examples of the first two categories and characterizes the other two:

1. Ethical-impact agents include the “robotic camel jockey” (a technology used in Qatar to replace young boys as jockeys, thereby freeing those boys from slavery in the human trafficking business).

2. Implicit ethical agents include an airplane’s automatic pilot system and an ATM – both have built-in programming designed to prevent harm from happening to the aircraft, and to prevent ATM customers from being short-changed in financial transactions.

3. Explicit ethical agents would be able to calculate the best ethical action to take in a specific situation and would be able to make decisions when presented with ethical dilemmas.

4. Full-ethical agents have the kind of ethical features that we usually attribute to ethical agents like us (i.e., what Moor describes as “normal human adults”), including consciousness and free will.

Moor’s Model (Continued)

 Moor does not claim that either explicit- or full-ethical (artificial) agents exist or that they will be available anytime in the near term.

 But his distinctions are very helpful as we try to understand the various levels of moral agency that may potentially apply to AMs.

 Even if AMs may never qualify as full moral agents, Wallach and Allen (2009) believe that they can have “functional morality,” based on two key dimensions:

i. autonomy,

ii. sensitivity to ethical values.

Wallach and Allen’s Criteria for “Functional Morality” for AMs

 Wallach and Allen also note that we do not yet have systems with both high autonomy and high sensitivity.

 They point out that an autopilot is an example of a system that has significant autonomy (in a limited domain) but little sensitivity to ethical values.

 Wallach and Allen also note that ethical-decision support systems (such as those used in the medical field to assist doctors) provide decision makers with access to morally relevant information (and thus suggest a high level of sensitivity to moral values), but these systems have virtually no autonomy.
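
These two dimensions can be pictured as a simple grid. The following sketch places a system in one of four regions; the 0-1 scales, the 0.5 threshold, and the function name are illustrative assumptions rather than anything Wallach and Allen specify.

```python
def functional_morality_region(autonomy: float, ethical_sensitivity: float) -> str:
    """Locate a system on Wallach and Allen's two dimensions (scored 0-1 here,
    with 0.5 as an arbitrary threshold -- both scales are assumptions)."""
    high_autonomy = autonomy >= 0.5
    high_sensitivity = ethical_sensitivity >= 0.5
    if high_autonomy and high_sensitivity:
        return "high autonomy / high sensitivity (no such systems yet)"
    if high_autonomy:
        return "high autonomy / low sensitivity (e.g., an autopilot)"
    if high_sensitivity:
        return "low autonomy / high sensitivity (e.g., an ethical decision-support system)"
    return "low autonomy / low sensitivity"

print(functional_morality_region(0.9, 0.1))  # autopilot-like placement
print(functional_morality_region(0.1, 0.9))  # decision-support-like placement
```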

Functional Morality (Continued)

 Wallach and Allen argue that it is not necessary that AMs be moral agents in the sense that humans are.

 They believe that all we need to do is to design machines to act “as if” they are moral agents and thus “function” as such.

In What Sense of “Autonomy” are AMs Autonomous?

 Onora O’Neill (2002) describes autonomy in connection with one’s “capacity to act independently.”

 Wha-Chul Son (2015) points out that AMs (or “autonomous technologies”) can undermine “human autonomy” in both indirect and subtle ways.

 The Royal Academy’s influential 2009 report suggests that because AMs (autonomous systems) are “adaptive,” they exhibit some degree of “independence.”

 Floridi (2008) notes that an “adaptive” AM has a certain degree of “independence from its environment.”

“Functional Autonomy” for AMs

 In so far as AMs appear to be capable of acting independently, or behave “as if” they are acting independently, it would seem that we can attribute at least some degree of autonomy to them.

 Whether AMs will ever be capable of having full autonomy, in the sense that humans can, is still debatable.

 But an AM that can act independently in the sense described above can have “functional autonomy” and thus can qualify as a “functionally autonomous AM.”

Trust and Authenticity in the Context of AMs

 What is trust in the context of AMs, and what does a trust relationship involving humans and AMs entail?

 For example, can we trust AMs to always act in our best interests, especially AMs designed in such a way that they cannot be shut down by human operators?

 We limit our discussion to two basic questions:

I. What would it mean for a human to trust an AM?

II. Why is that question important?

 First, we need to define what is meant by trust in general.

What is Trust?

 A typical dictionary, such as the American Heritage College Dictionary (4th ed. 2002), defines trust as “firm reliance on the integrity, ability, or character of a person or thing.”

 Definitions of trust that focus mainly on reliance do not help us to understand the nature of ethical trust.

 For example, I rely on my automobile engine to start today but I do not “trust” it to do so.

 Conversely, I trust some people but I cannot always rely upon them for specific tasks.

Trust (Continued)

 Because I am unable to have a trust relationship with a conventional machine such as an automobile, does it follow that I also cannot have one with an AM?

 Or, does an AM’s ability to exhibit some level of autonomy – even if only functional autonomy – make a difference?

 Consider that I am able to trust a human because the person in whom I place my trust not only can disappoint me (or let me down) but can also betray me.

 For example, that person, as an autonomous agent, can freely elect to breach the trust I placed in her.

Trust (Continued)

 Some argue that trust also has an emotive (or “affective”) aspect, and that this may be especially important in understanding trust in the context of AMs.

 For example, Sherry Turkle (2011) raises some concerns about the role of feelings or emotions in human-machine trust relationships.

 Turkle also worries about what can happen when machines appear to us “as if” they have feelings.

 Turkle describes a phenomenon called the “Eliza effect,” which was initially associated with a response that some users had to an interactive software program called “Eliza” (designed by Joseph Weizenbaum at MIT in the 1960s).

Trust (Continued)

 Turkle notes that the Eliza program, which was designed to use language conversationally (and possibly pass the Turing test), solicited trust on the part of users.

 The Eliza program did this even though it was designed in a way that tricked users.

 Although Eliza was only a (“disembodied”) software program, Turkle suggests that it could nevertheless be viewed as a “relational entity,” or what she calls a “relational artifact,” because of the way people responded to, and confided in, it.

 In this sense, she notes that Eliza seemed to have a strong emotional impact on the students who interacted with it.

 But Turkle also notes that while Eliza “elicited trust” on the part of these students, it understood nothing about them.
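
Eliza worked through shallow pattern matching and canned reflections rather than any understanding, which is why it could elicit trust while understanding nothing. The fragment below is a minimal Eliza-style responder written in that spirit; the specific rules are invented for illustration and are not Weizenbaum's actual script.

```python
import re

# A few illustrative reflection rules; Weizenbaum's original ELIZA script was far richer.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(utterance: str) -> str:
    """Return a canned 'reflection' when a surface pattern matches; otherwise a
    generic prompt. No understanding is involved -- only string matching."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(eliza_reply("I am worried about my exams"))
```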

Trust and “Attachment” in AMs

 Turkle worries that when a machine (as a relational artifact) appears to be interested in people, it can “push our Darwinian buttons…which causes people to respond as if they were in a relationship.”

 This is especially apparent with physical AMs that are capable of facial expressions, such as (the robot) Kismet (developed in MIT’s AI Lab).

 Turkle suggests that because AMs can be designed in ways that make people feel as if a machine (like Kismet) cares about them, people can develop feelings of trust in, and attachment to, that machine.

Trust and “Authenticity”

 Turkle notes that Cynthia Breazeal, one of Kismet’s designers who had also developed a “maternal connection” with this AM while she was a student at MIT, had a difficult time separating from Kismet when she left that institution.

 In Turkle’s view, this factor raises questions of both trust and authenticity.

 She worries that, unlike in the past, humans must now be able to distinguish between authentic and simulated relationships.

4. Machine Ethics and (Designing) Moral Machines

 Anderson and Anderson (2011) describe machine ethics as an interdisciplinary field of research that is primarily concerned with developing ethics for machines, as opposed to developing ethics for humans who “use machines.”

 In their view, machine ethics is concerned with

giving machines ethical principles, or a procedure for discovering ways to resolve ethical dilemmas they may encounter, enabling them to function in an ethically responsible manner through their own decision making.

Machine Ethics (Continued)

 Wallach and Allen (2009) believe that one way in which the field of machine ethics has expanded upon traditional computer ethics is by asking how computers can be made into “explicit moral reasoners.”

 In their answer to this question, Wallach and Allen first draw an important distinction between “reasoning about ethics” and “ethical decision making.”

 For example, they acknowledge that even if one could build artificial systems capable of reasoning about ethics, it does not necessarily follow that these systems would be genuine “ethical decision makers.”

Machine Ethics (Continued)

 Wallach and Allen’s main interest in how AMs can be made into moral reasoners is more practical than theoretical in nature.

 Wallach and Allen also believe that the challenge of figuring out how to provide software/hardware agents with moral decision-making capabilities is urgent.

 In fact, they argue that the time to begin work on designing “moral machines” is now!

Moral Machines

 Can/should we build the kinds of “moral machines” that Wallach and Allen urge us to develop?

 The kind of moral machines that Wallach and Allen have in mind are AMs that are capable of both

a) making moral decisions;

b) acting in ways that “humans generally consider to be ethically acceptable behavior.”

 We should note that the idea of designing machines that could behave morally, i.e., with a set of moral rules embedded in them, is not entirely new.

Designing “Moral Machines”

 In the 1940s, Isaac Asimov anticipated the need for ethical rules that would guide the robots of the future.

 He then formulated his (now classic) Three Laws of Robotics:

1. A robot may not injure a human being, or through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
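
The three laws form a strict priority ordering, with each later law yielding to the ones before it. A minimal sketch of that ordering follows; the function and predicate names are hypothetical, and real conflicts between the laws are far subtler than this toy treatment suggests.

```python
def choose_action(candidates, harms_human, disobeys_order, endangers_self):
    """Toy selection among candidate actions, treating Asimov's Three Laws as a
    lexicographic priority: First Law violations count most, then Second, then Third.
    The three predicates are hypothetical evaluators supplied by the caller."""
    def violations(action):
        # False sorts before True, so fewer/higher-priority violations win.
        return (harms_human(action), disobeys_order(action), endangers_self(action))
    return min(candidates, key=violations)

# Illustrative usage with trivial stand-in predicates:
actions = ["comply with order", "ignore order", "sacrifice self to comply"]
best = choose_action(
    actions,
    harms_human=lambda a: False,                      # no candidate harms a human here
    disobeys_order=lambda a: a == "ignore order",
    endangers_self=lambda a: a == "sacrifice self to comply",
)
print(best)  # -> "comply with order"
```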

Designing Moral Machines (Continued)

 Numerous critics have questioned whether the three laws articulated by Asimov are adequate to meet the kinds of ethical challenges that current AMs pose.

 But relatively few of these critics have proposed clear and practical guidelines for how to embed machines with ethical instructions that would be generally acceptable to most humans.

 Susan and Michael Anderson (2011) and Wallach and Allen (2009) have each put forth some very thoughtful proposals for how this can be done.

Designing Moral Machines (Continued)

 S. Anderson argues that it would be prudent for us first to design an artificial system to function as an “ethical advisor” to humans before building a full-fledged moral machine.

 S. Anderson and M. Anderson (2011) have recommended building artificial systems with which humans can have an “ethical dialogue” before we embed machines themselves with ethical reasoning algorithms that they could use in a fully independent manner.

Designing Moral Machines (Continued)

 The Andersons have developed an “automated dialogue” – i.e., a system involving an ethicist and an artificial system that functions “more or less independently in a particular domain.”

 They believe that this is an important first step in building moral machines because it enables the artificial system to learn both:

a) the “ethically relevant features of the dilemmas it will encounter” (within that domain),

b) the appropriate prima facie duties and decision principles it will need to resolve the dilemmas.
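
One simple way to picture step (b) is as a weighted balancing of prima facie duties when they conflict. The sketch below is only a rough illustration of that idea; the duty names, weights, scoring scale, and example dilemma are invented here and are not the Andersons' actual method or values.

```python
# Hypothetical prima facie duties with illustrative weights (not the Andersons' values).
DUTY_WEIGHTS = {"nonmaleficence": 3.0, "beneficence": 2.0, "respect_for_autonomy": 2.5}

def score(duty_profile: dict) -> float:
    """Weighted sum over how strongly an action satisfies (+) or violates (-) each
    duty, on an assumed -2..+2 scale."""
    return sum(DUTY_WEIGHTS[duty] * level for duty, level in duty_profile.items())

def recommend(options: dict) -> str:
    """Return the option whose duty profile scores highest."""
    return max(options, key=lambda name: score(options[name]))

# Toy dilemma within a fixed domain, with invented satisfaction levels:
options = {
    "accept_patient_refusal": {"nonmaleficence": -1, "beneficence": -1, "respect_for_autonomy": 2},
    "try_to_persuade_again":  {"nonmaleficence": 0, "beneficence": 1, "respect_for_autonomy": -1},
}
print(recommend(options))
```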

Designing Moral Machines (Continued)

 Wallach and Allen seem far less concerned with questions about whether AMs can be full moral agents than with questions about how we can design AMs to act in ways that conform to our received notions of morally acceptable behavior.

 S. Anderson (2011) echoes this point when she notes that her primary concern also is with whether machines “can perform morally correct actions and can justify them if asked.”

Designing “Moral Machines” and the Importance of Machine Ethics

 Why is continued work on designing moral machines in particular, and on machine ethics in general, important?

 Moor (2006) proposes three reasons:

1) ethics (itself) is important;

2) future machines will likely have increased autonomy;

3) designing machines to behave ethically will help us better understand ethics.

 Moor’s third reason reinforces Wallach and Allen’s claim that developments in machine ethics could also help us to better understand our own nature as moral reasoners.

5. A “Dynamic” Ethical Framework for Guiding Research in New and Emerging Technologies

 Some of the ethical concerns affecting AMs, as well as the other new/emerging technologies that we have examined, directly impact the software engineers/programmers who design the technologies.

 But virtually everyone will be affected by these technologies in the near future.

 So, we all would benefit from clear ethical guidelines that address research/development in new and emerging technologies.

The Need for Ethical Guidelines for Emerging Technologies (Continued)

 As research began on the Human Genome Project (HGP) in the 1990s, researchers developed an ethical framework, which came to be known as ELSI (Ethical, Legal, and Social Issues), to anticipate some HGP-related ethical issues that would likely arise.

 Prior to the ELSI Program, ethics was typically “reactive” in the sense that it had followed scientific developments, rather than informing scientific research.

 For example, Moor (2004) notes that in most scientific research areas, ethics has had to play “catch up,” because guidelines were developed in response to cases where serious harm had already resulted.

The Need for Clear Ethical Guidelines for Emerging Technologies (Continued)

 What kind of ethical framework works best for research involving emerging technologies?

 Ray Kurzweil (2005) believes that an ELSI-like model should be developed and used to guide researchers working in one area of emerging technologies – nanotechnology.

 Many consider the ELSI framework to be an ideal model because it is a “proactive” (rather than a reactive) ethics framework.

The Need for Ethical Guidelines (Continued)

 But Moor (2008) is critical of the ELSI model because it employs a scheme that he calls an “ethics-first” framework.

 He believes that this kind of ethical framework has problems because:

a) it depends on a “factual determination” of the specific harms and benefits of a technology before an ethical assessment can be done;

b) in the case of nanotechnology, it is very difficult to know what the future will be.

The Need for Ethical Guidelines (Continued)

 Moor also argues that because new and emerging technologies promise “dramatic change,” it is no longer satisfactory to do “ethics as usual.”

 Instead, he claims that we need to be:

• better informed in our “ethical thinking”;

• more proactive in our “ethical action.”

The Need for Ethical Guidelines (Continued)
