The Issue of Whether Prima Facie Duties Apply to Machines
Among designers of algorithmic artificial agents, there is debate over which moral theory should serve as the basis for machine ethics. According to the Andersons (2007), W. D. Ross's account of prima facie duties is the answer. Ross's account is said to combine the strengths of deontological and teleological approaches and to reflect the complexities of moral deliberation; its claimed superiority lies in allowing for needed exceptions. On the issue of needed exceptions, we shall argue that the Andersons may be begging the question. We believe that Satisficing Hedonistic Act Utilitarianism (SHAU) is the way forward. When the subtleties of moral decision-making are considered, Ross's account is initially less reflective of them than the results SHAU delivers. In well-established practical cases, SHAU produces intuitively correct judgments; in the particular healthcare scenario the Andersons explore, SHAU reaches the same verdict as a prima facie duty-based ethic.
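To make the contrast concrete, the following is a minimal sketch of how a satisficing hedonistic act-utilitarian decision procedure might be programmed. The candidate actions, their net-pleasure estimates, and the satisficing threshold are all hypothetical illustrations, not figures from the literature under review.

```python
# Hypothetical sketch of a Satisficing Hedonistic Act Utilitarian
# (SHAU) decision procedure. Actions, estimated net-pleasure values,
# and the threshold are invented for illustration only.

def shau_choose(actions, threshold):
    """Return the first action whose expected net pleasure is 'good
    enough' (>= threshold); a satisficer stops searching once an
    acceptable option is found rather than maximizing."""
    for action, net_pleasure in actions:
        if net_pleasure >= threshold:
            return action
    # If nothing satisfices, fall back to the best available option.
    return max(actions, key=lambda pair: pair[1])[0]

# Hypothetical healthcare scenario: expected net pleasure (benefit
# minus suffering) for each candidate action, on an arbitrary scale.
candidate_actions = [
    ("remind the patient again", 2.0),
    ("notify the overseer", 5.0),
    ("override the patient's refusal", -3.0),
]

print(shau_choose(candidate_actions, threshold=4.0))  # notify the overseer
```

The satisficing step is what distinguishes this sketch from a maximizing act utilitarianism: the procedure commits to the first acceptable action rather than exhaustively comparing all options.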
Bynum, T. (2001). Computer and information ethics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy.
The paper summarizes and evaluates the AAAI Symposium on machine ethics, aiming to clarify this recently emerging field at the intersection of computer science and philosophy, and it discusses various approaches that could help realize the ultimate goal of creating ethical machines. According to Bynum (2001), the information revolution has changed multiple dimensions of human life, including commerce, entertainment, transportation, security, and medicine. The author claims that information and communication technology has affected humans in both harmful and beneficial ways, touching family life, community life, democracy, freedom, careers, and education. Bynum (2001) states that this gave rise to computer and information ethics, which analyzes the social and ethical impacts of information and communication technology. Bynum (2001) highlights that philosophers have applied ethical theories such as virtue ethics and utilitarianism to various cases involving computers and computer networks. A further application of computer ethics, according to this author, has been professional ethics, in which computing professionals apply standards of good practice in their work.
Bynum (2001) posits that humans are often confronted by unfamiliar ethical issues that require analogies to build conceptual bridges to similar circumstances experienced before. Humans then attempt to transfer moral intuitions across the bridge from past experience to the current circumstance. When no adequate analogy exists, however, they must discover new or different moral values, develop fresh ethical principles or policies, and find other ways of thinking about the emerging issues. Computers and machines likewise produce new versions and dimensions of old moral problems, which can exacerbate them and force humans to stretch conventional moral norms in attempting to resolve the issues.
According to Bynum (2001), despite the ethical issues posed by machines and computers, new information and communication technologies offer humans new means of carrying out their actions. Computer technology has also raised new ethical questions concerning intellectual property, ownership rights, and human privacy, which cannot always be resolved with the help of conventional ethical theories. This researcher notes that computer technology's revolutionary power lies in the fact that machines are logically malleable: they can be molded and shaped to perform any activity that can be characterized in terms of inputs, outputs, and the logical operations connecting them. Since logic applies everywhere, the possible uses of computer technology are, according to Bynum (2001), limitless.
This researcher says that the computer comes closer than anything else in the world to being a universal tool, and that the limits of this machine reflect human limits, especially the limits of human creativity. According to Bynum (2001), computer technology's logical malleability allows people to do numerous things they could not do in the past. Because these activities were previously impossible, no standards or laws of behavior, codes of good practice, or ethical rules existed to govern them. This author concludes that even though computers give humans abilities they lacked before, there is a policy gap concerning how the resulting situations should be conducted and regulated. As a result, questions arise about whether the rules and policies used to regulate human behavior apply equally to computers and machines.
Skelton, A. (2012, March 21). William David Ross. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2012 ed.). Retrieved February 23, 2020, from https://plato.stanford.edu/archives/sum2012/entries/william-david-ross/
The article describes W. D. Ross's approach to ethics and examines the complexities of deliberation over ethical standards. According to Skelton (2012), Ross is a moral realist, a non-consequentialist, and a pluralist who holds that some moral truths are known intuitively. This author highlights Ross's claim that beneficence is a duty because some people live in conditions that others can help to improve.
Ross identifies roughly a half-dozen prima facie duties, though he is non-committal about their exact number. According to Ross, agents should attend to these duties when deciding what to do: fidelity (generated by the promises humans make), non-maleficence (based on the requirement not to harm other people), gratitude (produced by the acts of others that benefit us), self-improvement, justice (generated by the need to distribute goods equitably), and beneficence. According to Ross, such duties can conflict. For example, a practitioner's duty to act on behalf of a patient's welfare can collide with the duty to respect the sick person's autonomy. Ross holds that in such cases one duty overrides the others, supplying the agent with an absolute obligation to act on that particular duty; this is the theory of prima facie duties. According to Ross, when prima facie duties conflict, humans assess them by how well they fit with moral and non-moral intuitions, background scientific evidence, and ethical theories.
Additionally, in situations of conflict, humans typically identify the duty that is most incumbent and must be acted upon. This system of prima facie duties works efficiently when humans must act under uncertainty and with limited time. It suggests that artificial agents or machines must first be programmed to make their decisions in line with prima facie duties if they are to reflect the sophisticated nature of moral deliberation, as the sketch below illustrates.
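As one illustration of what such programming might look like, the following sketch scores candidate actions against weighted prima facie duties and selects the action whose duty profile is strongest. The duty weights and satisfaction scores are invented placeholders, not Ross's figures or the Andersons' actual learning-based values.

```python
# Hypothetical sketch: resolving a conflict of prima facie duties by
# weighted sum. Weights and per-action duty-satisfaction scores
# (-1 = violates, 0 = neutral, +1 = fulfills) are illustrative only.

DUTY_WEIGHTS = {
    "fidelity": 1.0,
    "non_maleficence": 2.0,   # harm avoidance often overrides
    "gratitude": 0.5,
    "justice": 1.0,
    "self_improvement": 0.5,
    "beneficence": 1.0,
}

def actual_duty(actions):
    """Pick the action whose weighted duty profile is strongest,
    modelling Ross's idea that one prima facie duty can override
    the others in a given situation."""
    def score(profile):
        return sum(DUTY_WEIGHTS[d] * v for d, v in profile.items())
    return max(actions, key=lambda a: score(actions[a]))

# Hypothetical healthcare conflict between patient autonomy and welfare.
actions = {
    "respect patient refusal": {"fidelity": 1, "non_maleficence": -1,
                                "beneficence": -1},
    "notify physician":        {"fidelity": 0, "non_maleficence": 1,
                                "beneficence": 1},
}
print(actual_duty(actions))  # notify physician
```

The fixed weights here stand in for the situational judgment Ross describes; a real system would need some principled way of learning or justifying them.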
Beauchamp, T. L., & Childress, J. F. (2001). Principles of biomedical ethics. Oxford University Press, USA.
Specification is the process by which the indeterminateness of general norms is reduced; while retaining the original norm's moral commitment, it gives the norm an improved action-guiding capacity. According to Beauchamp and Childress (2001), machines should follow a definite procedure when deciding the best course of action, and these researchers postulate canonical principles that machines must adhere to. Beauchamp and Childress (2001) point out four ethical norms that machines must satisfy in their duties. First, they must adhere to the principle of autonomy. Second, they must consider and follow non-maleficence. Third, machines must follow the principle of beneficence. Lastly, machines must follow the principle of justice when carrying out their duties. Respecting autonomy requires that a machine not interfere unduly with a person's sense of being in charge of their own situation. Non-maleficence requires that an agent not violate a person's bodily integrity or psychological sense of identity; together these reasons ground the requirement that a machine shall not burden a person in any unnecessary way. Beauchamp and Childress (2001) claim that a machine should also help improve or take care of a person's welfare, though avoiding harm takes precedence over doing good.
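A rough sketch of how these four principles might be operationalized as a filter over candidate machine actions follows. Beauchamp and Childress give no algorithm, so the boolean tests and the example action are assumed placeholders; real specification of the norms would be far more fine-grained.

```python
# Hypothetical sketch of the four-principles check described above.
# Each principle is reduced to a boolean attribute of an action;
# the attribute names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    respects_consent: bool    # autonomy
    causes_harm: bool         # non-maleficence
    promotes_welfare: bool    # beneficence
    distributes_fairly: bool  # justice

def permissible(a: Action) -> bool:
    """An action passes only if it satisfies all four principles."""
    return (a.respects_consent and not a.causes_harm
            and a.promotes_welfare and a.distributes_fairly)

proposed = Action("issue medication reminder", True, False, True, True)
print(permissible(proposed))  # True
```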
Problem Statement
The human attempt to develop ethics for machines faces the challenge of determining how to convert numbers into moral or ethical values. Nonetheless, when a machine has all the necessary data, it may arrive at the desired decisions more reliably and faster than humans. Humans do not calculate as accurately as machines. Humans are also likely to favor themselves or their loved ones, making their decisions biased, unlike machines, which lack such prejudices. Humans grow tired in their deliberations and often take shortcuts before considering all the important variables, whereas machines have no such shortcomings. However, even though machines can calculate universally, objectively, and accurately, it remains an open question whether the policies that regulate human conduct should also apply to machines.
Aim of the Study
This research, therefore, aims to investigate whether prima facie duties apply to machines.