In this essay, I will examine the definition of Artificial Intelligence (AI) and argue that it ought to be treated as a value-neutral tool, rather than an independent decision-maker. Furthermore, I will argue that if it is utilised as a tool, it can maximise human functioning whilst furthering the implementation of shared ethical values in keeping with Aristotelian eudaimonia.
The Oxford English Dictionary defines Artificial Intelligence as "the theory and development of computer systems able to perform tasks that normally require human intelligence." Most AI applications belong to the subdomain of machine learning, which imitates human cognitive tasks by analysing data and building statistical models, from which the system can infer how to perform a task or how to process new data. Whilst the goal of the AI paradigm is to shift from a mere "symbolic" duplicate of human-like intelligence to a "connectionist" model, in which intelligence is produced via artificial neurons but otherwise functions just like its biological counterpart, that shift remains a distant prospect (if it is attainable at all).
However, the language colloquially used to describe AI's functioning is anthropomorphic. We often hear that an AI system "recognises X" or that an AI bot "chats" with you. Taken literally, this language is incorrect. It aligns with what John Searle termed the "Strong AI" thesis, whose proponents claim that "the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." It follows that if human beings with developed minds are regarded as independent thinkers by the law, then the same treatment ought to be granted to AI systems: on the Strong AI view, they should be recognised not as mere tools but as agents in their own right, with the right to make independent decisions.
For now, the theory of Strong AI fails. Systems like Siri and Alexa can link spoken commands with desired outputs, e.g. telling you the temperature outside. Still, it is untrue to claim that either Siri or Alexa can "chat" or has any real understanding of the words being used. This is best illustrated by Searle's Chinese room argument, a thought experiment which imagines a human protagonist in a room. She receives Chinese characters through a slot, sorts them in line with the program's instructions, and produces an output of Chinese characters, arranged in a manner which would form a coherent sentence for a Chinese speaker. However, unlike the Chinese speaker, the protagonist lacks any understanding of the content of the Chinese writing she has just produced. In a similar way, an AI system matches input with output according to the instructions it was given by a coder; much as the protagonist of the thought experiment cannot understand the sentence she has just created, an AI system has no real understanding of any output or decision that it makes.
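The point can be made concrete with a toy sketch (my own illustration, not part of Searle's argument): a hypothetical rulebook maps input symbols to output symbols, and the "room" applies it mechanically, with no grasp of what any symbol means.

```python
# A "Chinese room" as pure rule-following: the rulebook pairs input
# symbols with output symbols; applying it requires no understanding.
# The entries below are hypothetical examples chosen for illustration.
RULEBOOK = {
    "你好吗": "我很好",   # "How are you?" -> "I am well"
    "谢谢": "不客气",     # "Thank you"   -> "You're welcome"
}

def room(symbols: str) -> str:
    """Return whatever the instructions dictate; meaning plays no role."""
    return RULEBOOK.get(symbols, "请再说一遍")  # fallback: "Please say it again"

print(room("你好吗"))  # prints 我很好
```

From the outside, the room appears conversationally competent; inside, there is only symbol manipulation.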
An AI system follows step-by-step instructions which make it appear as if it were holding an intelligent conversation or making a decision based on practical wisdom. Moreover, compared with a machine, there seems to be something distinctively different in how the human mind processes and intellectualises data, which Searle calls "intentionality." The concept is similar to Aristotle's energeia: the distinct quality of human functioning is that action is not based on a mere disposition to act (akin to AI's input-instruction-output fashion) but is predicated upon a deliberate activity of the rational soul. For now, a machine cannot possess intentionality, and thus it cannot think or have a mind in a human-like way. Why, then, should we adopt it at all?
Aristotle describes happiness as eudaimonia, and virtue plays a pivotal role in its attainment. Achieving eudaimonia relies on the agent's habituated character (ethos) and on decision-making guided by phronesis (practical wisdom), which together form a will. The will underpins voluntary human action, leading to the desired outcome. A life of satisfied desired outcomes demonstrating moral, intellectual, and physical excellence is, for Aristotle, the life of a eudaimon agent. Moreover, a eudaimon agent possesses two required attributes: they are a good technician, capable of performing tasks in the most perfect way according to their disposition (a good runner runs the fastest), and they possess a sum of perfectly functioning natural parts: well-seeing eyes, thoroughly-chewing molars, and so on. In antiquity, the eudaimon Aristotelian agent lived in accordance with their teleological condition and fulfilled their humanity by demonstrating moral, intellectual, and physical excellence during their life.
In the 21st century, technology makes it apparent that humans are limited by their physiology. Even the fastest speed reader cannot match the processing power of an AI system, which can query countless databases in an instant. Incorporating technology into our lives would allow humans to excel in both attributes necessary for the attainment of eudaimonia. It can make us better technicians: if a good lawyer is one who wins most of their court cases, the best lawyer could win them all by instantly retrieving the most applicable case from every legal database. And if a eudaimon agent possesses perfectly functioning natural parts, imagine incorporating vision which could help them make safer choices (e.g. glasses which use AI to scan the road before crossing). I argue that a modern Aristotelian agent must embrace AI systems in order to become more virtuous.
Functionally, a shift is already taking place in society. AI systems provide cheap and tireless labour, which, for many profit-oriented companies, is sufficient justification for introducing AI decision-makers into our daily lives. Intelligent software already performs many formerly human tasks, such as catching speeders and monitoring hate speech online. In decision-making, AI already plays an important role in hiring employees and admitting students to universities, as well as in sentencing and granting bail. Machines are also value-neutral: as they do not think or have a human-like mind, they cannot harbour any predisposition towards favouring one group over another unless they are expressly instructed to do so. As our society realises the importance of equity, there is a strong pull towards implementing technology that cannot be swayed by chauvinistic or racist convictions and can ensure just hiring, firing, and sentencing practices. This branch of research, sometimes called "machine ethics," centres on promulgating shared ethical values. But what happens when AI decision-makers produce unethical outcomes?
Amazon's intelligent hiring software was scrapped because of its bias against women, which seems to contradict the argument above. However, the result produced by the AI system is not the fault of the system per se, but of Amazon's own hiring data, which, when processed by a machine, revealed a discriminatory trend. A machine cannot add or remove bias; it only learns from the data it is provided. In a sense, this is good news for AI: Amazon's software put a spotlight on the implicit bias in the hiring department, the work culture, and the general attitudes in tech that left women and femmes underrepresented. It did not show that the AI system lacked virtue (at least teleologically), nor that its implementation works against the shared ethical goal. Certainly, if left unsupervised and fed faulty data, AI systems will yield absurd results. But if treated as a tool, for decision-making as well as data analysis, AI can radically narrow the gap between the practices a virtuous society aspires to and those it has implemented historically. Moreover, this is an example of how an AI system on its own is unable to ethically self-correct; human presence adds "moral reasoning, sensitivity to evolving norms, or a pragmatic assessment of what works." The human-machine hybrid is the most effective arrangement: combining the speed and processing power of AI with the moral intuition of a human, the collaborative effort yields the most virtuous outcome.
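A minimal sketch can illustrate the mechanism (an assumption-laden toy of my own, not Amazon's actual system): a "model" that merely learns acceptance rates per group from past human decisions will faithfully reproduce whatever bias those decisions contain.

```python
# Toy illustration: a model trained on historically skewed decisions
# mirrors the skew. The data below is invented for the example.
from collections import defaultdict

def train(history):
    """history: list of (group, hired) pairs from past human decisions.
    Returns the learned acceptance rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

# Hypothetical historical data, already skewed by human bias:
past = [("men", True)] * 8 + [("men", False)] * 2 \
     + [("women", True)] * 2 + [("women", False)] * 8
rates = train(past)
print(rates)  # {'men': 0.8, 'women': 0.2} -- the model mirrors the bias
```

The code adds no bias of its own; the discriminatory trend is entirely an artefact of the training data, which is precisely why a human supervisor is needed to interrogate the data rather than the tool.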
Artificial Intelligence systems can act as useful tools in decision-making because of their processing abilities. They can also highlight urgent problems within fields that require unbiased decisions. Technological advances allow us to maximise our own capabilities as virtuous agents by becoming technically and comprehensively superior. It is important to recognise that an AI decision-maker may generate undesirable results if fed improper data, and it therefore must not be left to make decisions on its own. However, the presence of a human who can effectively wield it as a tool will create a union which functions in accordance with the eudaimonic pursuit and renders society more virtuous.