The recent release of ChatGPT was met with such enthusiasm that the system is regularly unable to cope with the massive user demand for its service. However, the system not only introduced a useful tool that people enjoy; it also opened, or reintroduced, a whole array of ethical issues and questions related to the use of artificial intelligence (AI).
By way of background, ChatGPT is a language processing system, also called a Large Language Model or LLM, that can generate logically coherent and (in most cases) meaningful and well-written responses to questions posed online. It is also available for free, at least for now. It was developed by an artificial intelligence (AI) service provider by the name of OpenAI, and it is currently still in an experimental phase.
The “GPT” in ChatGPT stands for “Generative Pre-trained Transformer”, which indicates that the system was taught, or trained, by being fed massive amounts of human-generated text that it can now draw on when composing written responses to the questions posed to it. It is thus machine learning that enables the system to respond intelligently, and in full sentences, to users’ questions.
Over the past few weeks, several testimonies to the usefulness of ChatGPT have been published in mainstream and social media. Probably the clearest indicator of the system’s usefulness is that it reportedly pushed Google to declare a “code red” over the business continuity risk that ChatGPT poses to the company. The fact that ChatGPT responds to questions with a well-tailored and unique answer, and not merely with links to websites where one must search for answers, makes the business risk to Google obvious.
The relationship between ethics, ChatGPT, and AI in general, is an interesting and intriguing one.
Ethics and AI
ChatGPT can be a most useful companion for anyone interested in ethics. It can provide one with informative answers about, for example, what ethics is, the nature of moral dilemmas, the different sources and traditions of ethical principles, ethical decision-making models, the ethical pitfalls of AI, and a lot more. Although the system claims that it cannot make ethical decisions or resolve moral dilemmas, it nevertheless responded quite impressively when I asked it to advise me on what the best course of action would be in specific ethical dilemma scenarios. It also provided guidance on particular issues to consider when resolving a specific ethical dilemma. ChatGPT can be a handy tool for gaining more insight into ethics.
Ethics of AI
Technology is without exception ethically ambiguous. It can be put to good use, but it can also be used to harm innocent victims. Online banking, for example, can make one’s life much easier, which is a good thing. It can, unfortunately, also be used to defraud online banking users, which is unethical and illegal. AI and specifically ChatGPT are no exceptions in this regard. They are also ethically ambiguous. When I asked ChatGPT about its ethical ambiguity, it was quick to admit that it was guilty as charged.
All AI, including ChatGPT, suffers from the potential danger of data bias. The quality and relevance of an AI-generated response crucially depend on the data the system has access to. Very often the data that AI systems use to compute answers to questions is biased in some way or another. It can be biased in terms of gender, race, or geography. It can also be biased in terms of the selection of data on which the system is trained, the algorithms that are used, or the decision-making rules that are programmed into the system.
The data bias that is very often built into AI systems can lead to unfair discrimination against people of a specific race, gender, sexual orientation, economic class, or geographic region. When this happens, the offending parties are usually very quick to blame the system for these lapses. But the system can only use the data it is exposed to, and developers of AI systems surely have a say in which data their systems can access, and in how the data will be processed.
A particular ethical problem that ChatGPT brings to the fore is plagiarism. Students, academics, journalists, and others in professions that require the generation of written text can easily be tempted to ask this new chatbot to produce the required text and then present it as their own work. Plagiarism is a form of intellectual fraud, as it consists of deceiving someone else with what you present to them as your work, while in fact it originated from another person, source, or AI system.
The leading academic journal Science recently placed a ban on ChatGPT being listed as a co-author of academic articles. Some schools in the USA have also blocked access to ChatGPT on their school systems. Universities have been struggling with the scourge of plagiarism for decades and have found some useful software with which the work of students and academics can be tested for originality. But ChatGPT has raised the bar for detecting plagiarism, as it does not merely copy and paste from existing knowledge and information resources, but generates unique written responses that are hard to flag with existing anti-plagiarism software.
There are a host of other ethical concerns about the use of AI in general and ChatGPT in particular. If you don’t believe me, just ask ChatGPT!
What can be done?
Since ChatGPT has been found to be morally ambiguous, as argued above, what can be done about the dark side of this new chatbot? I will reflect only on the two dark sides of ChatGPT discussed above: bias and plagiarism.
The bias built into AI systems that results from the selective use of data can be addressed through human intervention. It is imperative that AI systems be continuously monitored and audited for whether some form of bias is built into them. Users of such systems should also be provided with a feedback mechanism to report any bias or unfair discrimination they detect. But such corrective measures should be applied not only retrospectively but also prospectively.
Ethics experts should be part of the design of AI systems to ensure a focused approach to preventing bias from the outset. Autonomous, self-learning AI systems might change this dynamic, which raises a serious question about the ethical desirability of such autonomous systems.
Tackling the potential proliferation of plagiarism as a result of AI solutions like ChatGPT is another ethical imperative. Detecting plagiarism in the age of AI can probably only be achieved with the aid of AI. Several software solutions for detecting AI-generated text are already available. However, until such programs have proved themselves sufficiently reliable and accurate, we probably have little choice but to focus on the personal and professional integrity of those who must generate original text as part of their role responsibility, such as students, academics, and journalists.
It is thus hugely important that academic and professional integrity be cultivated in the age of AI. This can be done through extrinsic motivation, where people are made aware of the dire consequences of committing plagiarism with the aid of AI. However, such an approach is tiresome and difficult, as perpetrators first have to be caught out before those consequences can be meted out.
Consequently, old-fashioned personal and professional integrity where people identify with the ethical standards of their institutions or professions remains a crucial defence against the new AI-inspired scourge of plagiarism.
About the author: Prof Deon Rossouw is the CEO of The Ethics Institute and an Extraordinary Professor in Philosophy at Stellenbosch University.