Morality, AI and the Future of Humanity
Two pressing questions shape our time:
- What is the moral state of humanity?
- How powerful will Artificial Intelligence (AI) become?
At first glance, these questions may appear unrelated. But allow me to explain.
Think about international news. Talk of war and genocide dominates the headlines. World powers are at each other’s throats, and geopolitical tensions are rising. Countries are thoughtfully reminding each other of their nuclear capabilities.
At the same time, online algorithms sort people into carefully curated echo chambers. We end up holding such opposing views that it’s hard to believe we are observing the same reality. Depending on your digital environment, the world is either in crisis or on the brink of a new golden age. People live in either fear or excitement, often centred on two Americans who appear to wield unfettered power.
With all of this happening, it is difficult to tell – are the dominant forces in our world primarily good or bad?
To my mind, if those forces are primarily good, they will lead to better outcomes for all of humanity. If they are bad, they will bring greater instability and harm. But what if they are good for some and bad for others? Is the responsibility of the US president not simply to do what is good for the American people, even if that comes at a cost to the rest of the world?
Against such a backdrop, ethics is a useful tool for navigating these big questions. In its simplest form, ethics suggests that we should consider not only what is good for ourselves but also what is good for others, and that in doing so we will create a more just world that is good for everybody.
Some may believe that such a world is merely a utopian fiction, and that each country should only fight for the good of its own people. Many world leaders today seem to embrace this view. But if we accept this logic, why stop at the national level? Should different social groupings not also oppose each other in pure self-interest? And where would it end?
The universalisability principle of ethics offers useful guidance. It asks: ‘What would the world look like if everyone acted this way? And would we want to live in such a world?’ The answer often reveals that ruthless short-term self-interest can be counterproductive in the long term, even to oneself.
These questions may seem abstract and unimportant, but deciding where we stand morally has never been more urgent.
The reason for this urgency is that we are training AI on an unprecedented scale. We are, in a sense, creating AI ‘in our own image’. It will inevitably be shaped by our current biases and moral beliefs, whether we recognise them or not. The foundations that we lay today will be projected exponentially into the future.
AI is still in its infancy compared to where it will be fifty or a hundred years from now. (To get an inkling of what this might mean, one merely has to think of the progress of the mobile phone in the last 20 years.)
There’s no doubt that AI will play an ever-greater role in our lives. It already influences hiring decisions and who gets access to certain financial services. It likely has an impact on our very identity by determining which online echo chamber we are categorised into. We cannot possibly predict the future uses of AI, nor the full extent of its potential impact on humanity.
Whether that potential impact is positive or negative depends on how we train AI. While we may not think of ourselves as AI trainers, we do form part of the society that serves as a role model for AI. AI will largely mirror what it gets from us.
We get to choose whether we model an expansive definition of ethics (considering what is good for all of humanity) or a more restricted one, considering only what is good for our own small group. It is therefore crucial that we reflect on the question: ‘What kind of world would we be creating if AI adopted my beliefs?’
As we continue to develop AI, we should be sure that we – and future generations – can live with the consequences of the values we embed in it today.

Kris Dobie is Senior Manager: Organisational Ethics at The Ethics Institute.

