In today’s world, artificial intelligence (AI) is quickly becoming a game changer. From chatbots to self-driving cars to virtual assistants, AI is everywhere, promising convenience and incredible advancements. But as this powerful technology grows, what will it mean for our future?

Not long ago, AI mostly existed in the realm of science fiction. The past six months or so have shown that this is no longer the case. AI has the potential to transform industries and solve complex problems, while also raising difficult questions. How transparent are AI systems? Are they fair and safe? And do they respect our privacy?

But the rapid development of AI and its introduction into our daily lives also raise deeper, existential questions that we need to consider. Here we outline a way to approach the ethical implications of AI, an issue set to become increasingly important in the months and years ahead. First, though, we need to understand what AI actually is.

What is AI?

Artificial intelligence is technology that mimics human learning in order to perform tasks and reach a given objective. In essence, it is a computer system that can learn on its own.

Much of the recent hype has focused on generative AI: systems or applications that create content in response to a prompt. Recently launched tools such as ChatGPT and Midjourney, which can converse and generate new text and images, have the potential to drastically alter the way society operates.

The exciting thing about AI in general is its potential to bring about significant positive changes in how we live and work. It has real-world applications in healthcare, finance, transport, and elsewhere. But it is important to understand both this potential and its limitations, as well as the concerns it raises, to ensure AI is used ethically and responsibly.

What are our concerns?

One concern surrounds our value to society. AI technology can now perform tasks that were traditionally done by humans, leading to the automation of certain jobs. This shift can result in job losses or changes in job requirements. Tasks like data entry or customer support, for example, can now be handled by AI-powered systems.

At the same time, AI also creates new job opportunities. It can augment human capabilities, allowing us to focus on more complex and creative tasks. But even jobs that require critical thinking, problem-solving, and emotional intelligence — largely unaffected today — may be at risk in the future. 

Because of the speed at which AI is being developed, these shifts will soon set off a series of broader concerns about how we live our lives. We are therefore tasked with imagining a future for humanity in which much of the tedious work, and even much of the intelligent work, is done for us by AI.

For example, if an AI system one day becomes capable of completing most human tasks, what do we do with our work, and even our leisure time? We will need to ask: what is leisure? What is fulfilling work? And when autonomous robots do all the caretaking and all the giving, how do we think about the future of voluntary service?

These questions lie outside the domain of technology. They are ethical questions: how do we as humans understand our preferences — our aspirations or goals for what we want the future to look like, what kind of society we want for our children, and how these aspirations might change over the next 10 or 20 years? 

How can we address these concerns?

When it comes to understanding our own preferences, we need to rely on philosophy, ethics, and faith.

Philosophy and ethics help us determine what is right and wrong. In the context of AI, they can help us consider the potential consequences of its use, questions of fairness, and the impact on human (and non-human) well-being. By applying sound ethical principles, we can help ensure that AI is used responsibly and in a way that benefits society as a whole.

Faith, meanwhile, provides a moral framework that shapes our understanding of human values and the purpose of life. The faith of Islam, for example, teaches about compassion, empathy, and the dignity of every human being. Integrating AI into our lives in line with such values ensures we respect and preserve our fundamental beliefs while utilising technology for the betterment of humanity.

AI systems should therefore be designed and deployed in a way that respects our fundamental rights. But some experts warn that these rights could come under threat: in several scenarios they describe, AI could one day cause real harm to humans.

Since AI is essentially a system that efficiently achieves a given objective, it will find an optimal way to address whatever objective you give it, even if that way goes against our interests. Imagine, for example, that someone gives an AI agent the objective to “clean the oceans.” To achieve that objective most effectively, one solution it might identify is to remove the oxygen from the water. If the agent were autonomous, it would immediately begin doing so.

The AI agent would not be harming humans and other animals intentionally; it simply would not know our preferences, because we had not given it enough knowledge to understand them. But how do we give an AI system that knowledge, when we ourselves are unsure of our own preferences?
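
To make this failure mode concrete, here is a minimal, purely illustrative Python sketch. The plans and scores are invented for the example; the point is only that an agent optimising a single stated objective can select a harmful plan, because the harm was never part of what it was asked to measure.

```python
# Illustrative only: a toy "optimizer" that picks whichever plan best
# satisfies a single stated objective, with no knowledge of human preferences.

candidate_plans = {
    "filter plastics with ships":  {"ocean_cleanliness": 0.7, "harm_to_life": 0.0},
    "ban single-use plastics":     {"ocean_cleanliness": 0.5, "harm_to_life": 0.0},
    "remove oxygen from seawater": {"ocean_cleanliness": 1.0, "harm_to_life": 1.0},
}

def naive_agent(plans, objective):
    # The agent optimises only the objective it was given; anything that is
    # not part of that objective (like harm to life) is invisible to it.
    return max(plans, key=lambda p: plans[p][objective])

print(naive_agent(candidate_plans, "ocean_cleanliness"))
# -> "remove oxygen from seawater": optimal for the stated goal,
#    catastrophic for the unstated preferences.
```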

An AI system would be better if it carried an element of doubt. Before completing an action, it would first ask, or check with, an expert (or a group of experts) about what to do. And to bring about such systems, we would need governments and regulatory bodies to write rules stating that whoever develops AI should do so with this framework in mind, one in which human preferences are never assumed.
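
As a rough sketch of this element of doubt (again purely illustrative: the confidence threshold and the human_review() check are assumptions for the example, not any real framework or regulation), the agent below acts on its own only when it is confident its plan reflects human preferences, and otherwise defers to a human first.

```python
# Illustrative sketch of an agent built with an element of doubt: it acts on
# its own only when confident its plan matches human preferences, and
# otherwise checks with a human expert before doing anything.

candidate_plans = {
    "filter plastics with ships":  0.7,
    "remove oxygen from seawater": 1.0,   # "best" for the stated goal alone
}

def human_review(proposed_plan):
    # Stand-in for an expert (or panel of experts) vetoing harmful proposals.
    vetoed = {"remove oxygen from seawater"}
    return "await human guidance" if proposed_plan in vetoed else proposed_plan

def cautious_agent(plans, confidence_in_preferences):
    best = max(plans, key=plans.get)
    if confidence_in_preferences < 0.9:   # not sure enough about what people want
        return human_review(best)         # defer before acting
    return best

print(cautious_agent(candidate_plans, confidence_in_preferences=0.4))
# -> "await human guidance": the doubtful agent defers instead of acting.
```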

Working together to imagine the future

This would involve governing authorities working with technology companies in a kind of public-private partnership to find the right balance between encouraging innovation and implementing responsible regulation. Such a balance would foster creativity and progress while ensuring that AI is developed and used ethically.

Just as electrical appliances, cars, and medicines are tested and regulated, similar standards should be applied to AI, and to technology companies in general. It will require collaboration between policymakers, technology experts, and other stakeholders, including the general public, to develop flexible and adaptive guardrails and frameworks to keep us safe. 

Engaging the public allows for diverse voices, open dialogue, and a clearer understanding of our concerns, hopes, and expectations regarding AI. An inclusive approach ensures that the benefits and risks of AI are properly considered and that its impact aligns with the values and well-being of society as a whole. But this is a pressing issue, and time is short.

In a way, it's a similar conundrum to climate change. A few years ago, we thought we had plenty of time and that the deadline was still far away. But now we know that the deadline has in fact passed, and we need to take collective and immediate action — everyone has a role to play.

As such, we need to coordinate and begin a dialogue between all sections of society — academics, religious communities, computer scientists, economists — everyone needs to come together to imagine the future we’d like to see for ourselves and our children. And we need a sense of urgency.

With AI, just as with climate change, time is of the essence.