The business use of artificial intelligence is booming. Indeed, the technology already outperforms humans at certain tasks, such as diagnosing diseases or reporting the news. However, the use of these kinds of smart, autonomous systems also carries a number of ethical risks, which I would like to elaborate on here.
Professor Dr. Jürgen Angele, Head of Competence Center AI at adesso
In recent years, artificial intelligence and machine learning have produced several technological breakthroughs. The computer program AlphaGo, for example, managed to beat the world's best player of the board game Go, and we see more and more electric cars on the streets that can drive largely autonomously. At the same time, this technology makes it possible to imitate human speech almost perfectly, and with so-called "deepfakes", videos of people can be manipulated until they are almost indistinguishable from the real thing. This already shows that the technology can be used for both ethical and unethical purposes. In that sense, AI is no different from a hammer: you can use it to drive a nail, but also to crack a skull. The danger with AI, however, lies in the autonomy of the application. An algorithm has no ethical framework of its own; it only looks for the best or most efficient solution to a problem. And that can lead to ethically irresponsible decisions, of which humans are ultimately the victims.
One of the most telling examples of an ethical problem with AI is the self-driving car. Currently, in most countries you are still required to keep your hands on the wheel, even when the self-driving functionality is active, so you remain responsible for any accidents yourself. But as the technology becomes smarter and more reliable, this will undoubtedly change. The question then is who will be responsible in the event of an accident. The computer? The supplier of the system? And how will insurers view this?
Imagine that a self-driving car finds itself in a situation where it has to swerve to avoid pedestrians or another obstacle. In a split second, it must decide whose life has the highest priority: that of the driver, of one or more pedestrians, or perhaps of an elderly person versus a child.
This is an interesting thought experiment for which we currently have no clear solution. Yet it is only the tip of the iceberg of the ethical AI issues that lie ahead in business.
In practice, we see more and more organizations investing in technology to integrate practical AI applications into their business operations. The effectiveness of these systems, however, is determined not only by the technology, but to a large extent by the data and knowledge available within the organization. The reliability and quality of the data are of great importance here, as is the way in which the algorithms are created and managed. And this is where the ethical dilemma comes in. If you give an algorithm free rein to determine how an organization can best save money, for example, it may suggest all sorts of immoral measures that affect people, partners and customers. To avoid this, an AI needs an ethical framework within which to operate. In addition, humans must always remain in control of decisions where there is doubt about the ethical and moral consequences. An AI system must never become a black box that makes decisions we as humans can no longer comprehend or control.
The tech giants are currently racing to be the first to realize a universal artificial intelligence. A party like Facebook sees only benefits and seems blind to the risks, and its repeated data breaches and privacy violations do little to inspire confidence. In contrast, there are organizations like OpenAI that focus specifically on charting a path to a safe form of general artificial intelligence. Let's hope that these opposing attitudes eventually lead to a standard that legally restricts the use of AI, ensuring that it is always used in a safe and ethical manner. At least, in the business world... Because among the international superpowers, an arms race around AI is also underway. With countries such as the United States, China and Russia participating, we can likely expect plenty of fireworks at the international level. Let's hope that these countries will also conform to generally applicable ethical standards and not indulge in doomsday scenarios from movies like The Terminator or The Matrix.
Want to learn more about the practical and ethical application of AI at companies? Then sign up for free for the Breakfast Masterclass: "Ethics and AI in your daily business" with Professor Dr. Jürgen Angele, Head of Competence Center AI at adesso.
What: Breakfast Masterclass #1: “AI in your daily business”
Who: Professor Dr. Jürgen Angele, Head of AI @ adesso
When: 11 March 2021 (9.00 – 10.00)