29 January 2021, by Jürgen Angele
How important are ethics in applied artificial intelligence?
The business use of artificial intelligence is booming. The technology already outperforms humans in certain tasks. Think, for example, of diagnosing diseases or reporting news. However, there are a number of ethical risks associated with the use of these kinds of smart autonomous systems. I would like to elaborate on that here.
In recent years, artificial intelligence and machine learning have produced several technological breakthroughs. For example, the computer programme AlphaGo succeeded in beating the best human player of the strategy board game Go, and we are seeing more and more electric cars on our streets that can drive largely autonomously. But at the same time, this technology also makes it possible to imitate human speech almost perfectly, and with so-called 'deep fakes', videos of people can be manipulated so convincingly that they are almost indistinguishable from the real thing. This already shows that the technology can be used for both ethical and unethical purposes. In this sense, AI is not unlike a hammer: you can hammer a nail with it, but also a skull. The danger with AI, however, lies in the autonomy of the application. An algorithm has no ethical framework of its own. It only looks for the best or most efficient solution to a problem. And that can lead to ethically irresponsible decisions, of which humans are ultimately the victims.
One of the most striking examples of an ethical problem with AI is the self-driving car. Currently, in most countries it is still mandatory to keep your hands on the wheel, even when the self-driving functionality is active. So for now, the driver bears responsibility for any accidents. But as the technology becomes smarter and more reliable, this will undoubtedly change. The question is: who will be responsible in the event of an accident? The computer? The supplier of the system? And how will insurers view this?
Imagine that a self-driving car gets into a situation where it has to swerve to avoid a pedestrian or another obstacle. In a split second, it must be decided whose life has the highest priority: that of the driver, one or more pedestrians, or perhaps an elderly person versus a child.
This is an interesting thought experiment, for which we currently have no clear solution. It is, however, only the tip of the iceberg of the ethical AI issues that lie ahead in business.
In practice, we see that more and more organisations are investing in technology to integrate practical applications of AI into their business operations. However, the effectiveness of these systems is not only determined by the technology, but to a large extent by the data and knowledge available within the organisation. The reliability and integrity of that data are of great importance here, as is the way in which the algorithms are created and managed. And this is also where the ethical dilemma comes into play. For example, if you give an algorithm the freedom to determine how an organisation can best save money, it can suggest all kinds of immoral measures that would harm people, partners and customers. To prevent this, an AI needs an ethical framework within which it operates. In addition, humans must always remain in control of decisions where there is doubt about the ethical and moral consequences. An AI system must never become a black box, making decisions that we humans can no longer understand or control.
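To make the idea of an "ethical framework" more concrete, here is a minimal sketch in Python. Everything in it (the action names, the constraint flags, the escalation rule) is an illustrative assumption, not a real system: the point is simply that the optimizer may only choose among actions that pass hard ethical constraints, and hands the decision back to a human when no permitted option remains.

```python
# Illustrative sketch only: a cost-saving "optimizer" constrained by
# hard ethical rules. All names and rules here are hypothetical.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Action:
    name: str
    savings: float          # projected yearly savings
    harms_customers: bool   # flagged during human review
    breaks_regulation: bool


def is_ethically_permitted(action: Action) -> bool:
    """Hard constraints: an action failing any rule is discarded,
    no matter how much money it would save."""
    return not (action.harms_customers or action.breaks_regulation)


def best_permitted_action(candidates: List[Action]) -> Optional[Action]:
    """Optimize only over the ethically permitted subset."""
    permitted = [a for a in candidates if is_ethically_permitted(a)]
    if not permitted:
        return None  # no acceptable option: escalate to a human
    return max(permitted, key=lambda a: a.savings)


candidates = [
    Action("cancel staff training", 50_000,
           harms_customers=False, breaks_regulation=False),
    Action("sell customer data", 200_000,
           harms_customers=True, breaks_regulation=True),
]
print(best_permitted_action(candidates).name)  # cancel staff training
```

Note the design choice: the ethical rules act as hard constraints rather than as costs to be traded off, so no amount of projected savings can ever "buy" an impermissible action, and an empty permitted set forces the decision back to a person.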
International AI race
The tech giants are currently racing to be the first to realise a universal artificial intelligence. A party like Facebook seems to see only the advantages and to be blind to the risks, an impression reinforced by its repeated data leaks and breaches of privacy protection. On the other hand, there is an organisation like OpenAI, which focuses specifically on charting a path to a safe form of general artificial intelligence. Let's hope that these opposing attitudes will eventually lead to a standard that restricts the use of AI by law and regulation. This would ensure that AI is always used in a safe and ethical way. At least, in the business world...
Because among the international superpowers, an arms race around AI is also under way. With players like the United States, China and Russia involved, we can expect plenty of fireworks at this international level. Let's hope that these countries, too, conform to generally applicable ethical standards and do not let themselves be tempted into the doomsday scenarios of films like The Terminator or The Matrix.