
Why Ethics in Artificial Intelligence Matters

For all the talk about artificial intelligence (AI) transforming how businesses operate, compete, and grow, there are still many questions that need to be asked and answered if we are to truly unleash the power of this still-new technology. It’s true that we’re a long way off from ultra-powerful (and potentially malevolent) AIs like Skynet in the Terminator series, the amazingly self-aware and manipulative heroine of Ex Machina, or any other depiction of AI in pop culture. But that is precisely why now is the best time to consider the ethics of AI and define guidelines to follow, so that as AI programs become more sophisticated, they do not inadvertently endanger our very existence, at least as we know it today.
To that end, let’s take a closer look at some current thinking on AI ethics and how it can be put into practice in today’s world.

What is AI ethics?

Generally speaking, AI ethics deals with the moral behavior of artificial intelligence systems. It aims to prescribe a set of rules and guidelines for how we design and build AI systems so that they are not used to harm humans.
AI ethics has its roots in science fiction author Isaac Asimov’s Three Laws of Robotics, which state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, what constitutes “harm” can be a bit of a gray area. Yes, AI systems should not physically hurt or endanger humans. But is it harmful for AI systems to deliver information about products that are personalized to an individual? Is it harmful to use AI-driven facial recognition software to verify a person’s identity? The key to understanding what constitutes harm may lie in the context in which an AI system is developed and deployed. Even where the intentions behind a deployed AI are good, it may still have unintended consequences.
This has led to the concept of “Trustworthy AI,” which the European Commission describes as “human-centric” AI developed “with the goal of improving human welfare and freedom.” The Commission’s recently published ethics guidelines further define this concept through three key components:

  1. It [AI] should be lawful, complying with all applicable laws and regulations;
  2. It should be ethical, ensuring adherence to ethical principles and values; and
  3. It should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

The Commission points out that each of these three components can certainly be implemented on its own, but only in combination do they yield AI that can be deemed “trustworthy.”

Application of AI Ethics in Business

While this may sound abstract, even lofty, there have already been real instances of AI systems violating these guidelines. In one case, an MIT study found that three commercially available facial recognition systems were accurate almost exclusively for white men, with error rates climbing sharply for women and people of color, reaching roughly 35 percent for darker-skinned women. In another, Twitter users quickly taught Microsoft’s Tay chatbot to post racist content. Examples like these surface regularly.

For businesses looking to implement AI, it’s essential to develop a program that includes a commitment to the ethical behavior and use of AI systems. That includes developing policies that define the company’s approach, the parameters it will operate within, and the consequences should those parameters be violated. A good example is Microsoft’s own statement of principles for AI. At the very least, companies implementing AI programs need to set up an audit system that provides regular oversight and ensures they are not inadvertently (or purposefully) harming humans.
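
As a concrete starting point for such an audit, the sketch below shows one simple check in plain Python: comparing a model’s accuracy across demographic groups so that disparities like the ones described above are surfaced before deployment rather than after. The field names (group, label, prediction) and the toy data are hypothetical placeholders, not part of any specific product or framework; a real audit would draw on production logs and cover more than accuracy alone.

```python
from collections import defaultdict

def audit_accuracy_by_group(records):
    """Compute prediction accuracy per demographic group.

    `records` is a list of dicts with hypothetical keys:
    'group' (a demographic label), 'label' (ground truth),
    and 'prediction' (the model's output).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["prediction"] == r["label"]:
            correct[r["group"]] += 1
    # Accuracy per group; large gaps between groups are a red flag.
    return {g: correct[g] / total[g] for g in total}

if __name__ == "__main__":
    # Toy data illustrating the kind of disparity an audit should surface.
    sample = [
        {"group": "A", "label": 1, "prediction": 1},
        {"group": "A", "label": 0, "prediction": 0},
        {"group": "B", "label": 1, "prediction": 0},
        {"group": "B", "label": 0, "prediction": 0},
    ]
    for group, acc in audit_accuracy_by_group(sample).items():
        print(f"group {group}: accuracy {acc:.0%}")
```

Run regularly against live data, even a check this simple turns an abstract ethics policy into a measurable, reviewable number.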

If your company is looking to explore the benefits of AI, we can help. Contact us to discuss both potential benefits and strategies for avoiding unethical implementations.

Best regards,
Florian Horn
