Ethical AI. What does it mean? Why is it important? Why is it so difficult to do?

July 8, 2020

Young people increasingly live in a world transformed by artificial intelligence, and those who understand and leverage it will lead richer and more fulfilling lives. This article provides a framework and an interactive experiment for teachers and parents to begin having a conversation about ethical AI. You can learn more by clicking on any of the links for additional context and information.

What does ethical AI mean?

Ethical AI is composed of two words: ethical and AI. Ethics means the moral principles that govern the behavior and actions of a group or individual. AI is the theory and development of computer systems able to perform tasks normally requiring human intelligence. One way of defining ethical AI is to ask: how right, how fair, and how just are the AI’s output, outcome, and impact?

AI can produce biased results if it is trained with biased data or if it is developed in a way that inadvertently introduces bias.

How can biased data create AI that isn’t ethical? As one example, in 2016, Microsoft released a new chatbot called Tay on Twitter. Tay was designed to learn by engaging people in dialogue through tweets or direct messages. Trolls quickly inundated Tay with misogynistic, racist, and anti-Semitic messages (biased data) and, within hours, taught Tay to give vile and repulsive responses. Twitter users became outraged by Tay’s toxic responses, and in less than 24 hours, Microsoft silenced Tay forever.

Besides using unbiased data, ethical AI products require the right learning model for the problem (meaning the right math is used to determine how the AI works, so that algorithmic bias isn’t inadvertently created), and their performance must be monitored to evaluate whether the results they produce are right and fair.
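
For readers curious about what “monitoring the results” can look like in practice, here is a minimal, illustrative sketch in Python. The data, the groups (group_a and group_b), and the hiring scenario are made up for this example and are not any company’s actual process; the sketch simply compares how often an AI recommends candidates from two groups and flags a large gap.

```python
# A minimal sketch of the "monitor the results" step described above:
# after an AI makes decisions, compare outcomes across groups.
# The records below are invented for illustration only.

# Each record: (group the applicant belongs to, did the AI recommend hiring them?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def selection_rate(records, group):
    """Fraction of people in `group` that the AI recommended."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(decisions, "group_a")
rate_b = selection_rate(decisions, "group_b")

print(f"group_a rate: {rate_a:.0%}, group_b rate: {rate_b:.0%}")

# One common fairness check: flag the model if one group's selection rate
# falls below 80% of another's (a commonly cited rule of thumb).
if min(rate_a, rate_b) < 0.8 * max(rate_a, rate_b):
    print("Warning: the AI's outcomes look unequal across groups -- investigate for bias.")
```

Running this sketch prints the two rates and a warning, because the made-up data favors one group; a real monitoring process would look at many more outcomes and many more measures of fairness, but the idea is the same.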

Why is ethical AI important?

Few people care if the AI in a program that draws cats is ethical. However, when AI is used in sectors such as medicine, law enforcement, recruiting, data privacy, military defense or self-driving vehicles, the AI must produce transparent and understandable results that reflect the ethical standards and norms of our society.

As examples, AI that’s not ethical could result in:

  • Biased law enforcement or job candidate recruiting producing discriminatory results
  • AI products that erode people’s privacy or misuse their data in unintended ways
  • Military defense systems or self-driving vehicles whose AI makes decisions that are impossible for people to understand, making it difficult to assess liability for damages when harm is caused

Why is it so difficult to develop ethical AI?

We all know the difference between right and wrong, so why is it difficult to develop ethical AI? It turns out to be much harder than it may appear, because the data scientists developing the AI must build it to make decisions that are moral and acceptable to society in a complex world where the unexpected can happen.

An ethical AI design experiment - what would you do?

Let’s pretend that you’re a data scientist who is developing AI for a self-driving car and a self-driving trolley, and both will use your AI to make driving decisions. Now, click on the link and watch a one-minute YouTube video, The Trolley Problem.

How would you design the self-driving trolley’s AI? Would you build it so that the trolley stays on its tracks and kills six people? Or instead, would you design it to switch to a second track and kill only one person? Which choice is more right and more fair?

Would your decision be different if the one person on the second track were a close family member, such as your brother or sister? Would your answer change if a mom and her three little children were on the second track? These are difficult decisions. There’s no easy answer, and this is an example of the kind of ethical decision that data scientists make.

Finally, watch a four-minute YouTube video to learn more about The ethical dilemma of self-driving cars.

How are companies determining if they are building ethical AI?

An increasing number of technology companies are designing guidelines, developing tools, and establishing governance teams to address ethical issues such as a lack of transparency and bias. Google has developed seven AI principles that address bias, safety, and accountability and that are used to assess whether its AI adheres to ethical standards. Microsoft specifies six principles to be used when developing trustworthy AI, including transparency, privacy, and fairness.

Additional resources for teachers and parents

  • Learn how your Trolley Problem answers compare with others in an interactive game by clicking on MIT’s Moral Machine and then on the “Start Judging” button to keep learning
  • An infographic to help continue the conversation – Can AI Think Ethically?
  • An article that offers a more detailed discussion about ethical AI – Ethical AI

This is the fourth in a series of articles written for FOSI by AI Literacy about how parents and teachers can engage with their children and students to discuss AI. AI Literacy believes that the responsible and thoughtful use of AI will lead to a bright future for mankind. Learn more about AI by visiting www.ailiteracy.org.

Written by

Daniel Kent


Daniel Kent is the founder and CEO of Net Literacy, an all-volunteer, student-run nonprofit that bridges the digital divide through its digital literacy and digital inclusion programs. Net Literacy has increased computer access for over 250,000 individuals, was highlighted in the National Broadband Plan presented to Congress, and has been honored by two American Presidents. Kent has authored whitepapers on Digital Inclusion, Digital Literacy, and Broadband Adoption, and he works in Silicon Valley building products that use machine learning and artificial intelligence. His MBA is from the Yale School of Management and his Master’s in Information and Data Science is from UC Berkeley.