The ethics behind artificial intelligence will help determine what our future looks like. Consumers seem to know or sense this, and increasingly demand ethical behaviour from the AI systems of the organisations they interact with. According to a new Capgemini study, however, only about half of the executives interviewed consider it important that AI systems are ethical.

The researchers found that:

  • Ethics drive consumer trust and satisfaction. In fact, organisations that are seen as using AI ethically enjoy a 44-point Net Promoter Score (NPS) advantage over those seen as not using AI ethically (see the short note on how NPS is calculated after this list).
  • Among consumers surveyed, 62 percent said they would place higher trust in a company whose AI interactions they perceived as ethical; 61 percent said they would share positive experiences with friends and family.
  • Executives in nine out of ten organisations believe that ethical issues have resulted from the use of AI systems over the last two to three years, citing examples such as the collection of personal patient data without consent in healthcare, and over-reliance on machine-led decisions without disclosure in banking and insurance. Almost half of consumers surveyed (47 percent) believe they have experienced at least two types of AI use that resulted in ethical issues over the same period. At the same time, over three-quarters of consumers expect new regulations on the use of AI.
  • Organisations are starting to realise the importance of ethical AI: 51 percent of executives consider it important to ensure that AI systems are ethical and transparent.
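
For readers unfamiliar with the metric: NPS is the percentage of promoters minus the percentage of detractors, so a 44-point gap is a large swing on a scale that runs from -100 to +100. The sketch below is purely illustrative; the respondent counts are hypothetical and are not taken from the Capgemini study.

```python
# Illustrative only: how a Net Promoter Score (NPS) is computed from survey responses.
# The respondent counts below are hypothetical, not figures from the Capgemini study.

def nps(promoters: int, passives: int, detractors: int) -> float:
    """NPS = % promoters (scores 9-10) minus % detractors (scores 0-6)."""
    total = promoters + passives + detractors
    return 100 * (promoters - detractors) / total

ethical_org = nps(promoters=550, passives=300, detractors=150)      # +40
non_ethical_org = nps(promoters=280, passives=400, detractors=320)  # -4
print(f"NPS advantage: {ethical_org - non_ethical_org:.0f} points")  # 44-point gap
```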

How to address ethical questions

Given this scenario, how can organisations work towards building AI systems ethically? The findings suggest that organisations focusing on ethics in AI must take a targeted approach to make their systems fit for purpose. Capgemini recommends a three-pronged approach to building a strategy for ethics in AI that embraces all key stakeholders:

  1. For CXOs, business leaders and those with a remit for trust and ethics: Establish a strong foundation with a strategy and code of conduct for ethical AI; develop policies that define acceptable practices for the workforce and AI applications; create ethics governance structures and ensure accountability for AI systems; and build diverse teams to ensure sensitivity towards the full spectrum of ethical issues.
  2. For customer- and employee-facing teams, such as HR, marketing, communications and customer service: Ensure the ethical use of AI applications; educate and inform users to build trust in AI systems; empower users with more control and the ability to seek recourse; and proactively communicate on AI issues internally and externally to build trust.
  3. For AI, data and IT leaders and their teams: Make AI systems transparent and understandable to gain users’ trust; practice good data management and mitigate potential biases in data; and use technology tools to build ethics into AI (a minimal sketch of one such data-bias check follows this list).
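
As a minimal illustration of the kind of data-bias check item 3 refers to, and not a method prescribed by the Capgemini report, the sketch below compares positive-outcome rates across a sensitive attribute and flags large gaps. All records, field names and thresholds are hypothetical; real reviews would use production data and domain-specific fairness criteria.

```python
from collections import defaultdict

# Hypothetical sketch of a pre-deployment bias check: compare approval rates
# across groups and flag any group whose rate deviates sharply from the overall rate.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

def approval_rates(rows):
    counts, approvals = defaultdict(int), defaultdict(int)
    for row in rows:
        counts[row["group"]] += 1
        approvals[row["group"]] += row["approved"]
    return {g: approvals[g] / counts[g] for g in counts}

rates = approval_rates(records)
overall = sum(r["approved"] for r in records) / len(records)

for group, rate in rates.items():
    gap = rate - overall
    # The 0.2 threshold is an arbitrary illustration; real reviews need domain judgement.
    flag = "REVIEW" if abs(gap) > 0.2 else "ok"
    print(f"group {group}: rate={rate:.2f} gap={gap:+.2f} [{flag}]")
```

In this toy data, group A is approved 100 percent of the time and group B 33 percent of the time against an overall rate of 67 percent, so both groups are flagged for review.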

Artificial intelligence will recast the relationship between consumers and organisations, but this relationship will only be as strong as the ethics behind it. Ethical AI is the cornerstone upon which customer trust and loyalty are built.

About the research

The Capgemini Research Institute surveyed 1,580 executives in 510 organisations and over 4,400 consumers internationally to find out how consumers view the ethics and transparency of their AI-enabled interactions, and what organisations are doing to allay their concerns.