Trust is the key to unlocking the full potential of Generative AI (GenAI). However, a 2023 BearingPoint survey, Ethics in Generative AI, shows considerable mistrust among consumers of GenAI tools. In this white paper, we shed light on an innovative approach to increasing trust in GenAI by integrating ethical principles into its use. We focus on how organizations can establish user trust by dovetailing technological and organizational components.

Artificial Intelligence (AI) is being introduced in all areas of private and business life, with GenAI widely recognized as driving the rise of AI technology. While GenAI has the potential to drastically change how content is created, it also raises concerns about data privacy, copyright, and bias.

Definition of GenAI

  • GenAI is a subcategory of AI that can generate text, images, or other media using generative models.
  • GenAI models learn the patterns and structure of their input training data and generate new data with similar properties (illustrated in the short sketch after this list).
  • The content created may be audio, code, images, text, simulations, or videos.
  • Unlike other forms of AI, GenAI’s unique proposition is the creation of something new.
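
To make the definition concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library and the small GPT-2 model (illustrative choices on our part, not tools discussed in the white paper). It shows a generative model producing new text that resembles the patterns of its training data:

  from transformers import pipeline

  # Load a small, openly available text-generation model (GPT-2) as an
  # illustrative generative model.
  generator = pipeline("text-generation", model="gpt2")

  # The model has learned patterns from its training text and now produces
  # new text with similar statistical properties.
  result = generator("Trust is the key to", max_new_tokens=25)
  print(result[0]["generated_text"])

Each run can produce a different continuation of the prompt, which already hints at why generated output needs human scrutiny.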

Despite the explosive growth and media attention, our 2023 survey, Ethics in Generative AI, shows that the hype around GenAI and its applications is more theoretical than practical. One reason GenAI remains underused in practice is a distorted perception of the technology and the associated lack of trust.

However, trust forms the foundation of interpersonal as well as human-machine interactions. For companies offering access to GenAI tools, creating a trustworthy environment for using GenAI is one of the key elements in realizing the full potential of such applications.

In this white paper, we shed light on an approach to increasing trust in GenAI by integrating ethical principles into its use.

Our white paper, in short:

  • Trust in GenAI can be built by ensuring that the technology is applied in line with ethical standards.
  • The interaction of two separate levers is decisive – trust in AI technology and trust at the organizational level.
  • While the technological component ensures the lawful, ethical, and robust character of the technology, the organizational component enables the necessary competencies and support.

The development of trust in GenAI can be facilitated by operationalizing moral values such as human autonomy, fairness, prevention of harm, and explainability. For this, the interaction of the two separate levers – trust in AI technology and trust at the organizational level – is decisive, as only this interaction can establish the trust needed to encourage human-machine interaction with GenAI tools.

The technological component

Under the Ethics Guidelines for Trustworthy AI set up by the European Commission, an AI system should be lawful, ethical, and robust throughout its life cycle. At the technological level, this can be achieved through:

  • Objectivity: AI technology should be free of bias, distortion, and discrimination that could violate the rights, values, and interests of user groups.
  • Transparency: The models used should be comprehensible and explainable for developers, users, and other stakeholders.
  • Accountability: Liability for the use of GenAI tools must be clarified in advance, and mechanisms must be implemented to diagnose and avoid errors (human-in-the-loop; see the sketch after this list).
  • Security: AI algorithms should be robust, reliable, and resistant to internal and external errors, attacks, and disruptions.
  • Human centricity: AI technology must be used so that decisions are made with people at the center, and the generated output must be continually scrutinized from a human perspective.
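
As a purely illustrative sketch of the human-in-the-loop mechanism mentioned under accountability (the function names and workflow below are hypothetical, not taken from the white paper), a review gate can be placed between generation and publication:

  from dataclasses import dataclass

  @dataclass
  class Draft:
      prompt: str
      text: str

  def generate_draft(prompt: str) -> Draft:
      # Hypothetical placeholder for a call to any GenAI model.
      return Draft(prompt=prompt, text=f"Generated answer for: {prompt}")

  def human_review(draft: Draft) -> bool:
      # A human reviewer inspects the generated output before it is released.
      answer = input(f"Approve this output?\n{draft.text}\n[y/N] ")
      return answer.strip().lower() == "y"

  def publish(draft: Draft) -> None:
      print(f"Published: {draft.text}")

  if __name__ == "__main__":
      draft = generate_draft("Summarize our ethics guidelines")
      if human_review(draft):   # the human-in-the-loop gate
          publish(draft)
      else:
          print("Rejected: the output is sent back for revision.")

The point of such a gate is that no generated output reaches users without a documented human decision, which supports both error diagnosis and accountability.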

The organizational component

GenAI is perceived as more reliable if an organization’s members possess the competencies needed to handle the technology and are supported by a corporate culture in which to develop them. This requires:

  • Technical and soft skills, including ethical and compliance-related skills, so that employees gain confidence in handling GenAI.
  • A corporate culture that gives employees the space to engage intensively with GenAI and to acquire the skills and experience needed to use it responsibly.
  • A clear strategy and goals for GenAI, promoted by company management.

Have we sparked your interest? Detailed information can be found in our white paper, "Ethics in Generative AI".

Download

  • White paper: Ethics in Generative AI – successfully building trust (PDF, 449.45 KB)

Would you like more information?

If you would like more information about this insight, please get in touch with our experts, who would be pleased to hear from you.