March 2022

Artificial intelligence (AI) is transforming businesses and society. After years of conducting proofs of concept (POCs) and experimenting with the capabilities of AI, the field and its use cases are becoming more mature. However, this increase in maturity and the growing sophistication of models have added complexity and laid bare some of the shortcomings of current AI solutions, such as biased recommendations.


It is no longer sufficient that AI applications perform well – their predictions need to be fair and easily explainable to clients and regulators.

Tomas Chroust, Data & Analytics and AI Leader at BearingPoint

Recent developments have started to raise questions about how to ensure that these advanced systems are fair, robust and interpretable and adhere to security and privacy requirements. That means businesses must take action to get on the path toward responsible AI and comply with the rules of corporate digital responsibility. Given the growing importance of AI solutions and their inherent impact on a company’s reputational risk, adequate KPIs should be monitored as part of the company’s overall ESG reporting. These impacts need to be analyzed and measured, including complete lifecycle assessments of data and processes, to provide transparency across the entire business ecosystem. Transparency also includes sharing knowledge and educating employees and stakeholders about digital responsibility. For technical transparency, committees (e.g., ethics boards) are expected to align on their approach to “comply or explain” decisions.

Current AI algorithms such as deep neural networks can certainly provide accurate predictions. However, the difficulty of explaining their rationale and outcomes in a causal or deterministic way has led to these algorithms being called “black-box” models. The opacity created by their complex decision layers makes reasoning about model decisions, and determining the fairness of a solution, nearly impossible.

Effective AI solutions depend heavily on the data they are trained on, which is why datasets play a pivotal role in determining whether AI systems are ethically sound. Unbalanced and skewed datasets pose a severe problem, as client-facing AI solutions often serve a broad user group. If, for example, a demographic group or gender were omitted or underrepresented, how could the AI solution provide fair results? A first, simple safeguard is to check group representation and outcome balance before training, as sketched below.
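
As a minimal illustration (not part of any methodology described here), the following Python sketch assumes a hypothetical tabular loan dataset with illustrative `gender` and `approved` columns and uses pandas to surface two warning signs: a group that is barely represented, and a group whose outcomes differ sharply from the rest.

```python
import pandas as pd

# Hypothetical training data; the column names and values are
# illustrative assumptions, not taken from any real system.
df = pd.DataFrame({
    "gender":   ["female", "male", "male", "male", "male", "female"],
    "approved": [0, 1, 1, 0, 1, 1],
})

# Representation: is each group present often enough to be learned from?
print(df["gender"].value_counts(normalize=True))

# Outcome balance: does the label distribution differ sharply by group?
print(df.groupby("gender")["approved"].mean())
```

Checks like these do not prove fairness, but strong imbalances at this stage are a reliable early signal that the trained model will inherit them.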

High-profile cases of biased and unfair AI applications highlight the limitations of many current AI systems

AI models are immensely efficient at learning complex relationships embedded in large datasets and using the input data to make predictions. But that also means that AI models will learn any unintentional biases and inaccuracies embedded in the data used to train the algorithm. In addition to the risk of proprietary datasets having such unfavorable properties, companies must also carefully consider the risks, and take appropriate measures, when utilizing third-party datasets. Not only can these datasets suffer from bias and inaccuracies, but the data provider’s reputation and other related factors can also have severe implications for the company. Third-party datasets must therefore meet the regulatory and ethical requirements of the purchasing company and align with its standards, reputation and values.

Governance frameworks are required to fully understand black-box models and their inferences and predictions

AI algorithms condense an individual’s data into a class or category, predicting behavioral traits such as credit risk or health status without exposing the reasons why. Not only is there a lack of transparency, but this is also problematic because of possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, all of which may lead to unfair or wrong decisions.

By implementing Responsible AI, companies can ensure that decisions made or suggested by AI solutions are ethical, fair, robust and explainable. Techniques such as calculating feature importance or Shapley values help explain models by identifying the features most influential in their predictions. They also support causal inference and uncover hidden biases embedded in a model and the datasets used to train it. For example, suppose Shapley values indicate that a face detection algorithm depends heavily on the background of an image rather than the main characteristics of a human face. In that case, the issue could lie in the poor quality of the dataset or in embedded biases impacting the model’s performance and rendering it unsuitable.
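
As a hedged sketch of what such an analysis can look like in practice (the text does not prescribe any tooling; the open-source `shap` library and a public scikit-learn dataset are used here purely for illustration):

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative model on a bundled public dataset; any tabular model
# could stand in here.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)

# Global view: the mean absolute Shapley value per feature shows which
# inputs the model actually relies on -- the starting point for spotting
# suspicious dependencies (e.g., an image background instead of a face).
shap.plots.bar(shap_values)
```

If the resulting ranking puts an implausible feature at the top, that is the cue to inspect the training data rather than ship the model.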

As a result of advancing digital transformation, with more and more decisions being assisted or made by artificial intelligence, trustworthiness becomes essential. Given these developments, now is the time for firms to act and invest in a responsible AI program: to meet regulatory requirements, to minimize the legal, reputational and economic risks of AI use, and to build trust and a competent, compliant presence in the market. In doing so, firms also ensure that their AI systems are used fairly and ethically. Typically, a dedicated responsible AI initiative can be embedded into an overall ESG/sustainability program.

Frameworks and best practices are emerging to meet requirements for more robust and improved AI solutions

In September 2019, the first Swiss Global Digital Summit was held in Geneva to discuss ethics and fairness in the digital age. Rather than producing yet another declaration of principles, the event marked the starting point of the Swiss Digital Initiative, a foundation committed to embedding ethical standards in the digital world through concrete projects. Believing that digital transformation must always serve people and place their needs at the center, the Swiss Digital Initiative brings together stakeholders from the public and private sectors as well as from academia and civil society.

Thanks to the Digital Trust Label, organizations offering a digital service can, for the first time, have this service verified. The label was officially launched in January 2022 in front of over 35 journalists from around the world. Building on this global momentum, the Swiss Digital Initiative will also organize events at the World Economic Forum (WEF) 2022 in May, where first insights from using the Digital Trust Label will be shared. Swisscom and Swiss Re are the first pioneers to have one of their digital services tested and successfully labelled (#digitaltrustchampions). Credit Suisse is currently in the auditing process, and another seven companies have already registered for the labelling process. Successful certification sends a clear signal to users that an organization is serious about digital trust and willing to go the extra mile. Such a signal is not only effective for users but also provides guidance to organizations in a rapidly changing regulatory environment; it can also be seen as a soft-law instrument. Registration for certification is now open to all interested organizations.

What’s next?

Considering the developments within corporate digital responsibility, BearingPoint’s Responsible AI advisory team is ideally placed to support clients in establishing a governance framework for digital trust. Our combination of experts helps clients avoid reputational risk and promotes the responsible use of AI.

BearingPoint house of Responsible AI

The BearingPoint house of Responsible AI covers all the elements relevant to managing AI in a sustainable manner. Building up governance structures and highly specialized technological skills helps ensure robust, sustainable and trustworthy AI.

If you would like more information about our approach, please contact Tomas Chroust via the contact form below; he would be pleased to hear from you.
