Artificial Intelligence (AI) is gradually being adopted in many parts of our daily lives. McKinsey reports in its global survey that 56% of the companies surveyed adopted some form of AI in the past year, up from 50% in 2020[1]. However, implementing AI techniques properly and responsibly is often not straightforward. That is why we highlight responsible AI as one of the most important technology trends of the coming year. Responsible AI is AI that is implemented in an accountable, reliable, and secure way: it is compliant with the GDPR, mitigates security risks, and its workings can be explained to management and regulators.

Apart from these common elements of responsible AI, there are further aspects to consider when assessing whether AI is implemented responsibly. While often less visible to the outside world, or even to the developers of the AI models, they are certainly important to take into account.

One of these aspects is the energy used to train advanced AI models, often Deep Learning (DL) models. Calculating the energy consumed in the training phase is technically feasible but can be difficult in practice, especially when training happens in datacenters. This is partly due to the reluctance of datacenters and service providers to share this information, but also due to the lack of insight into energy consumption among the individuals and organizations who develop, train, and implement these models. Service providers tend not to report energy consumption, in order to keep the use of their datacenters as easy and frictionless as possible.

However, why is it even relevant to calculate the energy consumption? A recent study has shown that training a large model can emit the CO2 equivalent of thousands of flight kilometers[2]. With rapid growth expected in the coming years as AI gains acceptance and use, it is important to take the first steps toward limiting its energy consumption: calculating, tracking, and accounting for that consumption.
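As a rough illustration of such a calculation, the study in [2] estimates training energy from training time, hardware power draw, and datacenter overhead (PUE). The sketch below follows that back-of-envelope approach; the PUE value comes from [2], while the carbon-intensity constant and the function name are illustrative assumptions, not figures from this article:

```python
# Back-of-envelope estimate of training energy and CO2 emissions,
# in the spirit of the approach described in [2].
# PUE below is the industry-average datacenter overhead used in [2];
# the grid carbon intensity is a rough illustrative assumption.

PUE = 1.58            # power usage effectiveness (datacenter overhead)
KG_CO2_PER_KWH = 0.475  # assumed average grid carbon intensity

def training_emissions(hours, avg_device_watts, n_devices):
    """Estimate (kWh consumed, kg CO2e emitted) for one training run."""
    kwh = hours * avg_device_watts * n_devices * PUE / 1000.0
    return kwh, kwh * KG_CO2_PER_KWH

# Example: 24 hours on 8 accelerators drawing ~300 W each.
kwh, co2 = training_emissions(hours=24, avg_device_watts=300, n_devices=8)
```

Even a coarse estimate like this makes the energy cost of a training run visible, which is the precondition for tracking and accounting for it.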

To prevent AI and DL from becoming the next energy-guzzling technology, let’s keep three basic questions in mind when developing and training models:

1. Is an advanced AI model necessary for the problem at hand? 

Advanced AI models are often perceived as the perfect flagship for an organization, even when the organization is not ready to apply them effectively. In addition, conventional analytics or simpler algorithms can sometimes be sufficient to solve the problem.

2. What are the requirements (e.g. required accuracy) of the AI model for the problem in the real world?

Pre-define the required accuracy of the AI model; it does not always need to be perfect. Calculation time, energy consumption, and CO2 emissions increase exponentially with increasing accuracy.

3. Explore the attributes and variables of the AI model: do you really need all of them?

Besides reducing calculation time, energy consumption, and CO2 emissions, fewer attributes and variables also contribute to the controllability and explainability of the AI model.
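As a minimal illustration of the third question, one simple way to prune uninformative attributes is a variance filter: columns that barely vary carry little signal and can often be dropped before training. The function name and threshold below are illustrative assumptions, not a method prescribed in this article:

```python
import statistics

def keep_informative_columns(rows, threshold=0.01):
    """Return the indices of columns whose variance exceeds the threshold.

    rows: list of equal-length numeric feature vectors.
    A (near-)constant column contributes nothing to the model, so
    dropping it shrinks the model without hurting accuracy.
    """
    columns = list(zip(*rows))
    return [i for i, col in enumerate(columns)
            if statistics.pvariance(col) > threshold]

# Example: column 0 is constant, column 1 varies -> only column 1 is kept.
selected = keep_informative_columns([[1, 0.5], [1, 1.5], [1, 2.5]])
```

This is only the crudest form of feature selection, but it shows the principle: every attribute you drop shrinks the model, and with it the training cost.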

We should always keep the side effects of new technologies in mind, regardless of the opportunities they can bring, especially when we can challenge them with basic questions. Are you interested in discussing responsible AI in the broadest sense, or curious about how BearingPoint can support you in implementing AI?

Feel free to contact me or Michiel Musterd!

Authors

Frank Kloosterman
Management Analyst

frank.kloosterman@bearingpoint.com

Michiel Musterd
Manager

michiel.musterd@bearingpoint.com


[1] https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/global-survey-the-state-of-ai-in-2021

[2] https://arxiv.org/pdf/1906.02243.pdf