Businesses planning a short-term integration of generative AI into processes and systems must fully assess the risks it brings.

In 2022, we published our Tech Trends 2023 Radar. In it, we identified generative AI as the top trending topic for businesses to follow.

A year later, the technology has mushroomed. Applications such as OpenAI’s ChatGPT are dominating headlines and breaking into mainstream use.

We have consulted with businesses across a variety of industries to assess the size and scope of generative AI’s potential. We’ve learned that, as the technology’s phenomenal growth continues largely unchecked, businesses and their clients face a range of opportunities and threats.

As we look closer at this surge in the popularity of generative AI, we must ask these three key questions:

  1. What are the key issues around generative AI that must be resolved?
  2. What are the potential use cases of ChatGPT and wider generative AI?
  3. Will ChatGPT dominate over the long term?

What are the key issues around generative AI that must be resolved?

OpenAI’s interface offers many exciting possibilities for enterprise use, but there are a number of limitations that business users should be aware of before integrating the application into their everyday systems and processes.


Regulation and liability

At the highest level, there has not yet been any definitive ruling on the role governments and other bodies should play in protecting citizens and organizations from liability and misuse (for example, a potential expansion of what is covered by the European Union’s Digital Services Act). In the corporate context, ChatGPT is a professional tool that requires dedicated resources for training and governance.

Intellectual property

As it relates to artificial intelligence, IP and copyright law poses many open questions around authorship and eligibility for protection. Even AI-assisted, rather than AI-created, works may not enjoy legal protection. There is also the question of whether IP-protected works were used to train the AI in the first place.

Quality of source information

Like a search engine, ChatGPT composes its outputs from information gathered across the internet – and, in enterprise deployments, from corporate data. But with no verification method built in, some of the material it draws on to answer questions is outdated, inaccurate or, at worst, complete fiction – issues that can degrade the quality of its output.

Implicit bias

Through no fault of their own, the humans generating the data used to train an AI may be operating with unconscious bias, or may unwittingly misrepresent the full data set. AI systems can then reproduce those same biases when they write content or analyze synthetic data.

Feeding the model specific, high-quality data is how the business potential of ChatGPT is maximized. This is where we advise clients on finding the right use cases.

What are the potential uses of ChatGPT?

Despite these concerns, there are still countless uses for ChatGPT that can benefit businesses, saving time and valuable resources through efficient text outputs.

A retailer that’s expanding product lines to incorporate hundreds or even thousands of unique products can use generative AI to populate the platform with optimized product names and tags, using only the product description and relevant features like size and color.
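As a sketch of how that pipeline might look, the following Python snippet assembles a prompt from structured product data before it is sent to a generative model. The function name, prompt wording and fields are illustrative assumptions, not a specific vendor API:

```python
def build_tagging_prompt(description: str, attributes: dict) -> str:
    """Compose a prompt asking a generative model to propose an
    optimized product name and search tags from structured data.
    Illustrative only - the wording and fields are assumptions."""
    attr_lines = "\n".join(
        f"- {key}: {value}" for key, value in sorted(attributes.items())
    )
    return (
        "You are naming products for an online retailer.\n"
        f"Product description: {description}\n"
        f"Attributes:\n{attr_lines}\n"
        "Return a concise product name and five search tags."
    )

prompt = build_tagging_prompt(
    "Lightweight waterproof hiking jacket with taped seams",
    {"size": "M", "color": "forest green"},
)
print(prompt)
```

The resulting string would then be submitted to the model of choice; the point is that only the product description and a handful of attributes are needed as input to generate names and tags at catalogue scale.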

Workers trying to present long, drawn-out pieces of information can input them into the AI and ask for a summary, provided they then double-check the results for accuracy.

Programmers facing time or resource pressure can ask ChatGPT to write and debug code. Alongside the code itself, the tool can describe the role each piece plays within the whole.

The tool has already shown itself to have myriad uses. But when it comes to realizing the potential of generative AI, ChatGPT is only the start. We think generative AI has a place at the heart of the enterprise, helping businesses create efficiencies and save big on costs. But the issues around governance must be addressed before the hype swallows up our chances of a well-oiled AI/business interface.

ChatGPT may be the tool that’s broken through, but artificial intelligence is in many cases already fully integrated into enterprise processes.

  • Customer service teams use AI-powered chatbots on their websites to answer queries and direct customers to the relevant page.
  • Predictive maintenance algorithms comb through data from connected machinery such as factory production lines, or building infrastructure like lifts, to detect potential errors and breakdowns before they occur.
  • An AI trained with anonymized banking data can potentially detect fraudulent or other suspicious activity like identity theft.
  • Virtual assistants like Alexa live in our homes and cars, turning on the lights and helping stock the fridge.
  • Autonomous vehicles that operate without driver assistance are steadily moving through the levels of trial phases towards wider rollout.
  • The Artificial Intelligence Virtual Artist (AIVA) reads sheet music from the world’s greatest composers, and predicts what ‘should’ come next by writing its own piece.
  • DeepFaceLab is the software that allows users to ‘deepfake’ or create videos that purport to feature particular persons, despite being digitally created from scratch.

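To illustrate the predictive-maintenance item above, here is a minimal sketch of threshold-based anomaly detection over machine sensor readings. It is a toy stand-in, under assumed names and parameters, for the far richer models production systems actually use:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag indices where a reading deviates from its trailing-window
    mean by more than `threshold` standard deviations.
    A simplified sketch of predictive-maintenance anomaly detection."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Stable vibration levels followed by a sudden spike.
sensor = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 5.0]
print(flag_anomalies(sensor))  # the spike at index 6 is flagged
```

Flagging a deviation before it becomes a breakdown is the essence of the approach; real deployments replace the trailing-window statistics with trained models over many sensor channels.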
Some of these tools have been integrated into business use more steadily, because they have a more concrete purpose and a firm set of controls that narrowly define the extent of their operations. Others are prone to misuse and could cause massive damage in a corporate context – DeepFaceLab in particular. ChatGPT should be the wake-up call that businesses need: employees are already deep into exploring its uses, so defining and enforcing controls must be prioritized now.

Will ChatGPT dominate over the long term?

Being one of the first iterations of generative AI apps to hit the market certainly gives ChatGPT an edge, but this doesn’t necessarily signal its dominance for years to come.

We liken the current sprint to market for generative AI platforms to the early days of search engines. Like search engines, ChatGPT has a variety of use cases which cover a wide array of domains and subjects. The key to harnessing the potential of ChatGPT is giving it the right information to use – and ChatGPT draws on all internet-published content up to 2021.

Google may be the dominant name in search today, but it was by no means first to market. Platforms such as W3Catalog, Yahoo! and AltaVista employed different ways to retrieve search results, and operated differently in many other ways – each preceding Google by some years.

There is still time for a more advanced, more sophisticated and even more accurate generative AI service than ChatGPT to emerge. For example, unlike always-on search engines, ChatGPT does not have a live internet connection, so its knowledge base is not regularly updated. It cannot answer queries like ‘what time is it?’, nor can it provide the most up-to-date information. That said, a fully connected version can’t be far away.

A recent release shows another quantum leap in ChatGPT-style potential: a team at Together has released an open-source version under the Apache 2.0 license, with 20 billion parameters and trained on 43 million instructions.

BearingPoint believes enterprises must take the lead and address global governance of generative AI

On a global level, some countries are putting policies in place to protect employees, employers and markets against some of the more pressing potential challenges of AI, like “risks to human freedom and autonomy” and “consumer protection”.

The European Union is using ‘horizontal’ legislation to ringfence a broad level of regulation around the different applications of artificial intelligence. This means it hasn’t ruled anything permanently in or out; it’s a framework that’s flexible enough to adjust expectations as the technology develops and more entities become involved. On the other hand, we’ve seen China take a more ‘vertical’ approach by enforcing laws around specific use cases of AI, to protect workers and clamp down on price fixing.

While these larger projects are forming, it is the responsibility of the enterprise to manage the risks for now. But what still needs to be addressed so that this can be done comfortably and efficiently within an operational and strategic framework?

1. Protect critical business infrastructure and services. Perform stress tests and practice rapid response against potential AI-enabled intrusions.
2. Invest to safeguard your business against the impact of deepfake technology – in detection, protection and response capabilities, tools and processes.
3. Install a governance framework and put the processes in place to steer and control your use of AI across ethics, compliance, legal and risk, as you assess each business case.
4. Consider the environmental impact of training an AI – the carbon footprint generated by feeding it data, and whether you can create a sustainable solution.
5. Monitor trends regarding the regulation of AI and the various decisions being made at boardroom level, as well as by governments or other entities around the world.

We continue to consult with businesses on building a robust response to generative AI’s integration into the workplace. We want to see companies equipped with the tools, and the know-how, to keep a tight rein on ChatGPT and similar platforms, and to usher in the next technological revolution on their own terms.

Would you like more information?

If you would like more information about this insight, please get in touch with our experts, who would be pleased to hear from you.