In 2022, we published our Tech Trends 2023 Radar. In it we identified generative AI as the top trending topic for businesses to follow.
A year later, the technology has mushroomed. Applications such as OpenAI’s ChatGPT are dominating headlines and breaking into mainstream use.
We have consulted with businesses in a variety of industries to assess the size and scope of generative AI’s potential. We’ve learned that, as the technology grows largely unchecked, businesses and their clients face both significant opportunities and real threats.
OpenAI’s interface offers many exciting possibilities for enterprise use, but there are a number of limitations that business users should be aware of before integrating the application into their everyday systems and processes.
At the highest level, there has been no definitive ruling on the role governments and other bodies should play in protecting citizens and organizations from liability and misuse (for example, a potential expansion of what is covered by the European Union’s Digital Services Act). In the corporate context, ChatGPT is a professional tool that requires dedicated resources for training and governance.
IP and copyright law as it relates to artificial intelligence raises open questions about authorship and eligibility for protection. Even AI-assisted, rather than AI-created, works may not enjoy legal protection. There is also the further question of whether IP-protected works were used to train the AI.
ChatGPT composes its outputs from patterns in its training data, which is drawn from across the internet and, in enterprise deployments, from corporate data. But without a verification method built in, some of the information it draws on is outdated, inaccurate or, at worst, complete fiction – issues which can degrade the quality of its output.
Through no fault of their own, the humans who generate the data used to train an AI may carry unconscious biases or unwittingly misrepresent the full data set. An AI system trained on that data can reproduce the same biases when it writes content or analyzes synthetic data.
Feeding the model specific, high-quality data maximizes ChatGPT’s business potential. This is where we advise clients on finding the right use cases.
Despite these concerns, there are still countless uses for ChatGPT which can potentially benefit businesses, saving time and valuable resources with efficient text outputs.
A retailer that’s expanding product lines to incorporate hundreds or even thousands of unique products can use generative AI to populate the platform with optimized product names and tags, using only the product description and relevant features like size and color.
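A minimal sketch of how such a product-listing pipeline might look in Python. The prompt format, the model name and the helper names here are illustrative assumptions for this article, not a recommended implementation; the actual API call (via the `openai` client library) would require an API key and is kept separate from the prompt-building logic.

```python
# Illustrative sketch only: generating product names and tags from a
# description plus attributes. All names and the prompt wording are
# assumptions made for this example.
from dataclasses import dataclass


@dataclass
class Product:
    description: str
    size: str
    color: str


def build_prompt(product: Product) -> str:
    """Compose a single instruction the chat model can answer in one turn."""
    return (
        "Suggest an optimized product name and five search tags.\n"
        f"Description: {product.description}\n"
        f"Size: {product.size}\n"
        f"Color: {product.color}"
    )


def suggest_listing(product: Product) -> str:
    # Requires the `openai` package and an API key; imported here so the
    # prompt-building logic above can be exercised without network access.
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice, not from the article
        messages=[{"role": "user", "content": build_prompt(product)}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    p = Product("Breathable running shoe with cushioned sole", "US 9", "navy")
    print(build_prompt(p))
```

Keeping prompt construction separate from the API call makes the deterministic part easy to review and test, which matters when hundreds or thousands of products are run through the same template.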
Workers who need to present lengthy, complex information can input it into the AI and ask for a summary, provided they then double-check the results for accuracy.
Programmers facing time or resource pressure can ask ChatGPT to write and debug code. Alongside the pieces of code, the tool can describe the role that each piece of code plays within the whole.
The tool has already shown itself to have myriad uses. But when it comes to realizing the potential of generative AI, ChatGPT is only the start. We think generative AI has a place at the heart of enterprise, helping businesses create efficiencies and save big on costs. But the issues around governance must first be addressed, before the hype outruns the controls needed for a well-functioning AI/business relationship.
Some of these tools have been more steadily integrated into business use, based on having a more concrete purpose and a firm set of controls that more narrowly define the extent of their operations. Others are prone to misuse and could cause serious damage in a corporate context – DeepFaceLab being a notable example. ChatGPT should be the wake-up call businesses need: employees are already deep into exploring its uses, so organizations must move quickly to define and enforce controls.
There is still time for a more advanced, more sophisticated and even more accurate generative AI service than ChatGPT to emerge. For example, unlike always-on search engines, ChatGPT does not have a live internet connection, so its knowledge base is not regularly updated. It cannot answer queries like ‘what time is it?’, nor can it provide the most current information. That being said, a fully connected version can’t be far away.
A recent release shows another leap in generative AI’s potential: a team at Together has released an open-source alternative under the Apache 2.0 license, with 20 billion parameters and trained on 43 million instructions.
On a global level, some countries are putting policies in place to protect employees, employers and markets against some of the more pressing potential challenges of AI, like “risks to human freedom and autonomy” and “consumer protection”.
The European Union is using ‘horizontal’ legislation to establish a broad regulatory framework covering the different applications of artificial intelligence. This means it hasn’t ruled anything permanently in or out; it’s a framework that’s flexible enough to adjust expectations as the technology develops and more entities become involved. On the other hand, we’ve seen China take a more ‘vertical’ approach by enforcing laws around specific use cases of AI, to protect workers and clamp down on price fixing.
We continue to consult with businesses on the potential strengths and weaknesses of a robust response to generative AI’s integration into the workplace. Our desire is to see companies equipped with the tools, as well as the knowhow, to keep a tight rein over ChatGPT and similar platforms, and usher in the next technological revolution on their own terms.