What’s all the fuss about ChatGPT?
OpenAI’s GPT, like Google’s PaLM 2 and Meta’s LLaMA, is an example of a large language model (LLM): an advanced artificial intelligence system designed to understand and generate human-like language. At great expense in money and computing resource, these models have been ‘trained’ on vast amounts of data to perform tasks such as answering questions, generating text, translating text and more.
For example, GPT-4, which was released in March 2023, is widely reported to have been trained on significantly more data than its predecessor, GPT-3, which was trained on 45 terabytes of text data.
This enables it to understand prompts and create contextually appropriate responses, making it valuable for a variety of applications such as virtual assistants and content generation.
Among the consumer-facing generative AI applications that have been launched, ChatGPT, developed by OpenAI, is built on the GPT model, whilst Google Bard is built on the PaLM 2 model. Both are particularly suited to language processing and can generate text, images or audio in response to specific prompts.
However, despite their enormous ‘knowledge’, the results that LLMs provide will only ever be based on their pre-trained models. For example, ChatGPT’s training data only includes sources up until September 2021. If you asked it to name the current Prime Minister, it would still believe Boris Johnson is in post, although it would caveat that with a line about verifying this with up-to-date sources.
What’s more, most LLMs do not have access to real-time information, so they cannot even tell you the current time.
So, how can you trust what GPT tells you?
There are always challenges when it comes to the adoption of revolutionary new technologies, and one of the most important considerations when deploying generative AI is its ability to tell the truth. To put it simply, LLMs provide information, but they are incapable of deciphering what is true and what is not. Any organisation needs to trust that the responses it gives to end users are accurate. So, when it comes to generative AI applications, there needs to be a process for testing the truthfulness, appropriateness and bias of the information.
For example, if a consumer uses a generative AI chatbot hosted on a retailer’s website to ask how to self-harm, the response cannot be a fact-based one that advises the person on how to cause harm to themselves. While such a response might be truthful, it would be wholly inappropriate and would clearly create serious issues for the organisation.
So how can GPT be useful to an organisation?
"It is important to recognise that when we talk about how an organisation can deploy generative AI, we are not talking about simply deploying ChatGPT or Bard".
These examples demonstrate the limitations of consumer-facing generative AI applications in providing valuable and accurate responses. While they may be valuable to the public, they can be damaging when it comes to decision-making. And these models will only ‘learn’ more once the developer has invested in retraining them, or other data sources have been embedded or indexed.
So, it is important to recognise that when we talk about how an organisation can deploy generative AI, we are not talking about simply deploying ChatGPT or Bard. In almost all circumstances we are also not talking about creating brand new models and training them on new data, something that would be cost-prohibitive for most organisations. As the leading Solutions Integrator, we are harnessing the power of existing LLMs, like GPT-4 (OpenAI’s latest model), and embedding indexed enterprise data to deliver more reliable, trustworthy and accurate responses that meet an organisation’s demands.
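As a concrete illustration, the sketch below shows the general shape of this pattern, often called retrieval-augmented generation: the organisation’s documents are indexed, the most relevant passages are retrieved for each question, and those passages are supplied to the LLM alongside the question so the answer is grounded in the organisation’s own data. This is a minimal sketch under assumed names; the toy keyword index and the call_llm stub are illustrative placeholders, not Insight’s actual implementation or any real API.

# Minimal sketch of retrieval-augmented generation (RAG).
# All names are illustrative; call_llm stands in for a real LLM API
# call (e.g. to GPT-4) and is a placeholder, not a real function.

def tokenise(text: str) -> set[str]:
    """Lowercase word set: a toy stand-in for a proper embedding index."""
    return set(text.lower().split())

# 1. Index the organisation's own documents (here, a toy in-memory list).
documents = [
    "Returns are accepted within 30 days with proof of purchase.",
    "Our support line is open 9am-5pm, Monday to Friday.",
    "Delivery to UK addresses takes 3-5 working days.",
]
index = [(doc, tokenise(doc)) for doc in documents]

def retrieve(question: str, k: int = 2) -> list[str]:
    """2. Retrieve the k passages that best overlap with the question."""
    q_tokens = tokenise(question)
    scored = sorted(index, key=lambda item: len(q_tokens & item[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

def answer(question: str) -> str:
    """3. Ground the model's answer in the retrieved enterprise data."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)  # placeholder for a real LLM API call

def call_llm(prompt: str) -> str:
    # Stub so the sketch runs end to end without an API key.
    return f"[LLM response grounded in prompt of {len(prompt)} characters]"

print(answer("How long do I have to return an item?"))

Because the model only ever sees retrieved passages at query time, the enterprise data itself stays outside the model and can be updated without retraining.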
In the case of the user who asked about self-harm, indexing the retailer’s own data and embedding it in an LLM would lead to a very different outcome. The question could be intercepted immediately and flagged to the police, or a list of helpline phone numbers could be provided, rather than the fact-based answer that the model learnt during its training.
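One simple way such an interception layer might work is sketched below: incoming questions are screened against sensitive categories before they ever reach the model, and matching queries receive a pre-approved safeguarding response instead. The categories, keyword lists and response text are illustrative placeholders only; a production system would use a proper content-moderation classifier and vetted copy.

# Illustrative pre-filter that intercepts sensitive queries before the LLM.
# Keyword lists and response text are placeholders, not production rules.

SENSITIVE_KEYWORDS = {
    "self_harm": ["self-harm", "hurt myself", "end my life"],
}

SAFE_RESPONSES = {
    "self_harm": (
        "It sounds like you may be going through a difficult time. "
        "Please consider talking to someone: [helpline number here]."
    ),
}

def classify(question: str) -> str | None:
    """Return the sensitive category a question falls into, if any."""
    q = question.lower()
    for category, keywords in SENSITIVE_KEYWORDS.items():
        if any(keyword in q for keyword in keywords):
            return category
    return None

def handle(question: str) -> str:
    category = classify(question)
    if category is not None:
        # Intercepted: flag the query and return the vetted response
        # instead of whatever the base model learnt during training.
        return SAFE_RESPONSES[category]
    return answer_with_llm(question)

def answer_with_llm(question: str) -> str:
    return f"[LLM answer to: {question}]"  # stand-in for the normal RAG path

print(handle("I want to hurt myself"))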
At this point it is prudent to touch upon data security, because if you are embedding your valuable data into an LLM workflow you need to know it is secure. We have already heard stories of employees using ChatGPT to check code and, in doing so, unwittingly giving away corporate IP, which is why using these consumer-facing versions is simply not advisable. Running your data in parallel with the model enables you to keep your IP secure, while delivering a much more accurate response from the application.
How Insight can help
As Insight focuses on digital transformation in the public sector, it is natural to consider the role that artificial intelligence, and large language models in particular, will play. Insight can work with organisations to help them understand the advantages of incorporating generative AI safely. We are already delivering customer workshops that discuss many of the issues raised in this article, and we can also work directly with individual clients, existing or potential, to help them navigate the generative AI maze effectively.
Learn more about how to get your organisation AI-ready: https://uk.insight.com/en_GB/what-we-do/campaigns/generative-ai-roadshow.html