Since the official announcement in November 2022, there has been an enormous amount of buzz and excitement about OpenAI’s ChatGPT. Industry experts are publishing articles about it, social networks are filled with comments about it, and local, national, and global news organisations are reporting stories about it. From students using ChatGPT to complete assignments for class to me getting a little help from ChatGPT to write my latest ‘Virtual Viewpoint’ column, it certainly seems like everyone is testing it out.

As a specialist within the conversational AI space, Creative Virtual is excited about what ChatGPT and the technology behind it bring to our industry. We’ve been having lots of discussions with our customers and partners, as well as internally, about how this can deliver value to businesses using our V-Person™ solutions.

ChatGPT is an extremely powerful language model that is changing quickly and will continue to get more sophisticated. However, like any deep neural network, it is a black box that is hard – if not impossible – to control. Using it as a generative tool means you can't steer in detail what it's going to say, and you can't deliver reliable, accurate self-service tools if you can never be certain what response might be given.

These limitations don’t mean you should write off ChatGPT or GPT-3 (and future versions) as completely ineffective in the realm of customer service and employee support. In some cases, a business might be willing to accept a certain amount of risk in exchange for very efficiently making large chunks of information available to a chatbot. There are also ways to use the language power of GPT in a non-generative way, as we’ll explore in this post.

In any case, ChatGPT can only ever be one piece of the puzzle, alongside content management, integration, user interface, and quality assurance. ChatGPT alone cannot replace all of that.

One of the design features of Creative Virtual’s conversational AI platform is the flexibility to integrate with other systems and technologies, including multiple AI engines such as transformer models like GPT-3. We are currently exploring the best way to interface with this model and use it to deliver value to our customers and partners.

Let’s take a closer look at ChatGPT, how it works, and the ways it can be used to deliver customer service and employee support.

What kind of AI is ChatGPT and how is that different from how V-Person works?

ChatGPT is a transformer model – a type of neural network trained to predict text continuations. It is built on a variation of GPT-3, OpenAI’s large language model (LLM) trained on a wide range of selected text and code. It is extremely powerful when it comes to language understanding and common world knowledge. However, its knowledge is not limitless, so on its own it will not have large parts of the information needed for specific chatbot use cases. Its world knowledge is also frozen at the time it was trained – currently it doesn’t know anything about events after 2021.
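To make ‘predicting text continuation’ a little more concrete, here is a minimal sketch using the open-source GPT-2 model from the Hugging Face transformers library as a small, locally runnable stand-in for GPT-3; the prompt and settings are purely illustrative.

```python
# A minimal sketch of text continuation, using the open-source GPT-2
# model as a small stand-in for GPT-3 (prompt and settings illustrative).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "To reset your online banking password, first"
# The model simply continues the text, one predicted token at a time.
completions = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(completions[0]["generated_text"])
```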

V-Person uses a hybrid approach to AI, combining machine learning, deep neural networks, and a rule-based approach to natural language processing (NLP). The machine learning component is integrated with workflow functionality within our V-Portal™ platform so enterprises can decide the best configuration for their conversational AI tool and improve it in a controlled and reliable way. At the same time, natural language rules can be used as an ‘override’ to the machine learning component to ensure accuracy, resolve content clashes, and deliver very precise responses when needed.

We developed this approach to give our customers control over the AI and enable accurate, reliable chatbot and virtual agent deployments. Using natural language rules as a fallback option to fix occasional issues and fine-tune responses is much more efficient than trying to tweak training data.
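As a purely illustrative sketch (not the actual V-Portal implementation), the general pattern of letting hand-written rules override a statistical intent classifier might look something like this:

```python
# Illustrative sketch only - not the actual V-Portal implementation.
# The idea: a deterministic rule layer can override a statistical
# intent classifier whenever precision matters more than coverage.
import re
from typing import Optional

RULES = [
    # (pattern, intent) - hand-written natural language rules
    (re.compile(r"\bcancel\b.*\border\b", re.I), "cancel_order"),
    (re.compile(r"\breset\b.*\bpassword\b", re.I), "reset_password"),
]

def rule_based_intent(utterance: str) -> Optional[str]:
    for pattern, intent in RULES:
        if pattern.search(utterance):
            return intent
    return None

def ml_intent(utterance: str) -> str:
    # Placeholder for a trained classifier (e.g. a neural network).
    return "general_enquiry"

def resolve_intent(utterance: str) -> str:
    # Rules act as an override for the machine learning component.
    return rule_based_intent(utterance) or ml_intent(utterance)

print(resolve_intent("Please cancel my order from yesterday"))  # cancel_order
```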

Can businesses use ChatGPT to directly answer questions from customers and employees?

At the time of writing, ChatGPT is still in a research preview stage and highly unstable, with no clean API available, so it’s not yet possible for businesses to use it in this way. However, it is possible with its predecessor, InstructGPT, which does have a usable API. It’s also worth noting that GPT-3 is high quality only in English and a few other languages, which is another potential limitation for global use.
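For context, a minimal sketch of calling an InstructGPT-family model through OpenAI’s completions endpoint might look like the snippet below; it assumes the openai Python library as it exists at the time of writing, and the model name, prompt, and parameters are illustrative only.

```python
# A minimal sketch of calling an InstructGPT-family model through OpenAI's
# completions endpoint (openai Python library as of early 2023).
# Model name, prompt, and parameters are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",          # an InstructGPT-family model
    prompt="Answer briefly: what are your opening hours?",
    max_tokens=60,
    temperature=0.2,                   # lower temperature = less variation
)
print(response["choices"][0]["text"].strip())
```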

The biggest issue with using ChatGPT to directly answer questions from customers and employees is that it does not give you control over how it will respond. It could give factually incorrect answers, give answers that don’t align with your business, or respond to topics you’d prefer to avoid within your chatbot. This could easily create legal, ethical, or branding problems for your company.

What about simply using ChatGPT for intent matching?

There are two ways in which GPT-3 could be used for intent matching.

The first uses GPT-3 embeddings and trains a fairly simple neural network classifier on top of them. The second also uses GPT-3 embeddings but runs a simple nearest neighbour search on top of them instead. We are currently exploring the latter option and expect to get some quality gains from that approach.
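A rough sketch of the nearest neighbour option is shown below; the intent examples are made up, and the embedding model name is simply the one OpenAI offers at the time of writing.

```python
# A rough sketch of intent matching with GPT-3 embeddings plus nearest
# neighbour search (openai Python library as of early 2023).
# Intent examples are illustrative only.
import os
import numpy as np
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

INTENT_EXAMPLES = {
    "cancel_order": "I want to cancel my order",
    "reset_password": "How do I reset my password?",
    "opening_hours": "What time do you open?",
}

def embed(texts):
    response = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in response["data"]])

intent_names = list(INTENT_EXAMPLES)
intent_vectors = embed(list(INTENT_EXAMPLES.values()))

def match_intent(utterance: str) -> str:
    query = embed([utterance])[0]
    # Cosine similarity against each intent example, then pick the nearest.
    scores = intent_vectors @ query / (
        np.linalg.norm(intent_vectors, axis=1) * np.linalg.norm(query)
    )
    return intent_names[int(np.argmax(scores))]

print(match_intent("I'd like to cancel the order I placed yesterday"))
```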

Can I just provide a few documents and let ChatGPT answer questions by ‘looking’ at those?

Yes, this is absolutely possible. In fact, we have offered this functionality with V-Person for several years without needing GPT, but none of our clients have been interested. GPT-3 improves the quality of this in most cases, but it also comes with a higher risk of being very wrong. If an organisation is interested in using GPT-3 in this way, we can support it within our platform, but what we currently offer already enables us to deliver document-based question answering.
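One common pattern for this kind of document-based question answering is to retrieve the most relevant passage with embeddings and then ask a completion model to answer using only that passage. The sketch below is illustrative only, with made-up passages and an untuned prompt.

```python
# A simplified sketch of document-grounded question answering: retrieve
# the most relevant passage with embeddings, then ask a completion model
# to answer using only that passage. Passages and prompt are illustrative.
import os
import numpy as np
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PASSAGES = [
    "Orders can be cancelled free of charge within 24 hours of purchase.",
    "Our support line is open Monday to Friday, 9am to 5pm.",
]

def embed(texts):
    response = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in response["data"]])

passage_vectors = embed(PASSAGES)

def answer_from_documents(question: str) -> str:
    query = embed([question])[0]
    # ada-002 vectors are unit length, so a dot product acts like cosine similarity.
    best = PASSAGES[int(np.argmax(passage_vectors @ query))]
    prompt = (
        "Answer the question using only the passage below. "
        "If the passage does not contain the answer, say you don't know.\n\n"
        f"Passage: {best}\n\nQuestion: {question}\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=80, temperature=0
    )
    return response["choices"][0]["text"].strip()

print(answer_from_documents("Can I cancel an order I placed this morning?"))
```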

It’s important to keep in mind that using ChatGPT to answer questions from documents only addresses one aspect of the support expected from a virtual agent. For example, GPT ‘looking’ at a document will never call a transaction-triggering API on its own.

Is it possible to give GPT-3 a few chat transcripts as examples and let it work from them?

You can provide GPT-3 with sample transcripts and tell it to mimic that chat behaviour. But unless you want a chatbot with a very narrow scope, a few transcripts won’t be enough. If there are complex dialogue flows that need to be followed, you’ll need to provide at the very least one example of each possible path – most likely you’ll need more.
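As an illustration, a few-shot prompt built from transcripts might look something like this (the transcripts, model name, and parameters are made up for the example):

```python
# A sketch of few-shot prompting with chat transcripts: example
# conversations are placed in the prompt and the model is asked to
# continue in the same style. Transcripts and settings are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

TRANSCRIPTS = """\
Customer: I'd like to cancel my order.
Agent: Of course. Can you confirm the order number, please?

Customer: My card was charged twice.
Agent: I'm sorry about that. I'll raise a refund request for the duplicate charge.
"""

def reply_like_an_agent(customer_message: str) -> str:
    prompt = (
        "Continue the conversation in the same style as the transcripts.\n\n"
        f"{TRANSCRIPTS}\nCustomer: {customer_message}\nAgent:"
    )
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=60,
        temperature=0.3, stop=["Customer:"]
    )
    return response["choices"][0]["text"].strip()

print(reply_like_an_agent("I never received my delivery."))
```

A fluent reply generated this way does not mean any backend action has actually been triggered, which is exactly the problem discussed next.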

This raises some difficult questions. How do you maintain those if something changes? If you try to use only real agent transcripts, how do you ensure that you have complete coverage? How do you deal with personalised conversations and performing transactions that require backend integration? It may not be too difficult to train the model to say ‘I have cancelled that order for you’ at the right time, but that doesn’t mean GPT will have actually triggered the necessary action to cancel the order.

When you really examine this approach it becomes clear that this is not an efficient way to build and maintain an enterprise-level chatbot or virtual agent. It also doesn’t address the need to have integration with backend systems to perform specific tasks. Today our customers achieve the best ROI through these integrations and personalisation.

What other key limitations exist with using ChatGPT to deliver customer service or employee support?

Using a generative, ChatGPT-only approach to your chatbot does not give you the opportunity to create a seamless, omnichannel experience. To do that, you need to be able to integrate with other systems and technologies, such as knowledge management platforms, ticketing systems, live chat solutions, contact centre platforms, voice systems, real-time information feeds, multiple intent engines, CRMs, and messaging platforms. These integrations are what enable a connected and personalised conversational AI implementation.

With ChatGPT there is no good way to create reliable, customised conversation flows. These flows are regularly used within sophisticated conversational AI tools to guide users step-by-step through very specific processes, such as setting up a bank account. This goes a step further than simply creating a conversational engagement, employing slot-filling functionality, entity extraction, and secure integrations.
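To illustrate the difference, a slot-filling flow is essentially a small state machine that collects required pieces of information before anything is sent to a secure backend integration. The sketch below is generic and illustrative, not how V-Portal implements it:

```python
# An illustrative sketch (not Creative Virtual's implementation) of a
# slot-filling flow: the chatbot collects required pieces of information
# step by step before calling a secure backend integration.
from dataclasses import dataclass, field
from typing import Dict, Optional

REQUIRED_SLOTS = ["full_name", "date_of_birth", "account_type"]
PROMPTS = {
    "full_name": "What is your full name?",
    "date_of_birth": "What is your date of birth?",
    "account_type": "Which account type would you like: current or savings?",
}

@dataclass
class AccountSetupFlow:
    slots: Dict[str, str] = field(default_factory=dict)

    def next_prompt(self) -> Optional[str]:
        # Ask for the first slot that has not been filled yet.
        for slot in REQUIRED_SLOTS:
            if slot not in self.slots:
                return PROMPTS[slot]
        return None  # all slots filled - ready to call the backend

    def fill(self, slot: str, value: str) -> None:
        # Entity extraction and validation would normally happen here.
        self.slots[slot] = value

flow = AccountSetupFlow()
flow.fill("full_name", "Jane Smith")
print(flow.next_prompt())  # -> asks for the date of birth next
```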

You also won’t have the ability to optimise the chatbot for the channels and devices on which it will be used. This includes using rich media – such as diagrams, images, videos, and hyperlinks – within answers. For example, you can’t include an image carousel to display within a messenger platform. You won’t be able to show photos or drawings to help with a new product set-up. You don’t have the ability to display clickable buttons with options for the user.
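As a generic illustration, a channel-optimised rich response is typically a structured payload rather than plain text; the field names below are made up and not tied to any particular messaging platform’s API.

```python
# Purely illustrative: a rich response with an image carousel and
# clickable buttons, expressed as a structured payload rather than
# plain text. Field names are generic, not any specific platform's API.
rich_response = {
    "text": "Here are the set-up guides for your new router:",
    "carousel": [
        {"title": "Quick start", "image_url": "https://example.com/quick-start.png"},
        {"title": "Wall mounting", "image_url": "https://example.com/mounting.png"},
    ],
    "buttons": [
        {"label": "Watch video guide", "action": "open_url", "value": "https://example.com/video"},
        {"label": "Talk to an agent", "action": "escalate_to_live_chat"},
    ],
}
```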

As ChatGPT continues to change and moves out of the research preview stage, our expert team at Creative Virtual will stay on top of new developments and opportunities this technology offers. Our mission is always to innovate in a way that will help companies tackle their real challenges and deliver real business results – and our approach to this language model is no different.

If you’re interested in discussing more about how ChatGPT and V-Person might fit with your conversational AI strategy, get in touch with our expert team here.