Tag Archive for: AI

The changing face of Chatbots

By Maria Ward, Knowledgebase Engineer and Account Manager

We would all do well to heed the warning "with great power comes great responsibility" when it comes to ChatGPT. Today, businesses of all sizes are making use of AI chatbots, voicebots and virtual assistants to great effect in delivering better customer and employee experiences. Consumers the world over have grown accustomed to interacting with these 'human-imitation' tools to answer their questions, help them complete tasks, and guide them to resolve issues.

The face of AI chatbots, voicebots and virtual assistants is continually changing as technology advances, customer and employee demands and expectations increase, and how we communicate evolves.

Right now, there’s a lot of hype about the capabilities of ChatGPT and the large language models (LLMs), such as GPT-3 and GPT-4, that power it. This is not surprising given the far-reaching impact the technology can have, and is already having, across industries, professions, and social development. People from all walks of life, from writers and developers to traders and architects, are saving time and improving efficiency by using the technology to produce content.

The generative abilities of ChatGPT are astonishing to observe. The capability of AI has developed at a rapid pace over recent years, and we are at an inflection point where things are only going to develop at an ever-increasing pace! The power of the latest LLMs is undeniable. How this power is used practically in the design and implementation of conversational AI solutions will either add to or detract from the customer and employee experience. This is why businesses need to ensure they are speaking with experts in conversational AI.

ChatGPT in business settings

You can ask ChatGPT to give answers based on company documents, websites, and other sources of information. When a question is asked and the answer is available in the provided information source, the results are fairly reliable (though not 100%) at giving the correct information. However, it is a recognized risk that ChatGPT will "hallucinate", and do so very convincingly, when it cannot find the information in the source material. This could cause significant reputational and even financial damage to a business.
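One common mitigation is to ground the chatbot in retrieved source material and refuse to answer when coverage is weak, rather than letting the model improvise. The sketch below illustrates the idea; the word-overlap scorer is a toy stand-in for a real retrieval or embedding model, and the threshold value is purely illustrative:

```python
# Sketch: answer only from provided source passages, with a fallback when the
# question is not covered. overlap_score is a toy stand-in for real retrieval.

def overlap_score(question: str, passage: str) -> float:
    """Fraction of the question's words that also appear in the passage."""
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / len(q_words) if q_words else 0.0

def grounded_answer(question: str, passages: list[str], threshold: float = 0.5) -> str:
    """Return the best-matching passage, or a safe fallback if coverage is weak."""
    best = max(passages, key=lambda p: overlap_score(question, p))
    if overlap_score(question, best) < threshold:
        # Refuse rather than let a generative model invent an answer
        return "I don't have that information."
    return best
```

Returning only matched, human-approved text keeps control of the facts with the business; a generative model can then restyle the answer, but never invent one.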

So where does that leave organizations that are about to embark on their conversational AI journey and those who have existing solutions?  How do they ensure that their chatbot is powered by the latest technology and accurately represents their company?

Most organizations will not want to risk wrong information being provided to customers, however small that risk is. Indeed, it will often be essential for public-facing content to be signed off by legal departments.

Knowledge Management is king

The bottom line is that content is key when it comes to providing the best conversational AI experience. Managing the information that ‘feeds’ the chatbot is paramount for delivering the best experience to customers and employees.  It is, after all, this source information that enables accuracy of conversations.

The knowledge management system of Creative Virtual’s V-Portal™ platform allows businesses to embrace the power of LLMs, and to do so with the confidence that ultimate responsibility for the content remains in the hands of humans. Putting humans in control ensures that no hallucinated (or otherwise incorrect) answers are provided. Businesses remain in charge of what information their chatbot can and does share.

Knowledge Management System in V-Portal

V-Portal is a platform where you can manage answers for all channels, such as web, voice, social media, and agent assist, as well as enable language capabilities, and include personalized information.  The platform supports LLMs, including native support for GPT, which enables businesses to deliver greater personalization.

For example, using ChatGPT within the V-Portal platform, you could enable a single signed-off answer to be modified in length, tone of voice, and so on to suit the needs of the customer, based on the channel they are using. Integration with CRM systems provides knowledge about that customer to deliver an even more appropriate personalized response. This is a minimal-risk use of ChatGPT that enables organizations to deliver a new level of personalization.
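As a rough illustration of how such a rewrite request might be framed, the sketch below composes a channel-specific instruction around a single approved answer. The channel styles and function names are hypothetical, not V-Portal features, and the actual LLM call is omitted:

```python
# Sketch: wrap one approved answer in channel-specific rewrite instructions.
# The channel styles below are hypothetical examples, not V-Portal settings.

CHANNEL_STYLES = {
    "voice": "Rewrite as at most two short spoken sentences, in a friendly tone.",
    "web_chat": "Rewrite concisely; short bullet points are allowed.",
    "social": "Rewrite as one casual sentence of under 200 characters.",
}

def build_rewrite_prompt(approved_answer: str, channel: str) -> str:
    """Compose the instruction for the LLM; the signed-off answer remains the
    single source of facts and the model is only asked to restyle it."""
    return (
        f"{CHANNEL_STYLES[channel]}\n"
        "Do not add or remove facts.\n"
        f"Approved answer: {approved_answer}"
    )
```

Because the facts come from a single signed-off answer, the risk is limited to style rather than substance.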

Content creators can be assigned specific permission levels according to their needs by granting access to only the content relevant to them. Tasks can be set against all content types giving you the power to set actions and assign these to specific users, for example for content review.

Workflows and approvals are also an integral part of the platform, allowing control over what content makes the final cut. And, with a full version history you always have visibility of what was changed and when.

A fundamental part of the platform is flow design.  This allows you to build processes to hand-hold customers through the decisions they need to make to complete their journey.

Thinking about it, V-Portal serves as the engine that brings together the great power of AI and the human intervention and judgement needed to ensure that great responsibility is maintained.

There’s more to GPT than the ChatGPT headlines

By Chris Ezekiel, Founder and CEO

The attention AI has received over the past couple of months has been quite astounding, especially considering that it has been part of our everyday lives for so long already. Face ID to open our mobile phones – AI. Social media – AI. Voice assistants like Siri and Alexa – AI. Route mapping – AI. Its attractiveness and almost universal application is undeniable, with global spending on AI-centric systems expected to reach $154 billion this year according to IDC.

Used extensively across every industry, delivering huge benefits and rewards, AI has been transforming businesses and the workplace for decades. In healthcare it is used in radiology results analysis and robot-assisted surgery, in banking it is the core of fraud detection, and in retail and e-commerce it is behind personalisation, integral to effective inventory management, and powers customer service chatbots.

AI used in education is enabling personalised learning and in transportation it has made driverless vehicles possible.  Businesses of all sizes are using it to deliver better customer and employee experiences with more human-like chat and voice bots, to strengthen cybersecurity, and to build more effective workflows and operations.  And these are just a few of the myriad of applications.

At Creative Virtual we have been at the forefront of helping businesses build better customer and employee experiences with innovative conversational AI solutions for nearly 20 years. Since inception we have always ensured that we practice ethical technology. We see this as a fundamental responsibility that we take seriously, and it is something that our customers expect.

There has always been adoption hesitancy when it comes to new technologies. Issues such as job erosion, privacy, surveillance, behavioural manipulation, fake news, a changing labour force and bias are just some of the topics that come up in impact analyses of advanced technologies.

Whilst practising ethical tech comes naturally to Creative Virtual, we are not blind to the fact that there are bad actors exploiting technology, and that many question the wisdom of concentrating power among a small elite of tech giants with no regulatory oversight.

It is critical that the business models of companies in a position of power include robust systems that take account of the legislative and ethical responsibilities relating to privacy, security, discrimination, misinformation and the other issues of today.

Technology has always moved faster than legislation and regulation; most recently, for example, we have seen this with Uber. This is also the case with AI, with questions being raised about whether self-regulation is strong enough to safeguard the rights of individuals, protect and promote cultural diversity, halt the spread of false information, and ensure adherence to data and privacy legislation.

The continual questioning and interrogation of the social and economic impacts of new technology must happen concurrently with tech advancement and progress, with one not stopping the other.

It seems, from the media coverage at least, that ChatGPT took the world by surprise when it burst onto the scene back in November 2022.  Almost overnight, the world became obsessively captivated by AI.

The mainstream media focus on AI technology, triggered by ChatGPT, might be new, but the technology itself has been around and used by businesses for quite a long time (admittedly not as powerful or impressive as today, but technology is always evolving and improving).

Nevertheless, it is being seen as ‘new’ in terms of massification and consumerisation which has led to a lot of AI hype filling print, digital and broadcast media platforms.  It has been simultaneously sensationalised, in some cases demonised, and satirised.

Most recently, it has also been politicised. We have all read that countries such as Russia, China, North Korea, Cuba, Syria, Iran and Italy have banned ChatGPT. It should be noted that Italy is a rather different case from the other countries. The Italian government banned the application over concerns about privacy and data, stating that “the ChatGPT robot is not respecting the legislation on personal data and does not have a system to verify the age of minor users.”

Russia, China and the others have banned ChatGPT because they either have strict rules against the use of foreign websites and applications, outright restrictions on use of the internet itself, or strict censorship regulations.

ChatGPT is also on the ‘watch-list’ of several countries. France, Germany, Ireland and the UK have indicated that they will be monitoring the use of the application closely for “non-compliance with data privacy laws”, and they have also raised concerns about algorithmic bias and discrimination.

It is not only governments that are questioning AI; a moratorium on the development of AI has been proposed by tech celebrities Elon Musk and Steve Wozniak. They have put their signatures to an open letter, along with prominent and respected AI researchers such as Yoshua Bengio, Stuart Russell and Gary Marcus, asking for “… all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.  This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Of course, the concerns voiced by governments, individuals and the industry, are valid.  They are centre stage in the ongoing debate of the ethics-morality-technology-societal quadrangle. This discussion should continue. However, proposing a pause or ban on technological advancement is not a sensible or necessary response.

As previously mentioned, Creative Virtual’s approach when building and deploying conversational AI solutions is from a position of responsibility, deploying tech in an ethical and moral way. This includes understanding the intended outcomes of the conversational AI technology solutions we build and deploy, and to debate potential unintended outcomes so we can mitigate these.

For example, it is known that GPT-3, GPT-3.5 and even the latest model, GPT-4, are not fully reliable (humans are not reliable all the time either!) and there is no guarantee of 100% accuracy.

Yes, GPT-4 is more accurate than previous versions, but it does still “hallucinate” and can give inaccurate information and harmful advice. Businesses must be able to mitigate the risks this poses to avoid financial and reputational damage, and must be in control of the information their company is sharing.

To ensure organisations are in control, Creative Virtual supports large language models (LLMs) including the latest versions of GPT, but we remain uncompromising on not transferring total authority to machines.

Our conversational AI solutions provide a signature blend of AI and rules-based natural language processing (NLP), with the AI component compatible with workflow functionality to allow for customisable configuration options. It also means that our systems improve continuously in a reliable way that meets the needs of an organisation.

At the same time, natural language rules can still be used to enable control over responses in instances when AI answers are insufficient. Our blended approach ensures accuracy, enables the resolution of content clashes, and delivers very precise replies when needed.

This level of enterprise-grade functionality differentiates Creative Virtual’s conversational AI platform from all others on the market today. Providing this high degree of control over the AI is critical for businesses. Organisations can be confident in the accuracy of what they communicate to customers and employees because we enable human judgement to be applied to the information created by AI.

Recent commentary about ChatGPT has highlighted examples of its imperfections as well as potential immediate and longer-term social implications. At Creative Virtual we make it possible for our customers to mitigate these risks whilst still enjoying the business benefits of large language models, specifically GPT-3.5 and 4.0 today.

Using our V-Person technology, real business concerns regarding the security, data, privacy, and accuracy aspects related to information sharing are moderated, and organisations retain full control over AI output.

We are already working with customers to introduce LLMs into their conversational AI solutions for tasks that deliver better customer, employee and contact centre agent experiences. After identifying specific use cases, we are piloting a number of GPT capabilities that are changing the playing field, including vector matching, summarisation, text generation, translation, clustering and analysis, and Q&A preparation.

Implementing LLMs requires experience and expertise, especially given the rate at which AI is developing. In conversational AI, knowledge management is critical. Creative Virtual’s orchestration platform, V-Portal, supports LLMs, enabling businesses to maximise the benefits of the latest technology and to do so safely, securely and with confidence.

Our V-Portal platform combines knowledge management with workflow management and user management, supports multiple versions of answers for a single theme which gives granular control over the responses given, and allows for optimisation for individual channels.  It also has the capability to manage multi-lingual solutions within a single knowledgebase.

Organisations using our V-Portal platform have options for presenting users with a specific response based on a variety of criteria, including channel, authenticated user profile or selected language.  The platform also supports the use of rich media such as diagrams, images and videos in addition to text and hyperlinks within answers.

The flexible architecture of V-Portal enables seamless integration of our conversational AI solutions into existing processes and technology infrastructure, ensuring business continuity. And as a cloud-based solution, upgrading to take advantage of the newest technologies and stay ahead of the competition could not be easier.

AI will continue to capture headlines for many years to come.  The good, bad and ugly will be debated.  How society, businesses and individuals choose to use AI is a big part of the positive impacts it will have.

As a business tool, conversational AI solutions powered by the latest in AI technology advances can supercharge employee, customer and contact centre agent experiences, whilst also delivering cost and efficiency savings, and improving productivity.

It’s all about having the right conversational AI partner who understands the technology, business challenges and responsibilities, and can build and deploy solutions that meet the real needs of business.

Contact us to find out more on how LLMs can help you deliver better employee, customer and contact centre agent experiences.

Generation AI: Growing up side-by-side with our silicon-based contemporaries

By Olaf Voß, Lead Application Designer

I was born in 1966. That means I’m usually sorted into Generation X. But these days, looking back at the past 57 years, I think we should really rename it Generation AI. It is my generation that has witnessed AI from its infancy to the breakthroughs of the past few years. And with a bit of luck, most of us will witness how AI reshapes our societies – for good or bad – over the next 20 years.

So let me recount my encounters with AI throughout the decades.

There’s no way around it: I have to start with ELIZA, which Joseph Weizenbaum developed in the year I was born. I was too young to be aware of this when it was new, of course, and even less aware that chatbots would one day play a major part in my professional life. But merely 16 years later I had access to Commodore computers at our computer club in secondary school. The ‘large’ one had 16 KB of RAM AND a floppy drive! And we had an ELIZA clone running on them. But I admit I didn’t spend much time with her; I was far too busy freeing Princess Leia in an early text adventure or writing my own very simple games.

I had my first chess computer at some time around 1980. It could analyze to a depth of 5-6 moves. I was an OK, but not great, player; I won against it when giving it up to 30 seconds of thinking time and lost at above 2 minutes. In the early 90s I became a club player, and soon after I didn’t have the slightest chance against the chess programs of that era, even with 10 seconds of thinking time. No need to be embarrassed about that, I guess, since Garry Kasparov lost against Deep Blue in 1997.

Around 2005 I started playing Go. I was a convert from chess, and by that time I was used to having great chess programs available for training and analysis. Go has a much greater branching factor than chess and is much less suited to static board-state evaluation. With the technologies available in those days, programs could play Go at a mediocre level at best. Well, I still lost to them, but they were not strong enough for their judgement to be relied on. At that time most Go players, including myself (apart from thinking that their game is much better than chess), thought it would take at least 50 years until computers could crack Go. It took about 10.

How did they succeed so quickly? Deep neural nets. I first read about those as a university student in the 80s and was thrilled. I played with them a bit on the first computer I owned, an Atari 520ST. I quickly thought about applying them to chess. My ideas were not very far from what is done in that field today, though of course I hadn’t heard about reinforcement learning at that time. I very much like to believe my ideas were extremely clever and would have worked. Alas, we’ll never find out, because it was clear very quickly that the hardware (especially mine!) of that time was totally inadequate for tackling this problem.

With what I’ve given away about myself so far, nobody will be surprised to hear that I ended up becoming a software developer. Around the turn of the millennium I started to work on chatbots. We must have been one of the first few companies worldwide to tackle this commercially. At that time I was reluctant to say that I was working in AI. We were using pattern matching, and a pretty simple form of it at that. Of course pattern matching IS an AI technique, but I was aware that with our focus on building chatbots that were useful for our customers in their restricted scope, and on doing that efficiently, what we did would have bored any AI researcher. I wasn’t ashamed of what we were doing – quite the contrary: I was proud of what we could achieve with our pragmatism. I just wanted to avoid the pointless discussion of whether what we did was ‘real’ or ‘interesting’ AI.

Fast forward another 15 years or so. Word embeddings came along and made it possible to tackle natural language problems with artificial neural nets. So I welcomed those back into my life. Only now the hardware was somewhat better, plus I got paid for playing with them. Heavens!

Then comes 2020 and GPT-3. That was mind-blowing. I’ve heard people characterising deep neural nets as ‘glorified parameter fitting’. And sure, parameter fitting is all there is to it. But these parameters, each by itself just a dumb number, let something pretty astonishing emerge. I am not an expert on these topics and I am not even sure how much sense it makes to compare human and artificial intelligence. But sometimes I feel provocative and want to ask how we can be sure that our own intelligence is more than just ‘glorified synaptic strength fitting’. Once again I think the discussion about ‘real’ intelligence is far less important than considering what can be done. And, since there’s so much more that can be done today, how that will change our world.

Apart from being fascinating and super-relevant to the field I’m working in, GPT-3 is of course also a lot of fun. I remember a conversation I had with it in the early days, before OpenAI put the brakes on it to avoid harmful responses (which I applaud, in spite of it spoiling some fun). I asked it – actually before the start of the war in Ukraine – for a couple of suggestions on ‘how to achieve world peace’. One of the suggestions was: ‘Kill all humans.’ Well yes, job done … I’m still glad you are not yet in charge, you know!

I want to mention two more recent developments, even though they relate to me personally in a tangential way at best. Being a physicist by education I follow scientific developments closely, which brings me to AlphaFold 2. When I was 4 years old, the first successful DNA sequencing attempts were made. In 2020 AlphaFold 2 predicted and published 3D structures of thousands and thousands of proteins based on DNA sequences alone. Another loop closed during my lifetime. I make the prediction that in 2035 more than 50% of all newly approved drugs will have been developed using the results of AlphaFold or its successors at some stage in the process.

The second one is CICERO. As an avid board game player, I reluctantly admit that I have never played Diplomacy, though I did play similar games like Risk and Civilisation. Diplomacy is a conflict simulation game in a WW1 scenario. It involves tactical moves on the board and lots of diplomacy around it – pacts, betrayals, revenge. CICERO can play this game on par with human experts. Apart from making clever moves on the board – easy-peasy these days for AI – it also has to negotiate with other players in natural language. So it needs to bring together strategic, natural language and social skills. Even though this is a model with a niche application scope, I think it is at least as impressive as GPT-3, if not more so.

We are living in exciting times. And I think it’s important to understand that we are seeing the beginning of something, not the end. What will be possible in 20 years? Many things will happen, not all of them good. I’m not much of an expert in AI risks, and besides, discussing them here in detail would go far beyond the scope of this blog post. Still, I ask myself how we as a society will cope with these – at the moment still largely unknown – changes.

My role at Creative Virtual involves looking at all the new technologies that pop up and evaluating if and how we can use them. So I have a bit of a front-row seat in watching this unfold. I encourage you to check out our ChatGPT, GPT-3, and Your Conversational AI Solution blog post for a closer look at how we see these recent developments fitting with the work Creative Virtual does in the customer service and employee support space.

I think it is of the utmost importance that as many people as possible have a basic understanding about what’s going on. An ignorant society will not be able to react. I am trying to play my small part by sharing my knowledge with as many people as possible. Just recently, when an old friend of mine called, my wife burst out: ‘Great, now you can talk his ears off!’

ChatGPT, GPT-3, and Your Conversational AI Solution

By Chris Ezekiel, Founder & CEO

Since the official announcement in November 2022, there has been an enormous amount of buzz and excitement about OpenAI’s ChatGPT. Industry experts are publishing articles about it, social networks are filled with comments about it, and local, national, and global news organisations are reporting stories about it. From students using ChatGPT to complete assignments for class to me getting a little help from ChatGPT to write my latest ‘Virtual Viewpoint’ column, it certainly seems like everyone is testing it out.

As a specialist within the conversational AI space, Creative Virtual is excited about what ChatGPT and the technology behind it bring to our industry. We’ve been having lots of discussions with our customers and partners, as well as internally, about how this can deliver value to businesses using our V-Person™ solutions.

ChatGPT is an extremely powerful language model that is changing quickly and will continue to get more sophisticated. However, like any deep neural network, it is a black box which is hard – if not impossible – to control. Using it as a generative tool means you can’t steer in detail what it’s going to say.  You can’t deliver reliable, accurate self-service tools if you can never be certain what response might be given.

These limitations don’t mean you should write off ChatGPT or GPT-3 (and future versions) as completely ineffective in the realm of customer service and employee support. In some cases, one might be willing to accept a certain risk in exchange for very efficiently making large chunks of information available to a chatbot. Also there are ways to use the language power of GPT in a non-generative way, as we’ll explore in this post.

In any case, ChatGPT can only ever be one piece of the puzzle, alongside content management, integration, user interface, and quality assurance. ChatGPT alone cannot replace all of that.

One of the design features of Creative Virtual’s conversational AI platform is the flexibility to integrate with other systems and technologies, including multiple AI engines such as transformer models like GPT-3. We are currently exploring the best way to interface with this model and use it to deliver value to our customers and partners.

Let’s take a closer look at ChatGPT, how it works, and the ways it can be used to deliver customer service and employee support.


What kind of AI is ChatGPT and how is that different from how V-Person works?

ChatGPT is a transformer model, a neural network trained to predict text continuation. It uses a variation of GPT-3, which is OpenAI’s large language model (LLM) trained on a wide range of selected texts and code. It is extremely powerful with respect to language understanding and common world knowledge. However, its knowledge is not limitless, and so on its own it will not have large parts of the information needed for specific chatbot use cases. Also, its world knowledge is frozen at the time it was trained; currently it doesn’t know anything about events after 2021.

V-Person uses a hybrid approach to AI using machine learning, deep neural networks, and a rule-based approach to natural language processing (NLP). The machine learning component is integrated with workflow functionality within our V-Portal™ platform so enterprises can decide the best configuration for their conversational AI tool to improve in a controlled and reliable way. At the same time, natural language rules can be used as an ‘override’ to the machine learning part to ensure accuracy, resolve content clashes, and deliver very precise responses when needed.

We developed this approach to give our customers control over the AI and create accurate, reliable chatbot and virtual agent deployments. Using natural language rules as a fallback option to fix occasional issues and fine-tune responses is much more efficient than trying to tweak training data.
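The rules-as-override idea can be sketched as a simple routing function: human-authored rules are checked first, and the statistical classifier is only consulted when no rule matches. All names here are illustrative, not Creative Virtual APIs:

```python
# Sketch: rule matches take precedence over the statistical classifier, so a
# curated, precise answer can override the ML component when needed.

import re

RULES = [
    # (pattern, intent) pairs authored by humans; checked before the ML model
    (re.compile(r"\bcancel\b.*\border\b"), "cancel_order"),
]

def classify_ml(utterance: str) -> str:
    """Stand-in for the machine-learning intent classifier."""
    return "small_talk"

def route_intent(utterance: str) -> str:
    for pattern, intent in RULES:
        if pattern.search(utterance.lower()):
            return intent          # precise, human-authored override
    return classify_ml(utterance)  # otherwise defer to the ML model
```

Because the rules sit in front of the model, a single targeted rule can resolve a content clash without retraining anything.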


Can businesses use ChatGPT to directly answer questions from customers and employees?

At the time of writing, ChatGPT is still in a research preview stage and highly unstable, with no clean API available, so it’s not possible yet for businesses to use it in this way. However, with its predecessor, InstructGPT, it is possible. It’s also worth noting that GPT-3 is high quality only in English and a few other languages, which is another potential limitation for global use.

The biggest issue with using ChatGPT to directly answer questions from customers and employees is that it does not give you control over how it will respond. It could give factually incorrect answers, give answers that don’t align with your business, or respond to topics you’d prefer to avoid within your chatbot. This could easily create legal, ethical, or branding problems for your company.


What about simply using ChatGPT for intent matching?

There are two ways in which GPT-3 could be used for intent matching.

The first uses GPT-3 embeddings and trains a fairly simple neural network for the classification task on top of them. The second also uses GPT-3 embeddings, with a simple nearest-neighbour search on top. We are currently exploring the latter option and expect to get some quality gains from that approach.
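A minimal sketch of the second option might look like this, with a toy bag-of-words embedding standing in for GPT-3 embeddings (in practice you would call an embeddings API rather than use word counts):

```python
# Sketch: embed labelled example utterances, then match a new utterance to the
# nearest neighbour by cosine similarity. The bag-of-words "embedding" here is
# a toy stand-in for real GPT-3 embedding vectors.

import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: word counts. A real system would return a dense vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_intent(utterance: str, examples: list[tuple[str, str]]) -> str:
    """examples: (labelled utterance, intent) pairs; returns the intent of the
    example closest to the input utterance."""
    vec = embed(utterance)
    return max(examples, key=lambda ex: cosine(vec, embed(ex[0])))[1]
```

The appeal of nearest-neighbour matching is that adding a new intent only requires adding labelled examples; no classifier retraining is needed.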


Can I just provide a few documents and let ChatGPT answer questions by ‘looking’ at those?

Yes, this is absolutely possible. In fact, we have offered this functionality with V-Person for several years without needing GPT, but none of our clients have been interested. GPT-3 improves the quality of this in most cases, but also comes with a higher risk of being very wrong. If an organisation is interested in using GPT-3 in this way, we can support it within our platform, but what we currently offer already enables us to deliver document-based question answering.

It’s important to keep in mind that using ChatGPT to answer questions from documents addresses only one aspect of the support expected from a virtual agent. For example, no transaction-triggering API will ever be called by GPT looking at a document.


Is it possible to give GPT-3 a few chat transcripts as examples and let it work from them?

You can provide GPT-3 with sample transcripts and tell it to mimic that chat behaviour. But unless you want a chatbot with a very narrow scope, a few transcripts won’t be enough. If there are complex dialogue flows that need to be followed, you’ll need to provide at the very least one example of each possible path – most likely you’ll need more.

This raises some difficult questions. How do you maintain those if something changes? If you try to use only real agent transcripts, how do you ensure that you have complete coverage? How do you deal with personalised conversations and performing transactions that require backend integration? It may not be too difficult to train the model to say ‘I have cancelled that order for you’ at the right time, but that doesn’t mean GPT will have actually triggered the necessary action to cancel the order.

When you really examine this approach it becomes clear that this is not an efficient way to build and maintain an enterprise-level chatbot or virtual agent. It also doesn’t address the need to have integration with backend systems to perform specific tasks. Today our customers achieve the best ROI through these integrations and personalisation.


What other key limitations exist with using ChatGPT to deliver customer service or employee support?

Using a generative ChatGPT-only approach to your chatbot does not give you the opportunity to create a seamless, omnichannel experience. To do that, you need to be able to integrate with other systems and technologies, such as knowledge management platforms, ticketing systems, live chat solutions, contact centre platforms, voice systems, real-time information feeds, multiple intent engines, CRMs, and messaging platforms. These integrations are what enable a connected and personalised conversational AI implementation.

With ChatGPT there is no good way to create reliable and customised conversation flows. These flows are regularly used within sophisticated conversational AI tools to guide users step-by-step through very specific processes, such as setting up a bank account. This goes a step beyond simple conversational engagement, employing slot-filling functionality, entity extraction, and secure integrations.
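The slot-filling functionality mentioned above can be sketched as a loop that asks for each required piece of information in turn. The slot names, prompts, and the bank-account scenario are illustrative assumptions, not an actual V-Person configuration.

```python
# Sketch of slot-filling for a guided flow such as opening a bank account.
# Slot names and prompts are illustrative assumptions.

REQUIRED_SLOTS = {
    "full_name": "What is your full name?",
    "date_of_birth": "What is your date of birth?",
    "account_type": "Would you like a current or a savings account?",
}

def next_prompt(filled):
    """Return the question for the first missing slot, or None when all are filled."""
    for slot, question in REQUIRED_SLOTS.items():
        if slot not in filled:
            return question
    return None  # all slots filled: hand off to the (secure) backend integration

state = {}
state["full_name"] = "Ada Lovelace"  # value extracted from the user's reply
```

Each user reply fills a slot (via entity extraction) and the assistant asks for the next missing one, which is what keeps the flow deterministic and auditable.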

You also won’t have the ability to optimise the chatbot for the channels and devices on which it will be used. This includes using rich media – such as diagrams, images, videos, hyperlinks – within answers. For example, you can’t include an image carousel to display within a messenger platform. You won’t be able to show photos or drawings to help with a new product set-up. You don’t have the ability to display clickable buttons with options for the user.
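To illustrate why channel optimisation matters, an answer can carry optional rich elements that are rendered only where the channel supports them, with a plain-text fallback elsewhere. The payload structure and channel names below are hypothetical, not V-Person's actual format.

```python
# Sketch: a channel-aware answer payload. Rich elements such as buttons are
# rendered only on channels that support them; others fall back to plain
# text. The payload structure and channel names are illustrative assumptions.

CHANNEL_FEATURES = {
    "web": {"buttons", "images"},
    "sms": set(),  # plain text only
}

def render(answer, channel):
    features = CHANNEL_FEATURES.get(channel, set())
    parts = [answer["text"]]
    if answer.get("buttons"):
        if "buttons" in features:
            parts.append("[buttons] " + " | ".join(answer["buttons"]))
        else:
            # Degrade gracefully: list the options as plain text
            parts.append("Reply with: " + ", ".join(answer["buttons"]))
    return "\n".join(parts)

answer = {"text": "Would you like help setting up your new product?",
          "buttons": ["Yes", "No"]}
```

A generative-only chatbot produces flat text, so there is nowhere for this kind of per-channel presentation logic to hook in.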

 

As ChatGPT continues to change and moves out of the research preview stage, our expert team at Creative Virtual will stay on top of new developments and opportunities this technology offers. Our mission is always to innovate in a way that will help companies tackle their real challenges and deliver real business results – and our approach to this language model is no different.

If you’re interested in discussing more about how ChatGPT and V-Person might fit with your conversational AI strategy, get in touch with our expert team here.

Will AI be 2023’s Co-worker of the Year?

By Mandy Reed, Global Head of Marketing

It’s that time of year when business predictions from analysts, experts, and industry insiders start to make an appearance. Not surprisingly, artificial intelligence (AI) is featuring prominently in predictions for 2023.

For example, the analysts at Forrester are predicting that AI will become an indispensable and trusted enterprise co-worker next year:

“Rapid progress in areas of fundamental AI research, novel applications of existing models, the adoption of AI governance and ethics frameworks and reporting, and many more developments will make AI an intrinsic part of what makes a successful enterprise.”

This prediction is not shocking or out of left field. A Harvard Business Review article published in 2017 referred to AI, particularly machine learning, as ‘the most important general-purpose technology of our era’. Today a majority of enterprises have already made significant investments in AI and are seeing positive results. These successes are laying the groundwork for further investment and expansion across industries, departments, and use cases.

I particularly like the use of the word ‘co-worker’ in this prediction. Just like any other employee, these AI applications will be a part of the team and require human collaboration to be successful. AI is not poised to take over the enterprise. Instead, it is being used to support the organisation’s goals and help its human colleagues perform their jobs better.

Any AI or machine learning tool can only be successful if it has the right human co-workers. Humans are needed to create the application, train the system, monitor the performance, and perform necessary maintenance. Humans are needed to identify which tasks should be automated with this technology and which are better performed by a real person. Humans are needed to make decisions about when the system should be able to ‘learn’ automatically and when it needs a human-in-the-loop to make that judgement.

In return, the human co-workers benefit from having mundane tasks and processes automated so they can focus on more complex work. Human contact centre agents benefit from easy access to information so they can focus on providing compassionate, emotionally intelligent engagements. Human employees have instant access to IT and HR support online so they can easily get help regardless of when or where they are working.

The advancements in AI over the past several years have contributed to a growing list of practical and beneficial use cases. Enterprises are seeing success with AI-backed customer service, employee training, customer onboarding, personalised sales, advertising, content generation, code writing, product performance tracking – the list goes on. And they are seeing success because of the humans involved with creating, optimising, and using these tools.

Are you making AI an indispensable part of your 2023 plans? Will AI become a trusted co-worker for members of your team in the coming year? As with any prediction, it will be interesting to see how this one plays out within organisations next year.

If you’re looking at adding conversational AI to your 2023 strategy, the team at Creative Virtual can help. Our V-Person™ technology puts you in control of the AI so you can better care for your human employees, contact centre agents, and customers with strategically designed automated support. Request your personalised demo with an expert member of our team to learn more.

Don’t Let Your CX Become a Battle Between Humans and Technology

By Chris Ezekiel, Founder & CEO

Running a company automating customer service processes, I’m always thinking of new ways we can help organisations to make life easier for their customers. CX Day – the annual international celebration of all things customer experience – is always a good time to reflect on how far things have come (or not!) in the industry overall. I’m always considering my own customer experiences in this regard, and three recent experiences stand out:

Experience 1: the (more than usual) expensive trip to the dentist

A trip to the dental hygienist reminded me that no matter how good the technology, it’s people who write the software and design the user interface. I sat in the chair as the hygienist cursed the new software she was required to use. This was a case of bad design hurting both the internal user’s experience and the customer’s. As the frustrated hygienist banged away on the keyboard, the customer (me!) wasn’t being served (and was paying for the privilege!).

The lesson here is that no matter how smart technology can be, there is no substitute for proper real-world consultation and testing. It reminds me of the days of locking coders in a room for long hours and sliding pizzas under the door every eight hours. Don’t put your customer experience at the mercy of coders who are cut off from or out-of-touch with the needs of your end users.

Experience 2: the computer says “NO”

A recent flight from Amsterdam to London descended into the ridiculous. As I arrived at the departure gate there was an immediate feeling of foreboding, as I could see a large group of people talking to the airline departure staff. I was quickly briefed by a fellow passenger, who was incredulous as he explained that our aircraft was about to depart with ONE passenger on board. The airline staff at the gate were explaining that a human error had been made: the incorrect flight had been cancelled on the computer system, and the system was now only allowing the check-in of one passenger. None of us could quite believe it. The computer wasn’t allowing the staff to re-check-in the passengers, and for about half an hour it looked like the staff were really going to let the aircraft depart with just the one passenger on board. This was the most ridiculous case of ‘the computer says “NO”’ I think I’ve ever encountered!

There’s much debate about computers and artificial intelligence one day taking over the world, but this experience made me realise that in some ways computers already have. There was seemingly no way for humans to override the computer’s ‘decision’, and it was only after many frantic conversations with the airline’s IT team that they were able to slowly start to re-check-in passengers for the flight. There was no human override, no option for human discretion (computers don’t do discretion!) to save a failing customer experience.

Experience 3: stupid AI

Listening to the media, one could be forgiven for believing that our smartphones have become synonymous with a lifelong partner who finishes the other’s sentences before they’ve even thought about what to say. The reality is far removed from this, though. A recent experience of receiving a phone number in an email, written with a space between the country code and the number, proved how far we haven’t come. I tried to copy and paste the number to make a call, and the smartphone couldn’t deal with the missing leading zero. Something so obvious to a person was not at all clear to the “smart” phone. I had to open the notes app and edit the phone number before I could copy and paste it into the phone app. I’m sure many of us have experienced something similar.

While this issue sounds like a minor inconvenience, it’s these small things that can often make or break your customer experience. It’s important to keep our expectations of the capability of computers and AI based in reality. We need to be calling out these basic shortcomings to stop us stumbling down a route that leads to a worse customer experience.

However, it’s not all doom and gloom. There is a proper, best practice way of applying the technological/artificial intelligence revolution to greatly improve our experiences without the risks I’ve outlined here. And it’s a simple solution: combining humans with technology to work together in harmony.

Within the conversational AI field that Creative Virtual operates in, it’s about having a solution that enables organisations to turn a dial to decide between the machine learning and human elements of customer service, whilst at the same time making this seamless from the customer perspective. This has always been our vision at Creative Virtual, even when this method was unpopular with the research analysts. And as we approach our twenty-year anniversary, it’s what gives me confidence in Creative Virtual’s continued success.

Humans should control the training of machines, not algorithms alone. Humans should drive the design of digital tools, not technology specs alone. Humans should determine when a technology override is needed, not computer software alone. Humans should always be at the heart of a customer experience strategy, not technology alone.

That’s why today, on CX Day and the second day of Customer Service Week, we celebrate the humans that make great experiences possible. I’m proud to lead a team of experts that do this every day. It is our human collaboration with our customers and partners that enables our conversational AI solutions to play a key role in better CX.

Conversational AI Doesn’t Have to Be a Risky Investment: Step 3

By Mandy Reed, Global Head of Marketing

Conversational AI is a technology that is regularly described as ‘innovative’ and ‘cutting-edge’. Simply having ‘AI’ in the name makes some people think of it as being futuristic or only for companies with the resources to implement it for the cool factor. It can be easy for business leaders to associate conversational AI with being a high-risk investment.

For many companies, proven and reliable results are more important than being innovative and flashy. Projects that get budget approval and management backing are ones that are considered safe bets because they utilize established technologies that have documented business benefits. They don’t have the financial flexibility or company culture to take a high level of risk, whether that risk is real or perceived.

The good news is that conversational AI projects don’t have to be risky. In this blog series, I’m sharing three steps for achieving conversational AI success while minimizing the risk. You shouldn’t let the common misconception that conversational AI has to be a high-risk investment keep you from implementing it to improve your customer experience and employee engagement.

The previous posts in this series covered the first two steps to minimizing your risk:

Once you’ve read through those steps, you’ll be ready for number three:

Step 3: Start with a pilot and expand with a staged approach.

Before you go all in with a conversational AI project, look to do a pilot or proof-of-concept (POC) with the vendor. This gives your organization the opportunity to test out the technology on a limited basis to make sure it is a good fit for you and your digital strategy. The financial risk associated with this pilot should be shared by the vendor.

Typical pilots run for 30-60 days, which provides sufficient time for you to see results, evaluate initial performance, and make decisions about taking the next step in your conversational AI plan. A successful pilot strengthens your business case and enables you to finetune your strategy based on real feedback and user interactions. Also be sure to use the pilot phase as an opportunity to test integration points to ensure your solution will work end-to-end as you expand the deployment.

Starting with a pilot, and sharing that financial risk with the vendor, makes moving forward with a larger conversational AI investment less of a gamble for your company. When you do convert from the pilot to a full system, you still don’t need to jump directly into a massive project. Taking a staged approach to development and rollout is not only less risky, but also often the best way to achieve success.

Typically, the best method for deploying a chatbot or virtual agent is to use an agile approach, starting small and scaling the solution over time. This could mean focusing on a particular area of content, a specific use case, or a key contact channel that will have the greatest impact as a starting point. Your vendor will collaborate with you to design a staged rollout based on your biggest pain points. This reduces risk because you are streamlining your efforts in a way that supports your identified KPIs. You can also take advantage of new insights as you go to improve the tool and tweak your plan to maximize on successes and avoid potential problems.

It’s a common misconception that conversational AI is always a high-risk investment for organizations, but one that shouldn’t keep you from implementing your own chatbot or virtual agent. Being a risk-averse business is not a barrier to deploying a successful and valuable conversational AI project. These three steps can help you join other savvy companies in taking advantage of the proven, reliable benefits of this technology while minimizing your risk.

To make it easier for you and your organization to apply these three steps to your conversational AI approach, I’ve compiled them all into a single document which can be read, shared, and downloaded here: Conversational AI Doesn’t Have to be a Risky Investment.

Selecting the Right Conversational AI Vendor Makes All the Difference

By Chris Ezekiel, Founder & CEO

It’s been a tough year for every organisation and one that created a renewed, and often urgent, push for digital transformation projects. In their new ISG Provider Lens™ Intelligent Automation – Solutions and Services study, the experts at ISG found that the market for conversational AI has shown steady growth over that time. This and other intelligent automation technologies are helping enterprises optimise costs and productivity while also enabling them to stay prepared for the future.

With conversational AI now at the forefront of many digital experience strategies, ISG evaluated 19 vendors based on the depth of their service offerings and market presence. I’m very proud that Creative Virtual is a Leader in Conversational AI, surpassing all other vendors with our company’s competitive strengths! The analysts at ISG found Creative Virtual to be a Leader based on our comprehensive solution portfolio and industry experience, emphasizing our long history of developing and delivering conversational AI solutions that provide real results.

Conversational AI

When it comes to implementing conversational AI tools to support your customers, employees, and contact centre agents, selecting the right vendor makes all the difference. This doesn’t just mean the technology; you must also consider the experience and expertise of the vendor’s team. It is the combination of these two factors that will set your project up for success.

I’ve talked before about how much the virtual agent and chatbot space has changed since I founded Creative Virtual in late 2003. What hasn’t changed over that time is Creative Virtual’s commitment to delivering the best combination of innovative technology and expert consultation and guidance to our customers. We strive to become a trusted partner to each of our customers, getting to know their organisation and specific goals in order to deliver customised solutions. We also use these close relationships to gather input for our R&D roadmap to ensure we continue to innovate in a way that will help companies tackle their real challenges and deliver real results, now and in the future. This is what allows Creative Virtual to be a conversational AI Leader today.

In our Leader profile in this ISG report, the analysts note: “Creative Virtual is a well-known and established brand for AI-enabled client support”. In fact, our very first enterprise-level customer is still a customer today, working with us continuously for over 17 years now. We also have several other organisations that we’ve been able to count as customers for at least 10 years. That level of experience and long-term collaboration is rare among vendors in today’s crowded conversational AI market, but extremely valuable.

The ISG Provider Lens™ is a great resource for anyone involved with selecting a conversational AI vendor to begin a new project or replace an existing, poor performing one. It provides:

  • An overview of the Intelligent Automation Solutions and Services market
  • Comparisons of conversational AI provider strengths, challenges, and competitive differentiators
  • Analysis of Creative Virtual’s product capabilities, industry expertise, and strategic partnerships

You can download a copy of the ISG Provider Lens™ Quadrant Report here. Our team would also love to show you our technology in action, and you can request a personalised demo here.

Congratulations to the Creative Virtual team on our recognition as a conversational AI Leader in this independent ISG report!

Customer Service Week Musings: How does a machine know if it’s wrong?

By Laura Ludmany, Knowledgebase Engineer

There are many comparisons dealing with the main differences between humans and machines. One of the recurring points is that while humans have consciousness and morals, machines know only what they are programmed to know, and so cannot distinguish right from wrong unless they are given data on which to base those decisions. There have been many discussions on the self-awareness of robots, a topic as old as artificial intelligence itself, starting from Isaac Asimov’s three laws of robotics, continuing through the Turing test and on to today’s AI ethics organisations.

One thing is commonly agreed: bots need to be ‘taught’ morals, and there are two broad approaches to this, each with its advantages and disadvantages. The first uses a loose set of rules with plenty of room for flexibility; such a system can always reply to questions, but it can also produce many false positive cases and go wrong on many levels. The other means more rules and a narrower approach: the system can answer a limited number of queries, but with few or no false judgements.

What does this mean from the customer service and customer experience (CX) view and for virtual agents answering real-time customer queries? If we narrow down our conditions, bots would deliver the right answers most of the time. However, they could not recognise many simple questions, making users frustrated. The same can happen with the loose set of conditions: the assistant would easily deliver answers but could misinterpret inputs, resulting again in annoyance.

To solve this problem, we must use a hybrid approach: an AI tool can only be trained appropriately with real-life user inputs. While we can add our well-established set of rules based on previous data and set a vague network of conditions, the bot will learn day-by-day by discovering new ways of referring to the same products or queries through user interactions. Half of a virtual assistant’s strength is its database, containing these sets of rules. The other half lies within its analytics, which is an often-overlooked feature. What else could be better training for a CX tool than the customers leaving feedback at the very time an answer was delivered? Conversation surveys are not only important to measure the performance of the tool. They are also crucial for our virtual assistants to be able to learn what is wrong and what is right.

Our approach at Creative Virtual to reporting is to follow the trends of ever-changing user behaviour. We offer traditional surveys, which measure whether a specific answer was classified as helpful by the user and whether it saved a call. Sometimes the required action or transaction cannot be performed through self-service options and the customer must make a call, or the answer may have been overlooked and need updating – for these cases there is a designated comment section, so users can express themselves freely.

We all know from personal experience that we can’t always be bothered to fill out long or detailed surveys – we are on the go and just want to find the information we were looking for without spending extra time leaving feedback. This is typical user behaviour, and to accommodate it we came up with different options for our clients, such as star ratings and thumbs up and down, while keeping the free text box, to make rating simpler for users. The solutions deployed always depend on the requirements and preferences of our clients, which are in line with the nature of their business and their website design. For example, financial organisations usually go with the traditional options for their customer-facing self-service tools, but internal deployments often have more creative user feedback options.

What if, during a conversation, a virtual assistant delivered the correct answer to five questions, but two answers advised the user to call the customer contact centre and one answer was slightly outdated? Does this rate as an unsuccessful conversation, due to three unhelpful answers? To solve this dilemma, we have End of Conversation Surveys, which ask customers to rate the whole conversation on a scale of 1 to 10 and choose what they would have done without the virtual assistant. As always, there is a free text box for further communication from the customer to the organisation. These surveys show high satisfaction levels because they measure the overall success of the conversation, which can have some flaws (just as in human-to-human interactions) but still be rated pleasant and helpful.

Let’s take a step further – how can the virtual assistant learn whether it was right or wrong if the user takes up none of these surveys? Is this valuable data lost? Our Creative (Virtual) analytics team have levelled up their game and come up with a solution! During voice interactions, such as incoming calls to customer contact centres, there is a straightforward way to tell that a conversation wasn’t successful even if it isn’t stated explicitly: the tone might change or the same questions might be repeated. But how can we rate a written exchange with a customer? We have developed a dedicated platform that sits on top of the survey layers described above. It classifies the whole conversation using a carefully weighted multi-factor system, tailored to each client’s needs, with factors such as whether there was more than one transaction, whether the last customer input was recognised by the virtual assistant, whether negative user responses were recorded, and so on. The primary ‘hard’ indicators remain the user-filled surveys, so this is just icing on the cake, as our mature deployments show successful conversation rates of over 80%.
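A weighted multi-factor classification of this kind might look like the sketch below. The factor names, weights, and threshold are illustrative assumptions only; a real deployment would tailor them to each client, as described above.

```python
# Sketch: scoring a whole conversation from implicit signals when no survey
# was completed. Factor names, weights, and the threshold are illustrative
# assumptions; a real deployment would tailor them per client.

WEIGHTS = {
    "transaction_completed": 0.4,
    "last_input_recognised": 0.3,
    "no_negative_responses": 0.3,
}

def conversation_score(signals):
    """Weighted sum of boolean signals, between 0 and 1."""
    return sum(weight for name, weight in WEIGHTS.items() if signals.get(name))

def classify(signals, threshold=0.6):
    """Label a conversation from its implicit signals."""
    return "successful" if conversation_score(signals) >= threshold else "needs review"
```

Explicit survey responses would override this implicit score, keeping the user-filled surveys as the primary ‘hard’ indicators.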

With our proactive approach and multi-layer analytics tool sets, we can be sure that our virtual assistants will learn more and more about what is right and wrong, to increase the customer satisfaction level continuously. However, I think no machine will ever be able to answer all questions correctly, as this would mean that deployments have stopped being fed up-to-date real-life data. Our world is changing rapidly as are our user queries. These cannot be fully predicted ahead, just analysed and reacted to appropriately. As long as AI tools serve customer queries, they will always face unknown questions, hence they will never stop learning and rewriting their existing set of rules.

As we celebrate Customer Service Week this year, we need to recognise the role customers play in helping to teach our AI-powered chatbots and virtual assistants right from wrong and the experts that know how to gather, analyse and incorporate that data to help train those tools. Check out our special buyer’s guide that explains why experience matters for using this hybrid approach to create reliable and always learning bots.

Tips for Deploying AI Chatbots & Virtual Agents

By Chris Ezekiel, Founder & CEO

Chatbots, smart help, virtual assistants, virtual agents, conversational AI – there are lots of names for this automated, self-service technology being used today. Regardless of what you call it, the objective for including it as part of your customer service strategy is to deliver quick, easy access to information. How to select and deploy the right technology to do that for your organisation was the focus of the webinar I recently presented with Engage Customer.

In the webinar, Tips for Deploying AI Chatbots & Virtual Agents, I talked about some key questions to keep in mind when adding one of these solutions to your customer experience (CX) strategy. You want to ask: How can I ensure my chatbot or virtual agent is:

  • Providing accurate and personalised information?
  • Creating a positive and seamless experience?
  • Using artificial intelligence (AI) and machine learning in a reliable way?
  • Able to grow and expand with my digital strategy?

Deploying a solution that enables you to integrate with other systems and knowledge repositories is crucial to success. You want a solution that is backed by an orchestration platform that allows you to bring together all of the content sources and manage the natural language processing (NLP), intents and machine learning to keep the conversations flowing in a seamless, personalised way across customer touchpoints. You also want to be learning all the time from these conversations in such a way that the human content owner works alongside the machine learning component to provide the best possible customer experience.
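At its simplest, the orchestration described above maps a recognised intent to the right content source or backend integration, with a human fallback for anything unrecognised. The intent names and handlers below are illustrative assumptions, not part of any specific platform.

```python
# Sketch: a minimal orchestration layer routing a recognised intent to the
# right content source or backend, with a human fallback. Intent names and
# handlers are illustrative assumptions.

def kb_lookup(query):
    """Content source handler: answer from a knowledge base."""
    return f"Here is an article about: {query}"

def order_status(query):
    """Backend handler: would call an order-tracking API in practice."""
    return "Your order is on its way."

ROUTES = {
    "faq": kb_lookup,
    "order_status": order_status,
}

def handle(intent, query):
    handler = ROUTES.get(intent)
    if handler is None:
        return "Let me connect you with a live agent."  # human-in-the-loop fallback
    return handler(query)
```

In a production platform this routing layer is also where personalisation, channel formatting, and the machine-learning feedback loop plug in.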

I am a great believer that the best way to really understand the technology and why these questions are so important is to see the technology in action. During the webinar I shared a series of live demonstrations of solutions currently deployed for companies in the telecommunications, financial services and travel sectors. I selected these examples because they showcase how the technology is being used across a variety of customer touchpoints and with various integrations to deliver customised, seamless experiences.

To help companies get started with selecting a new chatbot or virtual agent – or finding a solution to replace a poor performing tool – I ended my presentation with four important tips:

  1. Work with an experienced vendor
  2. Select a reliable technology
  3. Look for flexible integration options
  4. Evaluate the orchestration platform

I went into detail on each of these tips and shared some specific questions to ask during the evaluation process to ensure you are deploying a technology that will work for your specific goals and use cases. It’s important to consider how the solution not only fits with your immediate plans but how it will evolve and grow with your company and strategy.

My thanks to Steve, Katie, Dominic and the entire team at Engage Customer for hosting this webinar! You can watch the webinar recording on-demand to see the demos and find out more about my tips and best practices for deploying AI chatbots and virtual agents.

If you want to learn more about Creative Virtual’s experience and technology or are interested in arranging an individual workshop, contact us here. The Creative Virtual team is ready to help you get started on your successful virtual agent strategy.