Welcome to the Creative Virtual blog! Here we cover all of the hot customer experience topics in mobile, social, web, IVR and so much more. We also share company announcements, details about upcoming events and give you a peek into what happens behind the scenes.

Our regular contributors have over 90 years of combined experience specifically with customer engagement, natural language virtual assistants and knowledge management. When you add in the experience of our guest authors and the other Creative Virtual team members who occasionally contribute, there is no better place to get expert insight.

And don’t forget to subscribe below to get all of our new posts delivered straight to your Inbox!

Talking Conversational AI

By Chris Ezekiel, Founder & CEO

In 2003, motivated by a vision of a time when “conversations powered by computers would transform the world”, I founded Creative Virtual. Over these past 20 years we have constantly been at the forefront of developing innovative conversational AI solutions that address real business issues and deliver real business value. Our V-Portal™ and V-Person™ products have won many awards, are rated highly by analyst houses, and have benefited businesses in 25 countries.

The Gluon release of our V-Person Technology, which we launched globally on 16 May, continues our history of innovation and bringing to market new powerful solutions that deliver real business value. It also inches us closer towards the reality of the vision that we dreamt about all those years ago.

A transformational product, Gluon has raised the bar and shows that it is possible to create responsible, enterprise-grade conversational AI solutions that put organisations in control, enable accurate content curation and sharing, and deliver accurate, consistent, real-time multi-channel customer and employee experiences.

The microservices architecture of Gluon gives organisations greater agility, workflow flexibility, faster integration and scalability, supporting the composable enterprise model that many companies now favour.

This architecture also provides stronger security, makes our system much easier to use, and enables organisations to deliver richer and more powerful customer and employee experiences through the seamless delivery of accurate, relevant content across multiple channels in real-time.

Creative Virtual’s experienced R&D team has been actively examining and testing large language models (LLMs) for many years. With Gluon, we have made LLM configuration easier, providing native support in V-Portal and enabling organisations to choose, with one click, both their preferred language model and deployment method, whether that be on-premise or in a public or private cloud.

That’s not all. Because of the team’s unparalleled knowledge and experience in conversational AI development, we can help our customers mitigate the potential data privacy and protection risks associated with LLMs through our innovative engineering and the measures, processes and systems we put in place, while still allowing them to reap the maximum benefit of these models.

The ground-breaking technological capabilities of the Gluon release, with its advanced features, functionality and redesigned user interface, are matched by the equally impressive business value it can deliver. The time, efficiency, productivity and cost savings are measurable, and greater personalisation capability enhances reputation through better customer and employee experiences.

Operationally, the Gluon release simply becomes part and parcel of an organisation’s technology ecosystem, integrating seamlessly into its current tech landscape and enabling improved utilisation of current assets.

The Gluon release of our V-Person technology marked a momentous day in our history. It was also a day that advanced conversational AI solutions that deliver real business value. With the continued growth of computational power, technological innovation and human ingenuity, the time when “conversations powered by computers are transforming the world” is now, and it will continue to be tomorrow and into the future.

Whilst we are constantly driven by our vision, which, excitingly, we continue to see unfold before us, we are equally motivated by our belief in “The Science of Conversation”, which has been our strapline since day one.

The ‘science of conversation’ is at the heart of our Gluon release, blending, in a seamless way, the art of conversation and the rigour of science and engineering to herald a new era in conversational AI.

Please contact us if you would like a demonstration of our Gluon release.

  • APAC demonstration
  • EMEA demonstration
  • US demonstration

Human Hallucinations

By Chris Ezekiel, Founder & CEO

There are so many organisations having their say on ChatGPT, but not everything that is being said should be taken at face value.

Companies wondering how they will be able to best leverage large language models (LLMs) – whether that be GPT, LaMDA, Galactica, Megatron-Turing or any other model – need to ensure they separate fact, fiction and fantasy.

We’ve been innovating in the conversational AI space for nearly two decades, and as technology continues to advance, we are constantly asking ourselves serious questions about how it may disrupt, or is already disrupting, the value we can add. This drives us to innovate, so that our customers can maximise real business value from our solutions.

The hot topic of LLMs is not new to us. Our team has been exploring and investigating them for about two and a half years now.

One of the enterprise-level features of the Gluon release of our V-Person™ technology, which we launched on 16 May, is native support for LLMs. Like all technology, the value lies in how it is used, and this is also the case with LLMs.

An example of how we are using LLMs is theme matching. This has come about following many conversations with our customers and partners and listening to their concerns, challenges and requirements. This use of LLMs is value-adding, ensuring accuracy of responses in real time at significantly lower cost and with greatly reduced time and effort compared with alternative methods.
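
To make that concrete, here is a minimal sketch of one way theme matching can be done with an LLM: embed a set of curated themes once, embed each incoming question, and pick the closest theme by cosine similarity. The themes, model choice and threshold below are purely illustrative and not a description of Creative Virtual’s actual implementation.

```python
# Minimal sketch: matching an incoming question to a curated theme via embeddings.
# Theme names, model choice and threshold are illustrative only.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

themes = ["Reset my online banking password",
          "Report a lost or stolen card",
          "Open a new savings account"]
theme_vecs = embed(themes)

def match_theme(question, threshold=0.80):
    q = embed([question])[0]
    sims = theme_vecs @ q / (np.linalg.norm(theme_vecs, axis=1) * np.linalg.norm(q))
    best = int(np.argmax(sims))
    # Fall back to a human-curated default when nothing is close enough.
    return themes[best] if sims[best] >= threshold else None

print(match_theme("I can't log in to my account any more"))
```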

When going down the LLM route, the questions companies need to be asking are:

  • What is the conversational AI provider able to deliver, that can’t simply be done by using LLMs such as ChatGPT directly?
  • What value is the conversational AI provider adding to your business through real customer use cases they are showing?
  • Can the conversational AI provider back up and demonstrate what they are saying?
  • What data privacy, security and sovereignty safeguards does the provider have in place?

The reason these questions must be asked is because simply using ChatGPT to do something doesn’t mean it adds any value. Companies need to look for a provider who delivers conversational AI solutions that address real business challenges and are designed to deliver real business value.

Let me give you an example of what I saw recently on a webinar. Presenting an example of a transactional chatbot, the company demonstrated how easy it is to build a currency converter using ChatGPT.  Good so far.  But:

  1. How many of us have been using Google to do that for as long as we can remember? And,
  2. This is not a transactional chatbot, and to call it one is disingenuous.

A transactional chatbot that adds value to a business would be one that, for example, would be able to add money to your travel card. Having a chatbot able to execute the task deflects this common transaction away from the contact centre. This is of value to a business as it allows them to free up agents to deal with more complicated customer issues that require a live agent.
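
To illustrate the difference, here is a rough, hypothetical sketch of what sits behind a genuinely transactional step like that travel card top-up. The endpoint, payload fields and messages are placeholders invented for illustration, not a real API.

```python
# Hypothetical sketch of a transactional chatbot step: topping up a travel card.
# The endpoint, payload fields and authentication are placeholders, not a real API.
import requests

def top_up_travel_card(user_token: str, card_id: str, amount_gbp: float) -> str:
    resp = requests.post(
        "https://api.example-transit.com/v1/cards/top-up",   # placeholder endpoint
        json={"card_id": card_id, "amount": amount_gbp, "currency": "GBP"},
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=10,
    )
    if resp.ok:
        return f"Done! £{amount_gbp:.2f} has been added to your travel card."
    # On failure the bot hands the conversation to a live agent instead of guessing.
    return "I couldn't complete that top-up, so I'm connecting you to an agent."
```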

Companies should also feel confident to challenge their conversational AI provider to substantiate what they are saying. We hear and read a lot about responsible AI, but you need to insist that your provider can show you exactly how they are going to deliver it, the measures they have embedded and the processes they are following. Responsible AI starts with responsible humans.

Similarly, companies should be confident that their provider is fully versed in data protection, privacy and sovereignty, and understands the language model infrastructure and how it uses data. Organisations must ask what is happening to the data and, if queries put to ChatGPT are being processed on overseas infrastructure, how this affects data sovereignty and other data protection issues.

The pace at which LLMs are advancing means that the best conversational AI provider is one that is extremely agile in terms of product development. Recently we added the capability to deploy language models locally, both on-premise and in a private cloud, to alleviate the concerns of some clients about sending data to globally deployed models.

Our approach to product development has always been in collaboration with our customers and partners. This ensures that our innovation is driven by solving real business challenges and enables us to deliver real business value.

Unfortunately, there are a lot of unsubstantiated claims about ChatGPT, LLMs and AI but with the right provider, conversational AI is a great tool for businesses looking to deliver better customer and employee experiences.

The excitement about ChatGPT is causing human hallucinations, so ask those questions of your conversational AI provider.

The changing face of chatbots

By Maria Ward, Knowledgebase Engineer and Account Manager

We would all do well to heed the warning “with great power comes great responsibility” when it comes to ChatGPT. Today, organisations of all sizes are making use of AI chatbots, voicebots and virtual assistants to great effect in delivering better customer and employee experiences. Consumers the world over have grown used to interacting with these ‘human-imitation’ tools to answer their questions, help them complete tasks, and guide them to resolve issues.

The face of AI chatbots, voicebots and virtual assistants is continually changing as technology advances, customer and employee demands and expectations increase, and how we communicate evolves.

Right now, there’s a lot of hype about the capabilities of ChatGPT and the large language models (LLMs), such as GPT-3 and GPT-4, that power it. This is not surprising given the far-reaching impact the technology can have, and is having, across industries, professions and social development. People from all walks of life, from writers and developers to traders and architects, are saving time and improving efficiency by using these AI tools to produce content.

The generative abilities of ChatGPT are astonishing to observe. The capability of AI has developed at a rapid pace over recent years, and we are at an inflection point where things are only going to develop at an ever-increasing pace! The power of the latest LLMs is undeniable. How this power is used practically in the design and implementation of conversational AI solutions will either add to or detract from the customer and employee experience. This is where businesses need to ensure they are speaking with experts in conversational AI.

ChatGPT in business settings

You can ask ChatGPT to give answers based on company documents, websites, and other sources of information. When a question is asked and the answer is available from the provided information source, the results are fairly reliable (but not 100%) at giving the correct information. However, it is a recognised risk that ChatGPT will “hallucinate”, and do so very convincingly, when it cannot find the information in the source material. This could cause significant reputational and even financial damage to a business.
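
One common way to reduce that risk is to pass only approved source material into the prompt and explicitly instruct the model to decline when the answer is not there. The sketch below shows the idea using the OpenAI chat API; the source snippet and model choice are illustrative only, and this is not a complete retrieval pipeline.

```python
# Minimal sketch: answer only from supplied source text, otherwise decline.
# Source snippet and model are illustrative; this is not a complete retrieval pipeline.
from openai import OpenAI

client = OpenAI()

SOURCE = """Our store is open Monday to Saturday, 9am to 6pm.
Returns are accepted within 30 days with a receipt."""

def answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer ONLY using the provided source text. "
                        "If the answer is not in the source, reply exactly: "
                        "'I don't have that information.'"},
            {"role": "user", "content": f"Source:\n{SOURCE}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content

print(answer("Can I return an item after six weeks?"))
```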

So where does that leave organisations that are about to embark on their conversational AI journey and those who have existing solutions?  How do they ensure that their chatbot is powered by the latest technology and accurately represents their company?

Most organisations will not want to risk wrong information being provided to customers, however small that risk is.  Indeed, often it will be essential for public facing content to be signed off by legal departments.

Knowledge Management is king

The bottom line is that content is key when it comes to providing the best conversational AI experience. Managing the information that ‘feeds’ the chatbot is paramount for delivering the best experience to customers and employees.  It is, after all, this source information that enables accuracy of conversations.

The knowledge management system of Creative Virtual’s V-Portal™ platform allows businesses to embrace the power of LLMs. And they can do so with the confidence that ultimate responsibility for the content is in the hands of humans. Putting humans in control ensures that no hallucinated (or otherwise incorrect) answers are provided. Businesses remain in charge of what information their chatbot can and does share.

Knowledge Management System in V-Portal

V-Portal is a platform where you can manage answers for all channels, such as web, voice, social media, and agent assist, as well as enable language capabilities, and include personalised information.  The platform supports LLMs, including native support for GPT, which enables businesses to deliver greater personalisation.

For example, using ChatGPT within the V-Portal platform you could enable a single signed-off answer to be modified in length, tone of voice and so on to suit the needs of the customer, based on the channel they are using. Integration with CRM systems provides knowledge about that customer to deliver an even more appropriate personalised response. This would be a minimal-risk use of ChatGPT that would enable organisations to deliver a new level of personalisation.
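
As a rough illustration of that idea, a single signed-off answer could be rephrased per channel with a tightly constrained prompt. The channel styles and prompt wording below are assumptions for the sake of example, not a description of how V-Portal implements this.

```python
# Rough illustration: rephrase one signed-off answer per channel without adding facts.
# Channel styles and prompt wording are assumptions; not how V-Portal is implemented.
from openai import OpenAI

client = OpenAI()

APPROVED_ANSWER = ("You can change your delivery address up to 24 hours before "
                   "dispatch from the 'My Orders' page.")

STYLES = {
    "voice": "one short spoken sentence, friendly tone",
    "whatsapp": "two short sentences, informal, may use one emoji",
    "web": "keep as written, formal tone",
}

def render(channel: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Rewrite the approved answer in the requested style. "
                        "Do not add, remove or change any facts."},
            {"role": "user",
             "content": f"Style: {STYLES[channel]}\nApproved answer: {APPROVED_ANSWER}"},
        ],
        temperature=0.3,
    )
    return resp.choices[0].message.content

print(render("whatsapp"))
```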

Content creators can be assigned specific permission levels according to their needs by granting access to only the content relevant to them. Tasks can be set against all content types giving you the power to set actions and assign these to specific users, for example for content review.

Workflows and approvals are also an integral part of the platform, allowing control over what content makes the final cut. And, with a full version history you always have visibility of what was changed and when.

A fundamental part of the platform is flow design.  This allows you to build processes to hand-hold customers through the decisions they need to make to complete their journey.

Thinking about it, V-Portal serves as the engine that brings together the great power of AI and the human intervention and judgement needed to ensure that great responsibility is maintained.

There’s more to GPT than the ChatGPT headlines

By Chris Ezekiel, Founder and CEO

The attention on AI over the past couple of months has been quite astounding, especially considering that it has been part of our everyday lives for so long already. Face ID to unlock our mobile phones – AI. Social media – AI. Voice assistants like Siri and Alexa – AI. Route mapping – AI. Its attractiveness and almost universal application are undeniable, with global spending on AI-centric systems expected to reach $154 billion this year according to IDC.

Used extensively across every industry, delivering huge benefits and rewards, AI has been transforming businesses and the workplace for decades. In healthcare it is used in radiology results analysis and robotic-assisted surgery, in banking it is the core of fraud detection, and in retail and e-commerce it is behind personalisation, integral to effective inventory management, and powers customer service chatbots.

AI used in education is enabling personalised learning and in transportation it has made driverless vehicles possible.  Businesses of all sizes are using it to deliver better customer and employee experiences with more human-like chat and voice bots, to strengthen cybersecurity, and to build more effective workflows and operations.  And these are just a few of the myriad of applications.

At Creative Virtual we have been at the forefront of helping businesses build better customer and employee experiences with innovative conversational AI solutions for nearly 20 years. Since inception we have always ensured that we practice ethical technology. We see this as a fundamental responsibility that we take seriously, and it is something that our customers expect.

There has always been adoption hesitancy when it comes to new technologies. Issues such as privacy, surveillance, behavioural manipulation, fake news, a changing labour force and job erosion, and bias are just some of the topics that come up in impact analyses of advanced technologies.

Whilst practising ethical tech comes naturally to Creative Virtual, we are not blind to the fact that there are bad actors exploiting technology, and that there are many questions about the wisdom of concentrating power among a small elite of tech giants with no regulatory oversight.

It is critical that the business models of companies in a position of power have robust systems in place that take account of legislative and ethical responsibilities relating to the privacy, security, discrimination and misinformation and other issues of today.

Technology has always moved faster than legislation and regulation; most recently, for example, we have seen this with Uber. This is also the case in relation to AI, with questions being raised on whether self-regulation is strong enough to safeguard the rights of individuals, protect, promote and support cultural diversity, halt the spread of mis/false information, and ensure adherence to data and privacy legislation.

The continual questioning and interrogation of the social and economic impacts of new technology must happen concurrently with tech advancement and progress, with one not stopping the other.

It seems, from the media coverage at least, that ChatGPT took the world by surprise when it burst onto the scene back in November 2022.  Almost overnight, the world became obsessively captivated by AI.

The focus by mainstream media on AI technology might be new, triggered by ChatGPT, but the technology itself has been around (admittedly not as powerful or impressive – but technology is always evolving and improving) and used by businesses for quite a long time.

Nevertheless, it is being seen as ‘new’ in terms of massification and consumerisation which has led to a lot of AI hype filling print, digital and broadcast media platforms.  It has been simultaneously sensationalised, in some cases demonised, and satirised.

Most recently, it has also been politicised. We have all read that countries such as Russia, China, North Korea, Cuba, Syria, Iran and Italy have banned ChatGPT. It should be noted that Italy is a bit of a different case to the other countries. The Italian government banned the application over privacy and data concerns, stating that “the ChatGPT robot is not respecting the legislation on personal data and does not have a system to verify the age of minor users.”

Russia, China and the others have banned ChatGPT because they either have strict rules against the use of foreign websites and applications, outright restrictions on use of the internet itself, or strict censorship regulations.

ChatGPT is also on the ‘watch-list’ of several countries. France, Germany, Ireland and the UK have indicated that they will be monitoring the use of the application closely for “non-compliance with data privacy laws”, and they have also raised concerns about algorithmic bias and discrimination.

It is not only governments that are questioning AI; a moratorium on the development of AI has been proposed by tech celebrities Elon Musk and Steve Wozniak. They have put their signatures to an open letter, along with prominent and respected AI researchers such as Yoshua Bengio, Stuart Russell and Gary Marcus, asking for “… all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Of course, the concerns voiced by governments, individuals and the industry, are valid.  They are centre stage in the ongoing debate of the ethics-morality-technology-societal quadrangle. This discussion should continue. However, proposing a pause or ban on technological advancement is not a sensible or necessary response.

As previously mentioned, Creative Virtual’s approach when building and deploying conversational AI solutions is from a position of responsibility, deploying tech in an ethical and moral way. This includes understanding the intended outcomes of the conversational AI technology solutions we build and deploy, and to debate potential unintended outcomes so we can mitigate these.

For example, it is known that GPT-3, GPT-3.5 and even the latest model, GPT-4, are not fully reliable (humans are not reliable all the time either!) and there is no guarantee of 100% accuracy.

Yes, GPT-4 is more accurate than previous versions, but it does still “hallucinate” and can give inaccurate information and harmful advice. Businesses must be able to mitigate the risks this poses to avoid financial and reputational damage and be in control of the information their company is sharing.

To ensure organisations are in control, Creative Virtual supports large language models (LLMs) including the latest versions of GPT, but we remain uncompromising on not transferring total authority to machines.

Our conversational AI solutions provide a signature blend of AI and rules-based natural language processing (NLP), with the AI component compatible with workflow functionality to allow for customisable configuration options. It also means that our systems improve continuously in a reliable way that meets the needs of an organisation.

At the same time, natural language rules can still be used to enable control over responses in instances when AI answers are insufficient. Our blended approach ensures accuracy, enables the resolution of content clashes, and delivers very precise replies when needed.
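
A toy sketch of how such a blend might be wired: try the statistical matcher first, and fall back to explicit, hand-written rules when its confidence is low. The patterns, placeholder scores and threshold below are illustrative only, not Creative Virtual’s engine.

```python
# Toy sketch of a blended matcher: an AI/statistical match first, curated rules as fallback.
# The patterns, placeholder scores and threshold are illustrative only.
import re

RULES = [
    (re.compile(r"\b(cancel|close)\b.*\baccount\b", re.I), "cancel_account"),
    (re.compile(r"\breset\b.*\bpassword\b", re.I), "reset_password"),
]

def ai_match(utterance: str):
    """Stand-in for a model-based matcher (e.g. embedding similarity); returns (intent, confidence)."""
    return None, 0.0  # placeholder: pretend the AI component found nothing confident

def match_intent(utterance: str, threshold: float = 0.75) -> str:
    intent, confidence = ai_match(utterance)
    if intent and confidence >= threshold:
        return intent                          # AI answer is good enough
    for pattern, rule_intent in RULES:         # precise, hand-written fallback rules
        if pattern.search(utterance):
            return rule_intent
    return "handover_to_agent"                 # neither AI nor rules matched

print(match_intent("Please reset my password"))  # -> reset_password via the rule fallback
```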

This level of enterprise-grade functionality differentiates Creative Virtual’s conversational AI platform from all others on the market today. Providing this high degree of control over the AI is critical for businesses. Organisations can be confident in the accuracy of what they communicate to customers and employees because we enable human judgement to be applied to the information created by AI.

Recent commentary about ChatGPT has highlighted examples of its imperfections as well as potential immediate and longer-term social implications. At Creative Virtual we make it possible for our customers to mitigate these risks whilst still enjoying the business benefits of large language models, specifically GPT-3.5 and GPT-4 today.

Using our V-Person technology, real business concerns regarding the security, data, privacy, and accuracy aspects related to information sharing are moderated, and organisations retain full control over AI output.

We are already working with customers and introducing LLMs as part of their conversational AI solution for tasks to deliver better customer, employee and contact centre agent experiences.  After identifying specific use cases we are piloting a number of GPT capabilities that are changing the playing field, including vector matching, summarisation, text generation, translation, clustering/analysis, Q&A preparation, and using generative AI.

Implementing LLMs requires experience and expertise, especially given the rate at which AI is developing.  In conversational AI, knowledge management is critical. Creative Virtual’s orchestration platform – V-Portal – supports LLMs, enabling businesses to maximise the benefits of the latest technology and can do so safely, securely and with confidence.

Our V-Portal platform combines knowledge management with workflow management and user management, supports multiple versions of answers for a single theme which gives granular control over the responses given, and allows for optimisation for individual channels.  It also has the capability to manage multi-lingual solutions within a single knowledgebase.

Organisations using our V-Portal platform have options for presenting users with a specific response based on a variety of criteria, including channel, authenticated user profile or selected language.  The platform also supports the use of rich media such as diagrams, images and videos in addition to text and hyperlinks within answers.

The flexible architecture of V-Portal enables seamless integration of our conversational AI solutions into existing processes and technology infrastructure, ensuring business continuity. And, as a cloud-based solution, upgrading to take advantage of the newest technologies and stay ahead of the competition could not be easier.

AI will continue to capture headlines for many years to come.  The good, bad and ugly will be debated.  How society, businesses and individuals choose to use AI is a big part of the positive impacts it will have.

As a business tool, conversational AI solutions powered by the latest in AI technology advances can supercharge employee, customer and contact centre agent experiences, whilst also delivering cost and efficiency savings, and improving productivity.

It’s all about having the right conversational AI partner who understands the technology, business challenges and responsibilities, and can build and deploy solutions that meet the real needs of business.

Contact us to find out more on how LLMs can help you deliver better employee, customer and contact centre agent experiences.

Multi-Channel is very 2010-ish – Bots are multi-purpose these days.

By Björn Gülsdorff

In the annals of KAUST (King Abdullah University of Science and Technology), the 21st of May 2020 is marked as the day of the soft launch of VITA (called KAI back then), the university’s VA (Virtual Assistant), knowledgeable in many things IT-related.

VITA asks for a username or KAUST ID at the beginning, but lets you chat on even without one. It connects to an Active Directory (AD) to pull more user information and helps with all kinds of IT issues. When things get serious, VITA creates a ticket on the user’s behalf – but only if the user authenticates using the university’s login tools. When things go wrong, VITA hands over to the live chat agents, using Creative Virtual’s very own Livechat.
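
The escalation logic described above can be sketched roughly as follows; all the helper functions are stubs standing in for the real knowledgebase, Active Directory, ticketing and live chat integrations.

```python
# Rough sketch of VITA's escalation flow as described above; all helpers are stubs.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    is_authenticated: bool

def resolve_from_knowledgebase(issue: str) -> bool:
    return False  # stub: pretend the knowledgebase had no answer

def lookup_active_directory(user: User) -> dict:
    return {"name": user.name, "department": "unknown"}  # stub AD lookup

def create_ticket(profile: dict, issue: str) -> None:
    print(f"Ticket created for {profile['name']}: {issue}")

def hand_over_to_live_chat(user: User, issue: str) -> None:
    print(f"Handing {user.name} to a live agent about: {issue}")

def handle_it_issue(user: User, issue: str) -> None:
    if resolve_from_knowledgebase(issue):
        return                                # answered by self-service
    if user.is_authenticated:                 # authenticated via the university's login
        create_ticket(lookup_active_directory(user), issue)
    else:
        hand_over_to_live_chat(user, issue)

handle_it_issue(User("student", is_authenticated=True), "VPN keeps disconnecting")
```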

With on-the-fly login, AD integration, ticket creation and Livechat handover, VITA ticked the boxes and had very successful conversations. We then added a full-page version and a PWA (progressive web app) so that people could bookmark the link to VITA on their devices. And deep links into the knowledge base allowed the service agents to send links to VITA’s answers in emails. As a natural extension, VITA was made available on Facebook Messenger. To improve CX, the UI was connected to KAUST’s face recognition system: a user can now identify themselves using their camera to take a picture, which VITA sends to the face recognition software.

While these extensions were delivered, work had started for another department: Facilities Management (FM). In May 2021, Mr. FIX’T was launched, supporting on-site residents with a variety of FM matters, from air conditioning to kitchen appliances to waste management.

During the FM project, we built on the existing Active Directory integration and added WhatsApp as a channel for both IT and FM, included document upload, integrated with another back-end ticket system, and created a dedicated queue for FM on our Livechat back end. We also enabled live chat agents to use a personal image in the chat UI. The new content was added to the same installation, but separated from IT using V-Portal’s built-in “Business Area” concept, whilst V-Portal’s “Channel” support is used to manage the now four different delivery channels (Web, PWA/Fullscreen, WhatsApp, Facebook).

A short while after the launch of the FM solution, we were approached by the library department. Their VA, called Labib, went live on the library website at the end of February 2023. Whilst it is mainly an FAQ machine and converses only on the web for the time being, it comes with the specialty of escalating to the library’s own live chat rather than ours.

A recent trip to CCW 2023, Berlin

By Björn Gülsdorff

CCW is the “international conference and trade show for innovative customer dialogue”, which is a bit of a bulky title, but much better than the original “CallCenterWorld”, which is no longer a good fit. Unfortunately, it is now easily confused with ContactCenterWorld, also called CCW. But I managed to get to the right event to support our German partner SOGEDES in matters related to conversational AI.

One very non-technical but nonetheless important takeaway from CCW is that real life ain’t dead. CCW sported a hybrid concept, with all talks and product presentations available through live streams (and recordings after the event). And still, many people came to Berlin, and I think that nearly everyone who had come was ready for business. The quality of the conversations at the SOGEDES booth was outstanding.

Unsurprisingly, ChatGPT was a big topic. However, although it was hot in talks and booth-side chat, it was evident that few real applications had been built yet, beyond the obvious use case of summarising texts. This was not only due to the short timeline between the launch of ChatGPT and CCW; what I heard many people saying was that ChatGPT is not ready for customer dialogue. This is correct of course (GPT-4 was not out then), but I was still very surprised by how educated and relaxed people were about it.

I had expected a buzz similar to the machine learning hype some years back. At the same time, ‘AI’ has become a commodity. Everyone has it – because everything is now called AI. Similarly, but to a lesser extent, conversational AI has lost its leading-edge appeal and simply become something that’s used for automated conversations, i.e. chatbots – though that word has fallen from grace.

The top two topics – Agent Assist and Voice Bots

After years (decades?) of promoting digital self-service, voice is still a strong channel, and companies are now looking to automate calls rather than avoid doing so. The handover/routing of calls from bots to agents is an important ingredient in the requirements.

In a way, organisations are now looking for something that Creative Virtual has been suggesting and recommending since its founding. That is, virtual agents are a part of an overall conversational strategy and there should be collaboration between human and silicon agents.

We have been delivering solutions that enable human/digital agent collaboration for decades. In fact, “agent assist” is the name of some of the internal projects we deliver to our customers that, well, assist agents so they can provide a better service. Our recent work with Smart and AudioCodes is also testament to our credentials as leaders in conversational AI, and the best choice for customers.

It is worth saying, however, that this new focus on voice brings with it very simplified conversations, or at least very rigidly structured ones – something that is no longer prevalent in text. You can’t get very complex in a phone conversation: you can’t show images to let users choose the right product, and you can’t play a video to explain something. To deliver the experiences customers expect, organisations do need to ensure there is integration, with seamless interfaces for these simple flows.

The Overlooked Feature of GPT: Vectorisation

By Olaf Voß, Lead Application Designer

These days everyone is stunned by the generative power of the GPT models, including myself. However, today I want to discuss a GPT feature that is largely overlooked in media coverage: the embeddings endpoint in the OpenAI API. This feature ‘translates’ any text into a 1536-dimensional numerical vector. Personally, I prefer to use the term ‘vectorisation’ instead of ‘embedding’.

The idea to turn individual words into vectors is about 10 years old now. These word vectorisations were trained on large corpora – or what we thought were large corpora 10 years ago. They are useful because the way words are distributed in the vector space represents relationships between them. The most famous expression of this concept is probably the equation king – man = queen – woman, which holds approximately true when creating word vectors with tools like GloVe or Word2Vec. These word vectorisations have been in use ever since, forming the basis for much of the progress in machine learning for language-related tasks.
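
If you want to see this for yourself, the classic analogy can be reproduced with pre-trained GloVe vectors via the gensim library. The model name below is one of gensim’s downloadable pre-trained sets; the first run downloads it, which takes a little while.

```python
# Reproducing the classic word-vector analogy with pre-trained GloVe vectors.
# Requires the gensim package; the first call downloads the model.
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")   # pre-trained 100-dimensional word vectors
result = glove.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)   # typically [('queen', ...)] - i.e. king - man + woman is close to queen
```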

Now, GPT offers the same level of analysis for entire texts. It’s not the first model that can do this beyond single words, but its high quality and affordability make it highly attractive. If you experiment with it, you can quickly see how useful it may be. For example, a question and its answer are matched to fairly similar vectors. Additionally, a text in one language and its translation into another will be mapped to close-by vectors.

This vectorisation can be used for anything that can be done with vectors. Text comparison and thereby text search is one obvious use case. We at Creative Virtual will be using it this way in our upcoming Gluon release for intent matching – still keeping rules as the fallback option if and when needed. Another way we are already using it is for text clustering. Finally, you could use the text vectors as the input layer for a neural network and train it for whatever task you want, thereby ‘inheriting’ many of GPT’s text understanding capabilities.
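
As a small example of the clustering use case, the sketch below embeds a handful of customer utterances with the embeddings endpoint and groups them with k-means; the model choice and cluster count are illustrative only.

```python
# Minimal sketch: clustering customer utterances by meaning via the embeddings endpoint.
# Model choice and cluster count are illustrative.
import numpy as np
from openai import OpenAI
from sklearn.cluster import KMeans

client = OpenAI()

utterances = [
    "I forgot my password",
    "How do I reset my login details?",
    "Where is my parcel?",
    "My delivery hasn't arrived yet",
]

resp = client.embeddings.create(model="text-embedding-ada-002", input=utterances)
vectors = np.array([d.embedding for d in resp.data])   # one 1536-dimensional vector per text

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for text, label in zip(utterances, labels):
    print(label, text)   # password questions and delivery questions fall into separate clusters
```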

So, if you have access to the OpenAI API and if you are running out of ideas for what to do with the chat endpoint, give the embeddings endpoint a chance. Vectorise away!

An Experience Designed for Your Customers

By Chris Ezekiel, Founder & CEO

Every month I write a column for Wharf Life, a fortnightly publication that’s available for free around Canary Wharf, Docklands, and east London as well as in an E-Edition online. In this column, titled Virtual Viewpoint, I share my thoughts on a variety of technology-related topics as well as my recent experiences. My latest Virtual Viewpoint was about grocery shopping at the local Canary Wharf Waitrose, something I’ve done regularly – and without much fanfare – since the store opened over 20 years ago.

For all those years I was always baffled by the wine bar in the middle of the food shopping court. On many occasions, I would wonder why on earth people would want to combine the drudgery of picking up their spuds and toothpaste with a glass of wine. I was especially perplexed considering the abundance of lovely bars in such close proximity. Yet, it’s always appeared consistently busy and continued to be a fixture of the store despite the updates and renovations over the years.

The penny finally dropped the other day after my wife and newborn son arrived home from the hospital, and we were suddenly thrust into the two-under-two club. A few days into the lovely chaos of both chasing a toddler and caring for an infant, and suddenly a trip to Waitrose felt quite exciting!

With a spring in my step, I put on my noise cancelling headphones with some relaxing music and the experience felt completely different to normal – more like a trip to a health spa. Oblivious to the mundanities of my shopping trip, I suddenly stumbled upon the bar. And that’s when the light bulb moment happened. I was in my personal spa, and the idea of a glass of champagne amid the shoppers made perfect sense.

Every company has a target customer base, and it’s no secret that designing your experience for your target customer is good for business. Yet within that target group there are a variety of subgroups that have different needs and expectations. Those variations could be based on age, location, access to technology – even number of children! It’s important that you consider these differences when making customer experience (CX) decisions.

As the leader of a conversational AI company, it shouldn’t come as a surprise that I am a huge supporter of implementing technology to improve your CX. But I am also a huge supporter of keeping the human at the centre of all CX decisions. While Creative Virtual’s V-Person™ solutions successfully automate digital customer service, forcing all customers to only self-serve with a virtual agent is never a good CX decision.

This is just one reason why it’s key to consult with experts on your CX strategy. This group of experts should include those experienced with your industry, with the technology, and with your identified use cases. It should also include your customers – the true experts on their experiences with your organisation, products, services, and employees.

Go beyond the typical customer surveys and dig into rich data like conversations with your virtual agent and contact centre. See what customers are actually asking, where they are experiencing pain points, and what they really love. Look for ways you can make experiences easier and more personalised. And be sure to take into consideration the varying preferences and needs of your whole customer base – get insights from both the carefree dad of one young child and the sleep-deprived parent of two-under-two!

If you’re local to Canary Wharf and want to discuss customising your CX with conversational AI, I’d be happy to meet you at Waitrose for a glass of wine and a chat. You can also arrange a session with one of Creative Virtual’s experts around the world by contacting us here.

A great resource if you’re evaluating different conversational AI solutions is the 2023 Chatbot Buyer’s Guide. It includes a technology comparison chart that can help you determine the specific functionality you’ll need to design the right experience for your customers.

A More Personal Personalized CX

By Mandy Reed, Global Head of Marketing

The start of a new year always comes with a slew of business predictions from experts, and 2023 has certainly been no different. I think we have learned to take these predictions with a grain of salt, considering them as we plan for the future but not necessarily fully embracing all of them. This is particularly true after the upheaval and uncertainty the world has experienced over the past few years.

Sometimes though we come across a prediction that feels very on the money. This year, one of the customer experience (CX) predictions for 2023 falling into that category is that ‘Personalization will get More Personal’. When you consider recent CX research indicates a personalized experience is important for customers in their purchasing decisions, it makes sense that businesses will work towards this in order to improve customer satisfaction. But what does it mean to make personalization more personal?

I like customer service expert Shep Hyken’s take on this in his blog post, Don’t Just Personalize the Customer’s Experience – Individualize It. In a nutshell, individualizing the experience is taking standard personalization one step further in order to create a unique – but not creepy! – engagement with the customer. This can be applied to marketing, sales, and customer support.

Personalizing the customer’s experience can mean a lot of different things to different companies and in different scenarios. I was reminded of this last week when I received an email from Amazon with a product recommendation. The recommendation was based on a recent product search and comparison I had done while logged into my account, so in that way it was personalized. However, I had ended my search by making a purchase, so it didn’t feel very personalized to have Amazon suggesting I buy a product I had just purchased a day or two before.

Sometimes those of us in marketing and sales are guilty of spinning the truth a little bit. Would I market that Amazon experience as personalized? Absolutely!

But was it really individualized to the true customer journey? Maybe not so much. . .

Back in 2013, V-Person™ technology became the first virtual agent offering to implement personalization – a recognition acknowledged both by the Patricia Seybold Group in their 2014 technology review and by the analysts at Gartner when they named Creative Virtual a 2015 Cool Vendor in Smart Machines.

For nearly a decade now, we’ve talked about our virtual agents and chatbots delivering a personalized self-service experience – with no marketing spin required! In this context, a personalized experience has always also been an individualized experience. It’s not just personalized because the user is greeted by name or is presented with relevant information based on the webpage from which the chatbot was launched. Users are given responses that are specific for them based on their account, subscriptions, invoices, member status, purchase history, location – the list goes on and on.

Over the years both our conversational AI platform and the technology that enables this type of personalization have improved, opening up additional possibilities and creating more seamless user experiences. The key to delivering a truly individualized experience through a conversational AI tool is integration. Without flexible integration options, you’ll never be able to make your personalization more personal.
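
As a rough illustration of what that integration enables, a chatbot might pull account details from a back-end system such as a CRM before composing its reply. The lookup and field names below are hypothetical.

```python
# Rough illustration: tailoring one answer using account data pulled from a CRM.
# The CRM lookup and field names are hypothetical.
def get_crm_profile(customer_id: str) -> dict:
    # Stub standing in for a real CRM integration.
    return {"first_name": "Sam", "plan": "Premium", "renewal_date": "2023-09-01"}

def personalised_renewal_answer(customer_id: str) -> str:
    profile = get_crm_profile(customer_id)
    return (f"Hi {profile['first_name']}, your {profile['plan']} plan renews on "
            f"{profile['renewal_date']}. Would you like to review your options?")

print(personalised_renewal_answer("CUST-001"))
```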

Today personalized virtual agents are delivering more value to organizations than non-personalized solutions. They increase digital containment and improve user satisfaction by delivering comprehensive, tailored self-service experiences. They can also help individualize human-supported channels when deployed internally as agent assist tools.

It’s clear that technology is an important tool in creating more personalized customer experiences, whether that be on digital channels or in-person. When it comes to using technology to take those experiences from simple personalization to being individualized, the solution and vendor you choose is vitally important.

A great resource for any organization looking to purchase a conversational AI solution that will make personalized engagements more personal for customers or employees in 2023 is the new Chatbot Buyer’s Guide. It includes some important insights into the types of personalization you may want to consider implementing and what features and functionality the conversational AI platform you select needs to deliver that personalization successfully.

And if you’re interested in learning about how V-Person delivers a more personal and individualized experience, arrange your own customized demo with an expert member of our team.

Will 2023 end up being the year that personalization gets more personal? Only time will tell. But with the technology available today and the clear business benefits individualized experiences deliver, it would be foolish not to make this a strategic initiative this year.

Generation AI: Growing up side-by-side with our silicon-based contemporaries

By Olaf Voß, Lead Application Designer

I was born in 1966. That means I’m usually sorted into Generation X. But these days, looking back at the past 57 years, I think we should really rename it Generation AI. It is my generation that has witnessed AI from its infancy to the breakthroughs we’ve seen in the past few years. And with a bit of luck most of us will witness how AI reshapes our societies – for good or bad – over the next 20 years.

So let me recount my encounters with AI through the decades.

There’s no way around it: I have to start with ELIZA, which Joseph Weizenbaum developed in the year I was born. I was too young to be aware of this when it was new, of course, and even less aware that chatbots would one day play a major part in my professional life, but merely 16 years later I had access to Commodore computers at our computer club in secondary school. The ‘large’ one had 16 KB of RAM AND a floppy drive! And we had an ELIZA clone running on them. But I admit I didn’t spend much time with her; I was far too busy freeing Princess Leia in an early text adventure or writing my own very simple games.

I had my first chess computer at some time around 1980. It could analyze to a depth of 5-6 moves. I was an OK, but not great, player, and I won against it when giving it up to 30 seconds thinking time and lost at above 2 minutes. In the early 90s I became a club player, and soon after I didn’t have the slightest chance against the chess programs of that era even with 10 seconds thinking time. No need to be embarrassed about that I guess, since Garry Kasparov lost against Deep Blue in 1997.

Around 2005 I started playing Go. I was a convert from chess, and by that time I was used to having great chess programs available for training and analysis. Go has a much greater branching factor than chess and is much less suitable for static board-state evaluation. With the available technologies of those days, programs could play Go at a mediocre level at best. Well, I still lost to them, but they were not strong enough for me to rely on their judgement. At that time most Go players, including myself (apart from thinking that their game is much better than chess), thought it would take at least 50 years until computers could crack Go. It took about 10.

How did they succeed so quickly? Deep neural nets. I read about those first as a university student in the 80s and was thrilled. I played with them a bit on the first computer I owned, an Atari 520ST. I quickly thought about applying them to chess. My ideas were not very far from what is done in that field today, but of course I hadn’t heard about reinforcement learning at that time. I very much like to believe my ideas were extremely clever and would have worked. Alas, we’ll never find out, because it was clear very quickly that the hardware (especially mine!) at that time was totally inadequate for tackling this problem.

With what I’ve given away about myself so far, nobody will be surprised to hear that I ended up becoming a software developer. Around the turn of the millennium I started to work on chatbots. We must have been one of the first few companies worldwide to tackle this commercially. At that time I was reluctant to say that I was working in AI. We were using pattern matching, and even a pretty simple form of that. Of course pattern matching IS an AI technique, but I was aware that with our focus on building chatbots that were useful for our customers in their restricted scope, and on doing that efficiently, what we did would have bored any AI researcher. I wasn’t ashamed of what we were doing – quite the contrary: I was proud of what we could achieve with our pragmatism. I just wanted to avoid the pointless discussion of whether what we did was ‘real’ or ‘interesting’ AI.

Fast forward another 15 years or so. Word embeddings came up and made it possible to tackle natural language problems with artificial neural nets. So I welcomed those back into my life. Only now the hardware was somewhat better, plus I got paid for playing with them. Heavens!

Then comes 2020 and GPT-3. That was mind-blowing. I’ve heard people characterising deep neural nets as ‘glorified parameter fitting’. And sure, parameter fitting is all there is to it. But these parameters, each by itself just a dumb number, let something pretty astonishing emerge. I am not an expert on these topics and I am not even sure how much sense it makes to compare human and artificial intelligence. But sometimes I feel provocative and want to ask how we can be sure that our own intelligence is more than just ‘glorified synaptic strength fitting’. Once again I think the discussion about ‘real’ intelligence is far less important than considering what can be done. And, since there’s so much more that can be done today, how that will change our world.

Apart from being fascinating and super-relevant to the field I’m working in, GPT-3 is also a lot of fun of course. I remember a conversation I had with it in the early days, before OpenAI put the brakes on it to avoid harmful responses. (Which I applaud, in spite of it spoiling some fun.) I asked it  – actually before the start of the war in Ukraine – for a couple of suggestions about ‘how to achieve world peace’. One of the suggestions was: ‘Kill all humans.’ Well yes, job done … I’m still glad you are not yet in charge, you know!

I want to mention two more recent developments, even though they relate to me personally in a tangential way at best. Being a physicist by education I follow scientific developments closely, which brings me to AlphaFold 2. When I was 4 years old, the first successful DNA sequencing attempts were made. In 2020 AlphaFold 2 predicted and published 3D structures of thousands and thousands of proteins based on DNA sequences alone. Another loop closed during my lifetime. I make the prediction that in 2035 more than 50% of all newly approved drugs will have been developed using the results of AlphaFold or its successors at some stage in the process.

The second one is CICERO. As an avid board game player I reluctantly admit that I have never played Diplomacy, though I did play similar games like Risk or Civilisation. Diplomacy is a conflict simulation game in a WW1 scenario. It involves tactical moves on the board and lots of diplomacy around it – pacts, betrayals, revenge. CICERO can play this game on par with human experts. Apart from making clever moves on the board – easy-peasy these days for AI – it also has to negotiate with other players in natural language. So it needs to bring together strategic, natural language and social skills. Even though this is a model with a niche application scope, I think it is at least as impressive as GPT-3, if not more so.

We are living in exciting times. And I think it’s important to understand that we are seeing the beginning of something, not the end. What will be possible in 20 years? Many things will happen, not all of them good. I’m not much of an expert in AI risks and, besides, discussing them here in detail would go far beyond the scope of this blog post. Still, I’m asking myself how we as a society will cope with these – at the moment still largely unknown – changes.

My role at Creative Virtual involves looking at all the new technologies that pop up and evaluating if and how we can use them. So I have a bit of a front-row seat in watching this unfold. I encourage you to check out our ChatGPT, GPT-3, and Your Conversational AI Solution blog post for a closer look at how we see these recent developments fitting with the work Creative Virtual does in the customer service and employee support space.

I think it is of the utmost importance that as many people as possible have a basic understanding about what’s going on. An ignorant society will not be able to react. I am trying to play my small part by sharing my knowledge with as many people as possible. Just recently, when an old friend of mine called, my wife burst out: ‘Great, now you can talk his ears off!’