The attention AI has received over the past couple of months has been quite astounding, especially considering that it has been part of our everyday lives for so long already.  Face ID to unlock our mobile phones – AI.  Social media – AI.  Voice assistants like Siri and Alexa – AI.  Route mapping – AI.  Its appeal and almost universal applicability are undeniable, with global spending on AI-centric systems expected to reach $154 billion this year, according to IDC.

Used extensively across every industry, delivering huge benefits and rewards, AI has been transforming businesses and the workplace for decades. In healthcare it is used in radiology results analysis and robot-assisted surgery; in banking it is at the core of fraud detection; and in retail and e-commerce it is behind personalisation, integral to effective inventory management, and powers customer service chatbots.

In education, AI is enabling personalised learning, and in transportation it has made driverless vehicles possible.  Businesses of all sizes are using it to deliver better customer and employee experiences with more human-like chat and voice bots, to strengthen cybersecurity, and to build more effective workflows and operations.  And these are just a few of its myriad applications.

At Creative Virtual we have been at the forefront of helping businesses build better customer and employee experiences with innovative conversational AI solutions for nearly 20 years. Since our inception we have always ensured that we practise ethical technology. We see this as a fundamental responsibility that we take seriously, and it is something that our customers expect.

There has always been adoption hesitancy when it comes to new technologies.  Issues such as job erosion, privacy, surveillance, behavioural manipulation, fake news, a changing labour force and bias are just some of the topics that come up in impact analyses of advanced technologies.

Whilst practising ethical tech comes naturally to Creative Virtual, we are not blind to the fact that there are bad actors exploiting technology, or to the many questions about the wisdom of concentrating power among a small elite of tech giants with no regulatory oversight.

It is critical that the business models of companies in a position of power are backed by robust systems that take account of legislative and ethical responsibilities relating to privacy, security, discrimination, misinformation and the other issues of today.

Technology has always moved faster than legislation and regulation; we have seen this most recently with Uber, for example.  This is also the case with AI, with questions being raised about whether self-regulation is strong enough to safeguard the rights of individuals, protect, promote and support cultural diversity, halt the spread of misinformation and false information, and ensure adherence to data and privacy legislation.

The continual questioning and interrogation of the social and economic impacts of new technology must happen concurrently with tech advancement and progress, with one not stopping the other.

It seems, from the media coverage at least, that ChatGPT took the world by surprise when it burst onto the scene back in November 2022.  Almost overnight, the world became obsessively captivated by AI.

The focus by mainstream media on AI technology might be new, triggered by ChatGPT, but the technology itself has been around and used by businesses for quite a long time – admittedly not in forms as powerful or impressive as today’s, but technology is always evolving and improving.

Nevertheless, it is being seen as ‘new’ in terms of massification and consumerisation, which has led to a lot of AI hype filling print, digital and broadcast media platforms.  It has been simultaneously sensationalised, in some cases demonised, and satirised.

Most recently, it has also been politicised. We have all read that countries such as Russia, China, North Korea, Cuba, Syria, Iran and Italy have banned ChatGPT.  It should be noted that Italy is a rather different case to the other countries. Italy’s data protection authority banned the application over privacy and data concerns, stating that “the ChatGPT robot is not respecting the legislation on personal data and does not have a system to verify the age of minor users.”

Russia, China and the other countries on that list have banned ChatGPT because they have strict rules against the use of foreign websites and applications, broad restrictions on internet use, or strict censorship regulations.

ChatGPT is also on the ‘watch-list’ of several countries. France, Germany, Ireland and the UK have indicated that they will be monitoring the use of the application closely for “non-compliance with data privacy laws”, and they have also raised concerns about algorithmic bias and discrimination.

It is not only governments that are questioning AI: a moratorium on the development of AI has been proposed by tech celebrities Elon Musk and Steve Wozniak.  They have put their signatures to an open letter, along with prominent and respected AI researchers such as Yoshua Bengio, Stuart Russell and Gary Marcus, asking for “… all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.  This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Of course, the concerns voiced by governments, individuals and the industry are valid.  They are centre stage in the ongoing debate at the intersection of ethics, morality, technology and society. This discussion should continue. However, proposing a pause or ban on technological advancement is not a sensible or necessary response.

As previously mentioned, Creative Virtual’s approach when building and deploying conversational AI solutions is from a position of responsibility, deploying tech in an ethical and moral way. This includes understanding the intended outcomes of the conversational AI solutions we build and deploy, and debating potential unintended outcomes so we can mitigate them.

For example, it is well known that GPT-3, GPT-3.5 and even the latest model, GPT-4, are not fully reliable (humans are not reliable all the time either!) and there is no guarantee of 100% accuracy.

Yes, GPT-4 is more accurate than previous versions, but it does still “hallucinate” and can give inaccurate information and harmful advice. Businesses must be able to mitigate the risks this poses to avoid financial and reputational damage, and be in control of the information their company is sharing.

To ensure organisations are in control, Creative Virtual supports large language models (LLMs) including the latest versions of GPT, but we remain uncompromising on not transferring total authority to machines.

Our conversational AI solutions provide a signature blend of AI and rules-based natural language processing (NLP), with the AI component compatible with workflow functionality to allow for customisable configuration options. It also means that our systems improve continuously in a reliable way that meets the needs of an organisation.

At the same time, natural language rules can still be used to enable control over responses in instances when AI answers are insufficient. Our blended approach ensures accuracy, enables the resolution of content clashes, and delivers very precise replies when needed.
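
To illustrate the general shape of a blended approach like this, here is a minimal sketch in Python. It is purely illustrative and not Creative Virtual’s actual implementation: the rule table, the match_rule() helper and the call_llm() placeholder are all hypothetical, standing in for whatever curated content, rules engine and LLM integration an organisation actually uses. Curated rules answer first; the LLM only fills the gaps, and its output is flagged for review.

```python
# Minimal sketch of a blended rules-plus-LLM responder.
# The rule table, match_rule() logic and call_llm() client are illustrative
# placeholders, not any vendor's actual implementation.

import re
from typing import Optional

# Curated, human-approved answers keyed by simple natural language rules.
CURATED_RULES = [
    (re.compile(r"\bopening hours\b", re.I),
     "Our branches are open 9am-5pm, Monday to Friday."),
    (re.compile(r"\breset (my )?password\b", re.I),
     "You can reset your password from the Account > Security page."),
]

def match_rule(user_message: str) -> Optional[str]:
    """Return a curated answer if a natural language rule matches."""
    for pattern, answer in CURATED_RULES:
        if pattern.search(user_message):
            return answer
    return None

def call_llm(user_message: str) -> str:
    """Placeholder for a call to a large language model (e.g. GPT-3.5/4)."""
    raise NotImplementedError("Wire up your LLM provider here.")

def respond(user_message: str) -> dict:
    """Rules take precedence; the LLM is a fallback whose output is flagged
    so human judgement or a downstream check can be applied before use."""
    curated = match_rule(user_message)
    if curated is not None:
        return {"answer": curated, "source": "curated_rule", "needs_review": False}
    return {"answer": call_llm(user_message), "source": "llm", "needs_review": True}
```

The point of the sketch is the ordering: human-approved content always wins, and anything generated by the model is marked so that human judgement can be applied before it reaches a customer.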

This level of enterprise-grade functionality differentiates Creative Virtual’s conversational AI platform from all others on the market today. Providing this high degree of control over the AI is critical for businesses. Organisations can be confident in the accuracy of what they communicate to customers and employees because we enable human judgement to be applied to the information created by AI.

Recent commentary about ChatGPT has highlighted examples of its imperfections as well as potential immediate and longer-term social implications. At Creative Virtual we make it possible for our customers to mitigate these risks whilst still enjoying the business benefits of large language models, specifically GPT-3.5 and GPT-4 today.

Using our V-Person technology, real business concerns regarding the security, data, privacy and accuracy aspects of information sharing are mitigated, and organisations retain full control over AI output.

We are already working with customers to introduce LLMs as part of their conversational AI solutions to deliver better customer, employee and contact centre agent experiences.  After identifying specific use cases, we are piloting a number of GPT capabilities that are changing the playing field, including vector matching, summarisation, text generation, translation, clustering and analysis, Q&A preparation, and other generative AI applications.
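
To make one of those capabilities more concrete, the sketch below shows the basic idea behind vector matching: curated questions are embedded once, and an incoming user query is matched to the closest entry by cosine similarity. This is an illustrative example built on the open-source sentence-transformers library with a made-up knowledgebase; it is not a description of how V-Person implements vector matching.

```python
# Illustrative vector matching: retrieve the closest curated answer by
# embedding similarity. Uses the open-source sentence-transformers library;
# the knowledgebase content is invented for the example.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

knowledge_base = [
    ("How do I reset my password?", "Use the 'Forgot password' link on the sign-in page."),
    ("What are your opening hours?", "We are open 9am-5pm, Monday to Friday."),
    ("How do I close my account?", "Contact support via the secure messaging centre."),
]

# Pre-compute embeddings for the curated questions.
question_vectors = model.encode([q for q, _ in knowledge_base], convert_to_tensor=True)

def best_match(user_query: str, threshold: float = 0.5):
    """Return the best-matching curated answer, or None below the threshold."""
    query_vector = model.encode(user_query, convert_to_tensor=True)
    scores = util.cos_sim(query_vector, question_vectors)[0]
    best_idx = int(scores.argmax())
    if float(scores[best_idx]) < threshold:
        return None  # no confident match; escalate or fall back
    return knowledge_base[best_idx][1]

print(best_match("I forgot my login password"))
```

A similarity threshold like the one above is what lets an organisation decide when no curated answer is good enough and the query should be escalated or handled another way.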

Implementing LLMs requires experience and expertise, especially given the rate at which AI is developing.  In conversational AI, knowledge management is critical. Creative Virtual’s orchestration platform – V-Portal – supports LLMs, enabling businesses to maximise the benefits of the latest technology safely, securely and with confidence.

Our V-Portal platform combines knowledge management with workflow management and user management, supports multiple versions of answers for a single theme which gives granular control over the responses given, and allows for optimisation for individual channels.  It also has the capability to manage multi-lingual solutions within a single knowledgebase.

Organisations using our V-Portal platform have options for presenting users with a specific response based on a variety of criteria, including channel, authenticated user profile or selected language.  The platform also supports the use of rich media such as diagrams, images and videos in addition to text and hyperlinks within answers.
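
As a simplified illustration of how answer versioning by channel, language and user profile can work in principle – the field names and selection logic below are hypothetical, not V-Portal’s actual schema – a single theme can hold several answer variants and a resolver can pick the most specific one for the current context:

```python
# Simplified illustration of per-theme answer variants selected by
# channel, language and user profile. Field names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AnswerVariant:
    text: str
    channel: Optional[str] = None      # e.g. "web", "whatsapp", "ivr"
    language: Optional[str] = None     # e.g. "en", "de"
    authenticated_only: bool = False

@dataclass
class Theme:
    name: str
    variants: list = field(default_factory=list)

    def resolve(self, channel: str, language: str, authenticated: bool) -> Optional[str]:
        """Pick the most specific variant matching the request context."""
        def specificity(v: AnswerVariant) -> int:
            return sum([v.channel == channel, v.language == language, v.authenticated_only])

        candidates = [
            v for v in self.variants
            if v.channel in (None, channel)
            and v.language in (None, language)
            and (not v.authenticated_only or authenticated)
        ]
        if not candidates:
            return None
        return max(candidates, key=specificity).text

balance_theme = Theme("account_balance", variants=[
    AnswerVariant("Log in to see your balance.", language="en"),
    AnswerVariant("Your balance is shown on your dashboard.", language="en", authenticated_only=True),
    AnswerVariant("Melden Sie sich an, um Ihren Kontostand zu sehen.", language="de"),
])

print(balance_theme.resolve(channel="web", language="en", authenticated=True))
```

The “most specific variant wins” rule is one simple way to express the kind of granular control described above; a production platform would of course layer richer workflow, approval and user management logic on top.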

The flexible architecture of V-Portal enables seamless integration of our conversational AI solutions into existing processes and technology infrastructure, ensuring business continuity. And as a cloud-based solution, upgrading to take advantage of the newest technologies and stay ahead of the competition could not be easier.

AI will continue to capture headlines for many years to come.  The good, the bad and the ugly will be debated.  How society, businesses and individuals choose to use AI will go a long way towards determining the positive impact it has.

As a business tool, conversational AI solutions powered by the latest in AI technology advances can supercharge employee, customer and contact centre agent experiences, whilst also delivering cost and efficiency savings, and improving productivity.

It’s all about having the right conversational AI partner who understands the technology, the business challenges and the responsibilities, and can build and deploy solutions that meet the real needs of business. Contact us to find out more about how LLMs can help you deliver better employee, customer and contact centre agent experiences.