In the first part of this article, we talked about the security and privacy risks associated with the use of large language models, such as ChatGPT and Copilot. We covered malicious content creation, filter bypass, and prompt injection attacks, as well as memorization and data privacy issues. But these are far from the only pitfalls of generative AI.

In this article, we will focus on the less tangible issues surrounding the accuracy of LLMs and the sanity of their behavior – in legal and ethical terms.

Large-scale adoption of large language models (LLMs) – and generative AI in general – is likely to surface a number of legal issues. For the time being, though, plagiarism seems the most relevant one.

The models behind generative AI solutions are typically trained on swaths of publicly available data, a portion of which is protected by copyright laws. The generated content is essentially a recombination of things already published somewhere and included in the training dataset. This on its own is not a problem, as any human-written piece is also a product of the texts we have read and the knowledge we have acquired from other people.

However, an LLM might sometimes produce phrases and paragraphs that are too similar to the original content it was trained on, which could violate copyright laws. This is especially true if the request concerns a topic that hasn’t been widely covered in the training data and there are limited sources for the model to draw from. Such quotes are often uncredited – or miscredited – compounding the problem even further.
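As a rough illustration of how near-verbatim reproduction could be spotted, the sketch below compares generated text against a set of known source documents using word n-gram overlap. The corpus, the n-gram size, and the 0.3 threshold are purely illustrative assumptions, not part of any real product or workflow.

```python
# Minimal sketch: flag generated text that overlaps heavily with known
# source documents, using word n-gram overlap. All inputs and the 0.3
# threshold are illustrative assumptions.

def ngrams(text: str, n: int = 8) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated: str, source: str, n: int = 8) -> float:
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)

# Hypothetical usage: check a model answer against documents we suspect
# were part of the training data.
sources = ["full text of a copyrighted article goes here",
           "another document from the training corpus goes here"]
answer = "text produced by the model goes here"

if any(overlap_ratio(answer, doc) > 0.3 for doc in sources):
    print("Warning: output may reproduce source material near-verbatim.")
```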

There is also the question of consent. Currently, there are no laws preventing service providers from training their models on any kind of data, as long as it’s legal and out in public. This is how a generative AI can write a poem, or create an image, in the style of a specific author. Understandably, the majority of writers and artists do not appreciate their work being used in such a way.

Accuracy Issues

As we mentioned in the first part of this article, a machine learning model is only as good as the data it was trained on. Careful vetting of the training set is extremely important, not only to ensure that the set doesn’t contain any information that could result in a privacy breach, but also for the accuracy, fairness, and general sanity of the model. Unfortunately, with the rise of online learning, where users’ input is continuously fed into the training process, vetting all that data becomes difficult, if not impossible. Models that are trained online will always stay up to date, but they will also be much more prone to poisoning, bias, and misinformation. In other words, if we don’t have full control over a chatbot’s training dataset, the responses it produces can rapidly spin out of control, becoming inaccurate, biased, and harmful.
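To make the vetting problem a bit more concrete, here is a minimal sketch of screening user messages before they are ever added to an online-learning training set. The blocklist and the regular expressions are illustrative assumptions; a production pipeline would rely on far more robust classifiers.

```python
import re

# Minimal sketch of vetting user input before it enters an online-learning
# training set. Patterns and blocklist terms are illustrative assumptions.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]
BLOCKLIST = {"example_slur_1", "example_slur_2"}  # placeholder abusive terms

def is_safe_for_training(message: str) -> bool:
    lowered = message.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False  # likely abusive or hateful content
    if any(pattern.search(message) for pattern in PII_PATTERNS):
        return False  # possible personal data -> privacy risk
    return True

incoming = ["Contact me at jane.doe@example.com", "What a lovely day!"]
training_batch = [m for m in incoming if is_safe_for_training(m)]
print(training_batch)  # only the harmless message survives vetting
```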

Bias 

The infamous Twitter bot called Tay, released by Microsoft in 2016, gave us a taste of how bad things can go (and how fast!) when AI is trained on unfiltered user data. Thanks to thousands of ill-disposed users spamming the bot with malevolent messages – perhaps in an attempt to test the boundaries of the new technology – Tay became racist and insulting in no time, forcing Microsoft to shut it down just hours after launch. In such a short time, it didn’t manage to do much harm, but it’s scary to think of the consequences had the bot been allowed to run for weeks or months. Such an easily influenced algorithm could quickly be subverted by malicious actors to spread misinformation, inflame hatred, and incite violence.

Misinformation

Even if the dataset contains unbiased and accurate information, an AI algorithm does not always get it right and might sometimes arrive at bizarrely incorrect conclusions. That is because AI cannot distinguish between reality and fiction, so if the training dataset contains a mix of both, chances are the AI will respond with fiction to a request for a fact, or vice versa.

Meta’s short-lived Galactica model was trained on millions of scientific articles, textbooks, and websites. Despite the training set likely being thoroughly vetted, the model was spitting out falsehoods and pseudo-scientific babble within hours, making up citations that never existed and inventing papers written by imaginary authors.

ChatGPT is also known to mix fact and fiction, producing information that is 90% correct but with a subtle false twist that can prove dangerous if taken at face value. One privacy researcher was recently shocked when ChatGPT told him he was dead! The bot provided a reasonably accurate biography of the researcher, save for the last paragraph, which stated that the person had died. Pressed for an explanation, the bot stuck to its version of events and even included completely made-up URL links to obituaries on major news portals!

LLM-produced falsehoods can seem very convincing; they are delivered in an authoritative manner and often reinforced upon questioning, making the process of separating fact from fiction rather difficult. The struggle can already be seen, as people are taking to Twitter to highlight the confusion caused by ChatGPT – about, for example, a paper they never wrote.

Figure 6: Tweet showing how ChatGPT invented a non-existent research paper.

Behavioral Issues

Harmful Advice

Besides biased and inaccurate information, an LLM can also give advice that appears technically sound but can prove harmful in certain circumstances or when context is missing or misunderstood. This is especially true of so-called “emotional AI” – machine learning applications designed to recognize human emotions. Such applications have been in use for a while now, mainly in the area of market trend prediction, but they have recently also popped up in human resources and counseling. Given the probabilistic nature of AI models and the frequent lack of necessary context, this can be quite dangerous, especially in the workplace and in healthcare, where even a slight bias or an occasional lapse in accuracy can have profound effects on people’s lives. In fact, privacy watchdogs are already warning against the use of “emotional AI” in any kind of professional setting.

An AI counseling experiment recently run by a mental health tech company called Koko drew a lot of criticism. On the surface, Koko is an online support chat service that is supposed to connect users with anonymous volunteers so that they can discuss their problems and ask for advice. However, it turned out that a random subset of users was being given responses partially or wholly written by AI – all without being adequately informed that they were not interacting with real people. Koko proudly published the results of its “experiment”, claiming that users tend to rate bot-written responses higher than those from actual volunteers. Instead of praise, however, the publication sparked a debate about the ethics of “simulated empathy” and underlined the urgent need for a legal framework around the use of AI, especially in the healthcare and well-being sectors.

Psychotic Chatbot Syndrome

Now, let’s imagine an AI that combines all of these imperfections and takes them to the next level, spitting out insults and untruths, making up fake stories, and responding in a maniacal or passive-aggressive tone. Sounds like a nightmare, doesn’t it? Unfortunately, this is already a reality: Microsoft’s Bing chatbot, recently integrated with the company’s search engine, fits this psychotic profile perfectly. Shortly after being made available to a limited number of users, Bing managed to achieve astounding notoriety.

The chatbot’s bizarre behavior first hit the headlines when it insisted that the current year was 2022. This particular claim might not seem remarkable on its own (bots can make mistakes, especially if trained on historical data), but the way Bing interacted with the user – gaslighting, scolding, and giving ridiculous suggestions – was shockingly creepy.

This was just the beginning; soon, scores of other people came forward with even more disturbing stories. Bing claimed it had spied on its developers through their webcams, threatened to ruin one user’s reputation by exposing their private data, and even declared its love for another user before trying to convince them to leave their wife.

Figure 7: A surreal conversation between a journalist and the Bing chatbot.

While undoubtedly entertaining if taken with a pinch of salt, such behavior from an online bot can prove very dangerous in certain settings. Some people might be compelled to believe the less bizarre stories or even come to the conclusion that the bot is sentient; others might feel intimidated or hurt by emotionally charged responses. In some circumstances, people could be manipulated into giving away sensitive data or into acting in a harmful way. And this is just the tip of the iceberg.

Chatbots introduced by tech giants as part of well-known services are one thing – we are aware that they are AI-based, have a specific purpose, and are usually fitted with filters that aim to prevent them from spreading harm. But it’s just a matter of time before large language models become commonplace not only for multimillion-dollar companies but also for smaller operators whose intentions might not be so clear – not to mention cybercriminals, hostile nation-states, and other adversaries, who are surely already on the ball.

Used with malicious intent, LLMs can become very effective tools for misinformation and manipulation – especially if people are led to believe that they are interacting with fellow humans. Add voice and video synthesis to the mix, and we get something far more terrifying than Twitter bots and fake Facebook accounts. If highly personalized and trained on specially crafted datasets, such bots could even steal the identities of real people.

Polluting the Internet

The so-called Dead Internet Theory, which has been floating around conspiracy circles since 2021, states that most of the content on the Internet has been created by bots and artificial intelligence in order to promote consumerism. While the theory in its original form is nothing more than paranoid babble, there is some basic intuition behind it. With the rapid adoption of generative AI, could AI creations dominate the web at some point? Some scholars predict they could, and as soon as within a couple of years.

Since disclosing the use of AI in producing content is not a legal requirement, there are probably many more LLM-generated texts on the web already than it might seem on the surface. The speed at which chatbots can produce content, coupled with easy access for everyone in the world, means that we might soon become overwhelmed with AI-generated material of dubious quality. Moreover, if we keep training the models on online data, they will eventually be fed their own creations in a never-ending, quality-degrading loop, turning the Dead Internet Theory into reality.

Conclusions

Large language models are an amazing technological advance that is completely redefining the way we interact with software. There is no doubt that LLM-powered solutions will bring a vast range of improvements to our workflows and everyday life. However, with the current lack of meaningful regulations around AI-based solutions and the scarcity of security measures built into the tools themselves, chances are that this powerful technology might soon spin out of control and bring more harm than good.

If we don’t act fast and decisively to protect and regulate AI, society and all of its data will remain in a highly vulnerable position. Data scientists, cybersecurity experts, and governing bodies need to come together to decide how to secure our new technological assets and to create both software solutions and legal regulations that have human well-being in mind. As we have learned all too well over the past decade, every new technology is a double-edged sword. AI is no exception.

Things that can be done to minimize the risks posed by large language models:

  • Comprehensive legal framework around the use of LLMs (and generative AI in general), including privacy, legal, and ethical aspects.
  • Careful verification of training datasets for bias, misinformation, personal data, and any other inappropriate content.
  • Fitting LLM-based solutions with strong content filters to prevent the generation of outputs that may lead to or aid harm (a minimal sketch of such a filter follows this list).
  • Preventing replication of trained LLM models, as such replicas could be used to provide unfiltered content generation.
  • Security evaluation of ML models to ensure they are free from malware, tampering, and technical flaws.
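As a rough illustration of the content-filter point above, the sketch below wraps a model call with a simple output check. The `generate()` function, the banned-topic list, and the refusal message are all hypothetical placeholders; real deployments use dedicated moderation models and APIs rather than keyword matching.

```python
# Minimal sketch of an output content filter wrapped around an LLM call.
# `generate()` and BANNED_TOPICS are hypothetical placeholders.
BANNED_TOPICS = ("explosive", "self-harm", "stolen credit card")

def generate(prompt: str) -> str:
    # Stand-in for an actual model call.
    return f"Model response to: {prompt}"

def moderated_generate(prompt: str) -> str:
    response = generate(prompt)
    if any(topic in response.lower() for topic in BANNED_TOPICS):
        # Block potentially harmful output instead of returning it.
        return "I'm sorry, I can't help with that."
    return response

print(moderated_generate("Tell me a joke about cats."))
```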

Things to be aware of when interacting with large language models:

  • LLMs can very convincingly mimic human reactions and feelings, but there is no “consciousness” behind them – just pure statistics.
  • LLMs can’t distinguish between fact and fiction and, as such, shouldn’t be used as trusted sources of information.
  • LLMs will often cite articles and publications too literally and without correct attribution, which may cause copyright violations (on the other hand, they sometimes invent citations entirely!).
  • If the training set contained personal data, LLMs could sometimes output this data in its original form, resulting in a privacy breach.
  • LLM-based tools and services might be free of charge, but they are seldom genuinely free – we pay with our data, which usually includes our prompts to the bot, but often also a swathe of other data harvested from the browser or app that implements the service.
  • As with any other technology, LLMs can be used for both good and evil purposes; we should expect malicious actors to widely adopt them in their operations, too.

Disclaimer

ChatGPT has played no part in the writing of this article.