Pictet Group

The new chattering class: how ChatGPT will change communication

Large language models like ChatGPT are a new technology that will change the way we communicate: mostly for the better.

Not so long ago, artificial intelligence language models produced little more than baby talk. But seemingly overnight, their latest iterations have become glibber than cocky undergraduates spouting revolution.

“We’re in an age of serious disruption,” argues Lane Greene, The Economist’s language guru, co-author of its latest linguistic style guide and author of its Johnson column. Large language models (LLMs) – the best known of which is ChatGPT – have already started to change the way we do things. 

Although ChatGPT is effective as a search engine, its most dramatic capability is generating text that sounds like it was written by a better-than-average human writer. That extends to translation, emails and letters, and even longer-form articles. It allows ChatGPT to supplement or even supplant humans in translating routine material, or in assembling well-known facts into a clear narrative. It can produce boilerplate material for lawyers and process and synthesise reams of their documents. Other versions of generative AI can create photographs and extend paintings. The more inputs they can be trained on, the more effective they are likely to be.

For many white-collar workers this might seem a threat to their livelihoods – but it needn’t be for all of them, argues Greene. While the likes of ChatGPT will get ever better at performing mundane tasks, they will also leave people with more time to do challenging and interesting jobs, including overseeing LLMs’ output.

ChatGPT could even prove to be the salvation of less effective and less productive workers. Writing in the Financial Times, columnist Tim Harford cites studies showing that productivity improvements are concentrated among the least effective workers – Homer Simpsons, as he calls them – while the technology adds little to the output of the most competent ones.[1]

Discovering limits...

For all their revolutionary capabilities, LLMs have a number of problems. Like the fact that they make stuff up. Their creators aren’t quite sure why, says Greene, but LLMs routinely include information that isn’t true. It usually sounds convincing, which makes it even more dangerous.

One New York lawyer recently discovered the dangers of relying on ChatGPT to his cost. He used the LLM to draft a legal brief in a personal injury case, only to be censured by the judge for citing six non-existent court decisions.[2]

Teachers have worried about how to respond to pupils shirking their studies by using ChatGPT to produce essays. One way is to focus on ChatGPT’s weaknesses. Some teachers have started asking their students to prompt ChatGPT with essay topics and then analyse the output, searching for false information.

ChatGPT “hoovers up a huge quantity of material on the internet” and is good at regurgitating what it’s fed, but it does less well on knowledge we all take for granted or on questions that require logical thinking. So, for instance, ChatGPT doesn’t make the assumption that someone who was alive an hour ago and who will be alive in two hours’ time will also be alive in the time between, Greene notes. There isn’t much material on the web pointing this out, because for humans it’s not something that needs to be said.

Another problem is that ChatGPT adopts the biases of the material it’s trained on, Greene says. The broader the material, the harder it is to offset the bias. It’s possible to try to remove or mitigate these biases by judiciously selecting the training content, but that then limits the usefulness of the model. Another approach is to set explicit rules overlaying the training data, such as “do not answer questions that invite you to be racist”, but such rules are unlikely ever to be perfect.

At the same time, the breadth of material ChatGPT draws on means it can provide illegal or dangerous information, for example how to make drugs like crystal meth, or napalm. ChatGPT’s creators have introduced safeguards, but users discover new workarounds every day, Greene notes. Those back doors leave LLMs’ creators vulnerable to being sued or criminally prosecuted.

...but progress won't stop

Greene grades ChatGPT on three dimensions. “For ‘retrieve me a fact’, it gets an A-plus; for imitation, it gets a passing grade. Combining two different things, it’s barely passing. It suffers from big flaws,” he says.

For all ChatGPT’s limitations, it has progressed rapidly and is likely to progress further. But given the structure of these models – not least their reliance on training data – they will struggle to produce true originality.

“Finding out things that nobody else knows, or very few people know” will be left for humans, says Greene. That means feet-on-the-ground journalism is unlikely to be replaced, even if ‘churnalism’ – the rehashing of existing material – is. Originality and novelty will be a major differentiator. A black box won’t be generating deep, original thinking. There will always be room for memorable, simple, clear prose, enlivened by fresh imagery and metaphors.

ChatGPT can give a starting point to written work, but human input is needed to make the output sing. Meanwhile, translators will be post-editors, using LLMs to generate the first draft, but then reviewing it to ensure that nuance isn’t lost and to avoid subtle linguistic pitfalls.

People are hungry for authenticity. ChatGPT might well change what we want from the written form, but it won’t get rid of writers. “Just as photography didn’t kill off painting, it just changed what was painted and how,” Greene says.

[1] https://www.ft.com/content/74c63f77-f543-4be0-9cd9-2f5f40ba6f17
[2] https://www.reuters.com/legal/transactional/lawyer-used-chatgpt-cite-bogus-cases-what-are-ethics-2023-05-30/