Depending on who you talk to, artificial intelligence (AI) is either the devil or the second coming. But either way, there’s no denying that generative AI is having a moment.
AI tools have seen explosive growth over the past year or two, with powerful use cases already identified. However, such tools remain mired in controversy. They stand accused of taking jobs away from humans and of producing derivative content, or even plagiarism.
A recent Forbes article that delves into the potential and limitations of AI says the following:
The disconcerting part is when people proclaim that generative AI is sentient. It is not…There is even the gloomier view that generative AI is a sideshow that is distracting us from the real breakthroughs that we will need to achieve sentient AI.
So what’s the deal? Is generative AI “bad”?
Below we’ll dive into some of the opportunities and concerns of AI writing, focusing particularly on how they relate to copywriters and the future of the copywriting industry.
There can only be one…or maybe two
Following right in the footsteps of Coke and Pepsi, PlayStation and Xbox, Apple and Microsoft – we love a two-party system – the AI space is currently dominated by two key players: Google’s Bard and OpenAI’s ChatGPT.
And those players are already having a legitimate impact on bottom lines.
Microsoft’s stock price rose with the announcement that it would incorporate ChatGPT into products like Bing, while Google’s dipped briefly at the beginning of 2023 when Bard answered a question incorrectly during a demo.
In other words, generative AI is big business. Significant investment into this space, to the tune of billions of dollars, is one reason that it has been able to grow so quickly. Another is the competition between ChatGPT and Bard to capture dominance of the mass market.
There’s a legitimate concern that ethics, safeguarding, and even our understanding of AI haven’t yet had the chance to catch up. A petition to pause AI experiments has been signed by more than 25,000 people, including industry leaders like Elon Musk and Steve Wozniak.
How does generative AI actually work? (and why is it divisive?)
Natural Language Processing (NLP) plays a huge part in how generative AI, as we know it today, functions. In simple terms, NLP models help computers to “understand” unstructured data: the free-form language humans use to communicate. This stands in stark contrast to the structured data, e.g., spreadsheets or barcodes, that computers usually deal with.
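To make that contrast concrete, here is a minimal sketch in plain Python (the data is invented for illustration). A program can read structured data directly because every field has a fixed meaning; the same facts written as free-form text carry no schema at all, which is the gap NLP models exist to bridge.

```python
import csv
import io

# Structured data: each field has a fixed, machine-readable meaning.
structured = "name,quantity\nwidget,3\ngadget,7\n"
rows = list(csv.DictReader(io.StringIO(structured)))
total = sum(int(row["quantity"]) for row in rows)  # trivial to compute

# Unstructured data: the same facts as free-form prose. There is no
# schema to lean on; a naive program can only split it into tokens,
# while an NLP model must infer the meaning of each word in context.
unstructured = "We shipped three widgets and seven gadgets yesterday."
tokens = unstructured.lower().rstrip(".").split()

print(total)                 # 10
print("widgets" in tokens)   # True, but "three widgets" is just text
```

Note that the tokenized sentence still knows nothing about quantities; “three” is just another string, which is exactly why unstructured data is hard for computers.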
When people are underwhelmed by generative AI, it’s often because they’re expecting it to do too much. AI isn’t (yet…?) particularly good at building things from the ground up; the less specific you are with your prompts, the more likely you are to get poor output.
Ironically, given AI’s roots in science and tech, deploying it effectively is something of an art form. A Medium post by Josep Ferrer on overhauling poor-quality prompts racked up more than 4,000 likes in just a couple of weeks, and he’s hardly the only one talking about this.
We find people asking questions like “can AI write an essay?” and the answer is a resounding yes, with a big “but.” If all twenty students in a class provide a generative AI application with similar prompts, all twenty essays will be very similar.
It’s also worth noting that the content within those essays may not be entirely accurate.
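As a sketch of why prompt specificity matters, here is a hypothetical prompt-builder (the function name and fields are invented for illustration, not any real tool’s API). Twenty students who all submit the bare prompt will get interchangeable essays; layering in audience, tone, length, and required details is what differentiates the output.

```python
def build_prompt(topic, audience=None, tone=None, word_limit=None, must_include=()):
    """Assemble a prompt; every added constraint narrows the model's output."""
    parts = [f"Write about {topic}."]
    if audience:
        parts.append(f"Target audience: {audience}.")
    if tone:
        parts.append(f"Tone: {tone}.")
    if word_limit:
        parts.append(f"Keep it under {word_limit} words.")
    for item in must_include:
        parts.append(f"Be sure to mention: {item}.")
    return " ".join(parts)

# The vague prompt that twenty students might all submit:
vague = build_prompt("running shoes")

# A specific prompt that encodes what this writer actually knows:
specific = build_prompt(
    "running shoes",
    audience="first-time marathoners",
    tone="encouraging but practical",
    word_limit=150,
    must_include=("cushioning", "gait analysis"),
)
```

The point isn’t the code itself; it’s that the specific prompt carries domain knowledge the vague one doesn’t, so the generated text starts from a far richer brief.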
You can’t (always) trust generative AI
When describing ChatGPT, OpenAI says that it “interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
Yet in OpenAI’s own words, ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers.” They go on to list three reasons why this issue is difficult to fix:
1. During RL training, there’s currently no source of truth.
2. Training the model to be more cautious causes it to decline questions that it can answer correctly.
3. Supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
Generative AI is sometimes touted as a great equalizer, but that’s not quite true. Although expert knowledge isn’t required to use generative AI, those who don’t have it will need to do some rigorous fact-checking. That’s if they can generate relevant prompts in the first place.
This is why the work of so-called AI copywriters, with no knowledge of the principles of copywriting and advertising, is unlikely to measure up against trained writers…whether or not those trained writers are also using generative AI.
The future of generative AI
Right now, AI writing (and AI copywriting) is in its infancy. Generative AI still typically shies away from answering questions with opinionated responses. This means that, although it might be able to “convince” readers of benefits, in the marketing sense of the word, it lacks the conviction to truly persuade someone to change their point of view.
Writing for Startupy, Jason Shen makes the following case:
As long as human beings are the customers…there will be things that only human beings can offer. These include personal narrative—comedy specials, memoirs, group therapy—these rely on real people having real experiences that you can relate to.
The Forbes article referenced above echoes that sentiment:
No matter how good the AI becomes, humans will still crave and require other humans for the empathy and care they can provide. The human sense of humanity outweighs whatever the AI can attain.
There’s the rub: generative AI has gotten very good at responding to prompts in a dialogue, and it feels like that’s happened overnight. What it can’t do, at least not yet, is deal in emotions or experiences. It can’t, to put it another way, create something out of nothing.
Until that changes, AI won’t be taking over jobs (or, indeed, the world) to the extent that some people fear.