Generative AI as a writing tool
This post is a revised version of The role of generative AI in writing from January.
A little more than two years ago, ChatGPT was officially launched. With it, the general public finally got access to OpenAI's GPT-3.5 model, which had been in development for the preceding years. The technology underpinning it is GPT, a kind of neural network trained to predict the next token in a text and capable of reproducing the patterns in its training data with high accuracy. Basically, your iPhone's keyboard autocomplete on steroids. What followed was a cycle in which the Silicon Valley giants trained and refined many LLMs in rapid succession.
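To make the "autocomplete on steroids" point concrete, here's a minimal sketch using the freely available GPT-2 model through Hugging Face's transformers library (the prompt is just an arbitrary example):

```python
# "Autocomplete on steroids": ask a small GPT-style model which tokens
# it considers most likely to come next after a prompt.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The weather today is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the scores for the position right after the prompt into probabilities
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

That's all a GPT does at its core: compute a probability distribution over possible next tokens, pick one, and repeat.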
At the time of writing, GPT-4o is the latest and greatest model available, and ChatGPT is particularly popular among students, who use it for writing assignments in high school and college. As a tool. Or, more realistically, by letting it generate the entire essay for them.
This has many teachers rightfully concerned. Because, surprise, those assignments are not merely a means of bullying students. They are, in fact, vital practice for developing writing skills. Turns out that if a computer does all the hard work for you, homework isn't nearly as effective at building those skills. So really, by using ChatGPT or similar LLM-based tools to do your work for you, you're denying yourself the chance to improve your writing.
Schools have therefore decided that 'AI bad'. Since its introduction in 2022, I've repeatedly heard teachers call for a complete ban on the technology, calling it plagiarism and fraud. However, like everything, the matter is more nuanced than a binary 'good' or 'bad'. Take a hammer: it can be used to build things, or it can be used as a weapon of destruction. The effect of a technology is defined by its use.
About that: LLMs are mainly, and mistakenly, used to generate text. I've developed an eye for AI-generated prose eerily quickly; it's mediocre, dull, repetitive, and overly academic and formal. Moreover, despite numerous attempts to prevent it, models still hallucinate fairly often.
Hallucination is a behavior generative AI models exhibit when they're prompted about something their training data covers poorly or not at all. In those situations, the model cannot accurately predict the next word (remember, it's autocomplete on steroids), so it simply makes something up that sounds plausible. This means that nothing generated by these models can be trusted to be factually correct.
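You can demonstrate this with the same GPT-2 setup as before: feed it a premise that no training data could possibly cover, and it completes it with full confidence. (Again a sketch; the prompt is an arbitrary example.)

```python
# Why hallucination happens: the model scores fluency, not truth.
# It will invent a "fact" for a premise that cannot exist in any training data.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The capital of Atlantis is", max_new_tokens=10, do_sample=True)
print(result[0]["generated_text"])
# GPT-2 happily names a capital for a fictional place, because nothing
# in the architecture checks whether the continuation is true.
```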
Instead, generative AI is way more powerful as a tool to manipulate and transform language. As Linus wrote a while back:
Not "computers can complete text prompts, now what?" but "computers can understand language, now what?"
These models do not understand what they are writing about. They only select words that sound plausible given their training data and the prompt. They cannot reason, they cannot search for information, and they cannot solve complex problems. But they do know language. That's where their strength lies: not in generating okay-sounding high school essays, but in understanding and transforming language. Use that to your advantage.
Instead of prompting "generate an argumentative essay on the role of religion in public schools", we should prompt things like "rewrite this paragraph to be more academic" or "can you show me alternatives to this sentence?"
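As a sketch of what the second kind of prompt looks like in code, here it is sent through OpenAI's official Python library (assuming an API key is configured; the paragraph and the model name are placeholders you'd swap for your own):

```python
# Transform, don't generate: send an existing paragraph through a chat
# model and ask for a rewrite. Uses OpenAI's official Python library;
# assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

paragraph = (
    "Religion in public schools is something people argue about a lot. "
    "Some think it should be allowed, others really don't."
)

response = client.chat.completions.create(
    model="gpt-4o",  # the model mentioned earlier; any chat model works
    messages=[{
        "role": "user",
        "content": f"Rewrite this paragraph to be more academic:\n\n{paragraph}",
    }],
)
print(response.choices[0].message.content)
```

The ideas and the argument stay yours; the model only reworks the wording.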
I cannot stress this enough. Generative AI is not a source. It's a tool to manipulate language. It's not a tool to look up information, and it never will be. A neural network stores statistical patterns in its weights, not indexed documents, so it can, by definition, never return the sources from which it got certain information. Never.
However, even if we were to completely ban the use of generative AI at school, enforcing such a ban will prove harder and harder as models like GPT evolve. Detection software of any kind is flaky, prone to both false positives and false negatives. Besides, it probably won't stop students from using these models; it will just push them toward sneakier ways of doing so. The only possible scenario is a cat-and-mouse game in which the generators become ever better at evading detection and the detectors ever better at catching them.
With every new technology, there is a period during which society readjusts to its introduction; rules and customs may need to change, and things will need to settle into a new normal. We do not have a time machine; we cannot uninvent large autoregressive language models. The technology exists now: either fight it or embrace it.
Therefore, I propose we teach AI literacy in high school and college. Teach students about the risks and benefits of using generative models as a writing tool. Not a cheat code. Not a search engine. A writing tool.
Handing in work you did not write is plagiarism. You still have to do the writing yourself. However, using an LLM to check grammar, paraphrase sentences, reorder paragraphs, find the right words, or get unstuck creatively is not cheating. For some people, language is a huge barrier to getting their ideas down on paper. If a tool helps them do that more effectively, its use should be encouraged, not banned. In short: you do the thinking, and the computer helps you with the stylistics. It's not unlike the tools we already use, such as spellcheck, dictionaries, translators and the thesaurus.
ChatGPT and similar models are not a threat to writing. They are merely new tools that, used correctly, can further improve our writing. Use them wisely.
*This post was written by me, but edited and checked for grammar mistakes using AI.