With AI finishing your sentences, what will happen to your unique voice on the page?

It’s a familiar feeling: You start a text message, and your phone’s auto-complete function suggests several choices for the next word, ranging from banal to hilarious. “I love…” you, or coffee? Or you’re finishing an email, and merely typing the word “Let” prompts your app to suggest “Let me know if you have any questions” in light gray text.

Predictive language technologies have become so routine – baked into smartphones, email services and chatbots – that we barely notice them anymore. But they raise a difficult question: What happens to a writer’s unique voice when AI routinely completes their thoughts – or generates them altogether from scratch?

As the chair of a large English department – and as a scholar who researches the effects of predictive writing – I’ve witnessed firsthand the challenges that generative AI systems such as ChatGPT, Gemini and Claude pose for individual expression.

This technology has been incorporated into the writing process so fully that it’s almost impossible to imagine encountering a scene from the not-so-distant past: a writer, alone, with a pen and a piece of paper, wrestling with how to best translate their ideas, arguments and stories into something legible and interesting.

Predictive text leads to predictive writing

As many scholars have noted, though, this vision of writing was never fully accurate.

Essays have always incorporated guidance from teachers, professors or writing tutors. A friend might give feedback, or your favorite novelist’s turn of phrase might offer inspiration. The language we use is never fully “ours,” but draws on millions of sources absorbed over the course of our lives.

Just as it’s a myth to imagine that writers compose in a vacuum, there has never been a clear line between genuine human expression versus machine-generated text. As scholars have pointed out, we have been using machines to communicate for a long time. Every technological development – from the quill pen and the typewriter to the word processor – has brought with it changes in how humans express themselves.

However, the ubiquity of predictive language technologies directly threatens human creativity – or, as one study put it, “Predictive Text Encourages Predictive Writing.”

Because generative AI composes and suggests text in highly standardized, predictable patterns, its outputs can read as if they’re dressed-up versions of what linguists call “phatic expression.” These are the formulaic phrases that function as social glue more than as conveyors of information: “How are you?”, “Have a good day” or “See you soon.”

But this glue can lose its hold if the technology is used in the wrong situations. Using artificial intelligence to compose a social media post in the wake of a tragedy, or using it to write a fan letter to an Olympic athlete, comes off as insincere.

People are starting to catch on to generative AI’s prose, not because it’s clunky or poorly written, but because it all sounds the same. That’s because large language models are trained on gigantic masses of examples of human writing, and they predict text based on probabilities and commonalities.

Those predictive outputs often end up producing a singular, recognizable voice. Or, as Sam Kriss explained in a recent essay for The New York Times Magazine, “Once, there were many writers, and many different styles. Now, increasingly, one uncredited author turns out essentially everything.”

Slouching toward a cultural mean

Generative AI is accelerating the types of cultural convergence and uniform expression that were already happening.

For example, linguists have shown that regional accents in the U.S. are fading and becoming homogenized due to a mix of migration, urbanization, mass media and social media. Meanwhile, American English continues supplanting many other forms internationally due to the global predominance of U.S.-based media, TV, film and more.

Are we all destined to write and speak alike? Generative AI doesn’t know in advance whether you call soft drinks “soda,” “pop” or “coke.” If you let it choose, it will simply select “soda” for you, since that’s the most common term in its training data.
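That tendency to default to the most common option can be sketched with a toy next-word predictor. This is only a minimal illustration, not how any real large language model works: the tiny corpus and the helper name `most_likely_next` are invented for the example, and actual systems use neural networks over billions of examples rather than raw counts.

```python
from collections import Counter

# A made-up miniature "training corpus" in which "soda" is the most
# common word for soft drinks, "pop" and "coke" less common.
training_phrases = [
    "I grabbed a soda", "want a soda", "buy a soda",
    "grab a pop", "drink a coke",
]

def most_likely_next(corpus, prev_word):
    """Return the word that most often follows prev_word in the corpus."""
    counts = Counter()
    for phrase in corpus:
        words = phrase.split()
        for w1, w2 in zip(words, words[1:]):
            if w1 == prev_word:
                counts[w2] += 1
    # The predictor simply picks the most frequent continuation.
    return counts.most_common(1)[0][0]

print(most_likely_next(training_phrases, "a"))  # prints "soda"
```

A regional writer who says “pop” gets overruled by sheer frequency: the predictor has no notion of who is writing, only of what is statistically typical.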

By contrast, what people typically value in a personal essay, novel, poem or message to a grieving friend is the ability of the human author to demonstrate – clearly and distinctly – something powerful and singular.

Making chatbots less appealing

So how can teachers compel students to craft their own voices? How is that task different today than it was even a decade ago?

It helps to think here about where generative AI struggles, and why.

Chatbots are great at creating relatively bland, highly readable prose, since that’s what is omnipresent in their training data. But they struggle to create the kinds of radically unexpected shifts that appear in novels like James Joyce’s “Ulysses” or songs like Queen’s “Bohemian Rhapsody.”

Several techniques exist to encourage these types of stylistic leaps among student writers.

Teachers can bake unpredictability into the assignment. Creative writing instructors have used techniques for decades to encourage out-of-the-box thinking. They might ask students to draft a poem and then rewrite it while avoiding the letter “E,” or limit themselves to two adjectives at most.

Another tactic involves having students draw from distinctly personal experiences. Teaching students how to connect characters and conflicts in a novel to people and situations in their own lives makes resorting to chatbots less appealing, if not altogether useless. By contrast, impersonal assignments – “Discuss the symbolism of the color green in ‘The Great Gatsby’” – will likely produce generic, predictable results.

Teachers can also ensure the work of their students has a range of readers. If it’s just the professor, students may be less likely to invest time into cultivating their own voice. But if they have to write an essay or story for, say, their friends or their grandparents, they might have more of an incentive to sound like themselves.

Many other strategies exist, from asking students to reverse their essay’s argument and defend the other side, to having them interview strangers for an assignment and include their quotes.

The bottom line: Writers have access to sources – and language – that machines cannot access or generate. Having students wrestle with unconventional modes of composition and revision lies at the heart of ensuring that the technology remains a helpful thought partner rather than a substitute for their voice.


Gayle Rogers, Professor of English, University of Pittsburgh
