Creative Science Writing in the Age of ChatGPT
Human creativity is hard to beat.
I’m thinking of poetry, of art, of my favorite science fiction novels and authors. I’m thinking of Vonnegut, Bradbury, C.S. Lewis, Tolkien, Asimov, Heinlein, Octavia E. Butler, and my new obsession, Sylvain Neuvel. These writers broke down walls of possibility in the form, content, symbolism, expanse and reach of their work. I mean, Tolkien’s ability to build a map of a whole new world in your mind? Enough said.
We have a keen ability to dream up new scenarios, objects, characters and worlds. Our ability to create meaning by layering our observations of the world with our emotions and our imagination is unparalleled.
How close can AI really get to our potential for meaning-making?
Like many others, I’ve been playing with ChatGPT a lot over the past week. I’m impressed by this technology. I’m most excited by how quickly she (I’ve decided ChatGPT is a woman - we tend to anthropomorphize software with conversational abilities) can curate and summarize information on a topic of interest. Although any responsible human will see the need to verify the chatbot’s responses (she has no qualms about “lying”), ChatGPT can supply a great starting point for background research on many topics. I’ve used ChatGPT to inform my Google search terms on new-to-me topics for an accelerated writing process.
I’m also impressed by how accessible she makes her responses. Her writing is decent - she writes in short, digestible paragraphs. Her responses to questions and requests for stories demonstrate a solid flow of information: a beginning, a middle and an end.
What I’m not as impressed with is ChatGPT’s potential for creativity. Her approach to creatively answering a question involves mashing up, albeit cleverly, things she has seen before. Her stories often read like a vague plot summary without the meat of what makes a story a story: specific and vivid details, complex characters who don’t always make sense, and the insights of people facing down capital-L life.
“The AI chatbot is trained on text from books, articles, and websites that has been ‘cleaned’ and structured in a process called supervised learning.” - Amit Katwala
ChatGPT’s responses often lack depth and nuance. Great (human) writers carefully choose every word and detail to fit a message, feeling or meaning they want to communicate. They carefully consider their audience (and their characters) and imbue their writing with relevant cultural, social and historical context.
Great writers also leave much of themselves on their pages - their secrets, vulnerabilities, fears, quirks, mistakes, irrational thoughts, unanswered questions, loves and passions. We resonate with other people’s vulnerable accounts of their own experiences. That writing quality is something that ChatGPT will always lack.
(If I were a great writer, I’d admit that I’m terrified of becoming obsolete as I age as a woman in science communication. Case in point, the common saying “explain it like you would to your grandmother.” I feel the pressure to stay current. Case in point, this blog post!)
ChatGPT will say anything she thinks sounds good (or rather, what she’s been trained to recognize as a plausible human answer). She “generates text based on patterns … digested from huge quantities of text gathered from the web.” That means responses are rife with misinformation. ChatGPT has a limited ability to write content that is creative and that accurately speaks to human experience.
What does this mean for creative science communication?
ChatGPT could be a helpful tool for science writers and scientists wanting to make their work more accessible to broad audiences. But to be valuable, ChatGPT needs a very discerning user. Asking ChatGPT to explain any scientific concept or finding you don’t fully understand yourself is asking for trouble. You need to fact-check every generated statement before sharing it with others. The chatbot can’t reveal its sources, and there’s no guarantee it’s accurately representing its sources anyway. ChatGPT is no search engine.
Ok, so we can’t trust ChatGPT to accurately answer science questions, and she’s not that great at creatively communicating about the science we already know. (For example, a chatbot’s explanation of a scientific discovery can’t replace the story of that discovery through the eyes of the people involved - the struggles they faced, the purpose they found, the emotions they felt.) So what is ChatGPT good for?
Well, she’s pretty entertaining. Playing with ChatGPT is motivating me to write more. Asking for writing prompts or ideas is a great way to overcome writer’s block! The chatbot’s responses have given me idea nuggets to explore further. I asked her for some character and plot ideas around a particular scientific topic, and then I played with writing stories based on her answers.
Refining ChatGPT responses in an iterative process, with a goal in mind, can help a writer refine what they really want to say. What rings true and what doesn’t in the chatbot’s responses? What facts or perspectives are missing? Where does the logic break down? How could YOU improve what the chatbot has to say?
We assume that ChatGPT’s responses are somehow optimized because the model draws on far more data than any individual could. But I’ve asked ChatGPT to copy-edit my work several times and have been disappointed with the results. Her edits and explanations are oversimplified, circular, repetitive and boring compared to my original text, in my opinion.
“It relies heavily on tropes and cliché, and it echoes society’s worst stereotypes. Its words are superficially impressive but largely lacking in substance.” - Amit Katwala
So before you use ChatGPT to help you creatively communicate science, here are some things to consider.
The strengths of ChatGPT:
It can help you brainstorm.
It can help you find simpler ways to explain complex science.
It’s decent at providing simple explanations and definitions of common scientific terms or concepts.
It can supply writing, character or plot prompts to inspire your creative writing.
It can help you quickly identify key points or pieces of information on a science topic (verification or fact-checking required).
The weaknesses of ChatGPT:
It generates responses based on content and data that are a few years old.
There is no guarantee of accuracy, ethics or diversity of thought in its responses.
It will make things up and present them as fact. Take nothing it writes on faith!
It can create engaging science content that has no basis in reality. That’s dangerous. People are more likely to believe and remember content that has style or is story-driven, conversational, eloquent and aesthetically pleasing. ChatGPT can produce very true-sounding BS.
“It's pulling from texts on the internet. If it's not limited to rigorous scientific research papers, the foundational text could be flawed.” - Lauren Goode
What does ChatGPT have to say for herself? She generally agrees: “While I can assist in creative tasks, I don't have the potential to be creative like a human mind does.”
I asked ChatGPT to write a summary of this post “as Kurt Vonnegut would have.” Her response: “If you want writing that’s truly creative, imaginative and powerful, then stick with us humans. We may not be as fast or accurate, but we’ve got something that machines can never have: a heart.”
It’s funny that the chatbot claims superior accuracy - not so fast, my friend. First, you’d be stumped if you didn’t have Vonnegut’s work to imitate (which you only did in style, at most). Second, I removed many empty words from your initial response.
Finally, I think Vonnegut would have said something more like this: They will say, ‘wait till you see what AI chatbots can become.’ But it’s you who should be doing the becoming. What you can become is the miracle you were born to work - not the damn fool chatbot.