Don’t AI my Science Art
If there is a buzzword in science and science communication, it is generative AI. Artificial intelligence technologies and tools may still be in their infancy, but they are here to stay. Generative AI consists of “deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on” (IBM).
It’s difficult to think of a field not touched by AI. Science benefits from AI’s ability to perform time- and computation-expensive tasks like predicting protein folding, cleaning large datasets, and filtering, categorizing, and summarizing massive amounts of data. Today, scientists use generative AI to “process data, write code, and help them write papers.”
Here’s the thing — as we collaborate with generative AI tools that produce human-mimicking outputs, we begin to anthropomorphize these tools and even think of them as human colleagues. Have you caught yourself saying please and thank you to ChatGPT, or asking it for opinions on personal matters? You aren’t alone. But these models aren’t human and don’t think, see, or interpret the world as humans do. They don’t understand context. That’s dangerous to forget as we rely on generative AI to help us do inherently human work, such as storytelling and art.
“Communication was and continues to be our most valuable innovation. It has assisted us in preserving and passing on our information, learning, discoveries, and intellect from person to person and generation to generation.” - Ethics Concerns in the Use of Computer-Generated Images for Human Communication, Journal of Ethics in Higher Education
Effective science communication depends on telling compelling stories about science and its impacts. But perhaps more importantly, it depends on building understanding, relationships, and trust between people within and outside the scientific community. Do we want to leave this task up to AI-generated content?
“Given the much-publicized propensity of generative AI tools to produce nonsense, science communicators should consider whether generative AI is in fact completely antithetical to the very purpose of their work.” - Amanda Alvarez, Science communication with generative AI, Nature
The Wild West of AI for SciComm
I have long been an emerging media optimist. In 2011, I jumped on the science blogging train. I saw the potential of blogging platforms, and then social media platforms, to elevate content created by individual scientists and science writers. Theoretically, these platforms allow minority voices to bypass traditional news media gatekeeping. (Whether those voices actually break through the noise is a different question.) These platforms promised to democratize science communication, allowing more voices into the mix and increasing public access to behind-the-scenes science.
Of course, user-generated content platforms also generated new problems. In the early “Wild West” days, blogging scientists jumped on their chance to address what they saw as a “war on science.” They often wrote content that — let’s just say — didn’t leverage best practices in science communication and trust-building. Bloggers didn’t have to abide by journalistic ethics and standards of using reliable sources, fact-checking, avoiding bias, and providing context. They still don’t have to, but over time science blogging has become professionalized, with communities of online science writers and institutional science blogs establishing their own standards.
I think AI-generated content is where user-generated online content was in the early 2000s, in its Wild West days, or its “frontier period characterized by roughness and lawlessness,” as Merriam-Webster defines the term. Too many people are embracing the theoretical potential of AI-generated content without considering its downsides.
Why AI-generated Science Art is Particularly Fraught
“Art and writing allow us to peer into each other’s lives in a very granular way, and with a lot of depth …” - Angie Wang
Scientists are increasingly getting at least some exposure to effective science writing practices and principles, but art and design training and skills development are rare. We’ve only recently started acknowledging professional artists, storytellers, and other creatives as critical collaborators in the science communication enterprise… and boom, AI art generators threaten to cut them out of the process. Why? Scientists and science communicators may use AI generators as a subpar alternative to working with artists.
Many artists and storytellers will tell you they aren’t worried about AI making them obsolete. It can’t compete with a human’s unique perspective. In the words of artist Angie Wang, what a diffusion model could “produce by abstracting from a lot of different artwork” is no match for “the nuances of another individual’s experience.” However, using AI to generate visuals for science communication is tempting because the outcomes are fast, cheap, and relatively effortless.
“Wouldn’t it be cool to imagine something and have it come into being right away? But spending your effort on something brings you to a certain timeless space to think through what you want to say, and allows you to infuse your perspective and your individual quirks and choices into your art. When you let a machine handle the details for you, you can get tricked into anchoring your vision to whatever’s been prefabricated for you, simply because what it produces is finished—a fully baked image (averaged from the creations of many other skilled artists). A finished image can trick your mind into thinking it’s a good image, but they aren’t the same thing.” — Angie Wang
Angie refers to artists using generative AI as part of their creation process, but related issues arise for scientists and science communicators using AI art generators. By using these tools to create visuals to communicate their science, scientists lose opportunities to learn from artists. There are amazing science artists and designers with the honed skill of engaging diverse audiences via awe-inspiring, thought-provoking, and educational visuals. Too often, we scientists are amazed by what an AI art generator can give us without considering how much better the results, how much more educational and perspective-shifting the process, and how much greater the impact would have been if we’d worked with a professional creator instead.
Why Scientists Shouldn’t Be Using AI-Generated Art to Share Science
TLDR: AI-generated visuals depicting science can be:
Inaccurate: “style without substance”
Difficult to fact-check and tweak
Biased
Loaded with ethical concerns
Energy-expensive
Oblivious to human emotion
I’m concerned by the number of AI-generated visuals I see in online science content. (And equally by how often these AI-generated visuals aren’t labeled as such.) The most obvious concern is that these visuals are inaccurate. Generative AI models can produce wildly inaccurate depictions of basic anatomy, for example. These models can “hallucinate” and spread misinformation, even when they have access to and pull their content from reliable sources.
Recently, I wrote an article for an institutional blog on the role of microglia — immune cells in the brain — in dementia. I used a smoke alarm analogy to describe how microglia can set off a reaction (calling in overzealous firefighters) that may inadvertently cause more damage. I submitted microscope visuals of microglia with the piece, but the editor came back with this AI-generated image as the header image.
You don’t have to look closely to see the issues with this visual. The ear blends into part of the brain. The brain anatomy is entirely off, not to mention the bones that look like mechanical pieces. On top of that, this visual is scary and Terminator-like — how could this possibly set a good tone for a nuanced scientific piece on the complex roles of our immune systems in fighting disease?
I couldn’t find an alternative visual that still reflected the fire analogy and that the editor liked. So, I drew my own visual using an iPad and Apple pencil.
I am no artist — my illustration skills are rudimentary. But I still think this works a lot better than the original visual.
AI-generated scientific illustrations are difficult to “fact-check” and tweak to correct inaccuracies or misrepresentations. AI image generators don’t handle fine-detail refinements well — things most artists can do easily. I’ve tried telling ChatGPT and DALL-E to simply swap one color for another in a generated image. The model couldn’t manage even that simple change.
I gave ChatGPT a prompt to create a visual based on the smoke alarm analogy described above. I went through six different iterations. Every time I tried to fix one thing (like more accurate brain anatomy), another error popped up. I mean, in some visuals, the firetrucks are shooting fire instead of putting the fire out… (talk about the AI model not understanding context!)
There are serious issues with all of these visuals, no matter how “glossy” they look at first glance. But I’d also argue that fixing the issues with these visuals is more work than creating something accurate by hand in the first place.
AI-generated images are loaded with ethical issues. Without creators’ consent, they vacuum up online visual content to “learn” from, producing amalgamations of all these images based on our prompts. In doing so, they also vacuum up any biases or stereotypes embedded in those source visuals. Even if we account for these biases in our AI generation prompts, we still risk erasing diverse cultures and visual identities in favor of an averaged “glossiness.”
“Using Midjourney [...] we attempted to invert [global health] tropes and stereotypes by entering various image-generating prompts to create visuals for Black African doctors or traditional healers providing medicine, vaccines, or care to sick, White, and suffering children. However, despite the supposed enormous generative power of AI, it proved incapable of avoiding the perpetuation of existing inequality and prejudice.” - Reflections before the storm: the AI reproduction of biased imagery in global health visuals, The Lancet
AI art generators aren’t human — they don’t have emotions or consider their audiences’ feelings. They’d be okay with making scarily hyperrealistic or negative-toned visuals for cancer patient education, despite evidence that these types of visuals may increase stress and raise health literacy barriers. (For a beautiful human take on educational materials about cancer, check out the work of this cartoonist.)
Relationship-building is a key component of effective science communication. We simply can’t rely on AI-generated content to facilitate these relationships.
Style without Substance
Art is powerful. We process visual information more quickly than text-based information, using areas of our brain that are also key for emotional processing. Images can impact our emotions and physiological state. They are often more memorable than text-based information, and a quality visual can automatically make content seem more true and trustworthy.
Visual elements can give science communication content style, that elusive mix of accuracy, beauty, enjoyment, provocativeness or inspiration, openness, and cultural relevance. Think of an eye-catching website or comic you’ve seen that got you interested or educated you about a new area of science, for example — what kind of visual style did it have? The style likely fit the content and uniquely suited your needs. According to illustrious science communication researcher Massimiano Bucchi, style can signal quality in science communication. It can serve as a proxy for various dimensions of quality.
Style without substance is a unique concern for AI-generated science art. In the past, we could often recognize poor-quality content or misinformation by its lack of style (think of spam emails littered with typos and odd text formatting, or a sketchy website with strange choices of colors, layouts, and pop-ups). But generative AI is upping the ante, whether it’s producing deepfakes or highly detailed but inaccurate 3D medical art. AI-generated visuals can give content a surface-level illusion of style.
AI-generated content can make it easier for people to fall for misinformation about science. But let’s turn that around… What if, to help people identify quality content about science online, we foster greater media literacy skills and discernment of AI-generated content? This is certainly a goal of many modern educators. Let’s even assume that in the future, these efforts are successful. What happens when you pair your science communication with AI-generated visuals that lack true style (god forbid the visuals include scientists with six fingers and odd-looking faces)? You’ve just lost public trust. (I know you’ve lost mine.)
“Malicious intent aside, as AI generated images are usually of high graphical quality and often visually stunning, inaccuracies can creep in and be overlooked at first glance by authors and reviewers alike.” - Can artificial intelligence help for scientific illustration? Details matter
Ok, But Can We Use AI Responsibly for SciComm?
Professional artists and storytellers reading this may point out that generative AI tools can help them improve their craft—if they use them responsibly. I don’t disagree. Just as a top-of-the-line camera is only worth its salt in the hands of an expert photographer, AI tools are best leveraged by creatives with knowledge of visual design principles.
“AI can also assist in generating ideas, concepts, and visualizations, freeing up creatives from repetitive tasks and allowing them to focus on more meaningful and innovative aspects of their work.” - Navigating Generative AI in Ethical Visual Communication
AI-generated science visuals need to go through the same human design workflow that gives any other science communication content its quality:
Background research and establishment of goals and audience for the visuals
Concept input from audiences and stakeholders
Collaborative ideation and brainstorming, with a process to ensure all voices are valued and heard
Iterative creation by expert creatives, with rounds of feedback and editing
Quality assurance: Review and vetting by domain experts
AI can assist science writers and artists by generating early ideas and inspiration. It can be great at curating and summarizing data like meeting notes and background interviews during the creation process. I use AI in this way as a science writer — asking ChatGPT to create bullet points from my interviews with scientists and brainstorm ideas for me that I then decide to incorporate or not (honestly, usually not). Ideally, AI can enhance creativity and productivity.
But using AI image creation tools as a quick fix to insert visuals into science communication materials at the last minute is not ok.
“Science communicators and visualizers will need to act as curators, using their knowledge and experience to select, refine, and validate AI-generated content. Our role will evolve to focus more on ensuring the accuracy and appropriateness of the visuals we create.” - AI for Science, Sayo Studio
Takeaways
Ultimately, generative AI is a tool that can be helpful for visual science communication, just as iPad drawing apps, Photoshop, Canva, BioRender, 3D art software, and many other tools and technologies have been. But unlike these other tools, DALL-E, Midjourney, and other AI art generators can hallucinate and create high-gloss “finished” visual products that are too easy to trust and yet difficult to fact-check and tweak.
AI-generated science visuals are prone to bias and inaccuracies. They pose issues regarding ownership and copyright, and their creation process might even involve theft from hard-working real-life creatives. Used by scientists or scientific institutions without professional creatives in the loop, these AI-generated visuals threaten to devalue scientist-artist partnerships and erode public trust.
What can you do? Critically weigh the pros and cons of using generative AI tools in every use case. Ask whether AI is genuinely helping you produce better science communication and art, given your audiences’ wants and needs. If not, the stakes are too high… collaborate with other humans first.