Use ChatGPT in science communication. Or don't.

ChatGPT is a fairly new online tool that can generate text and mimic a chat conversation similar to one you would have with another human being.

ChatGPT is a chat bot based on artificial intelligence (AI). The chat bot is trained on large amounts of text originally written by humans, and based on all this input it generates a response to any question or comment you might have. The response is composed by stitching together words pulled out of texts written by humans, so the response itself will also seem human-like. This is really smart when you need inspiration for grocery shopping, storytelling, or deciding whether to paint your wall green or blue, but the smartness decays rapidly when it comes to any fact-based text.

In this post, I want to share my thoughts as an astrophysicist and science communicator on the topic, because this is a debate I keep getting swirled into. Let’s dive in.

How ChatGPT Works

In many ways, the launch of ChatGPT marks humankind's arrival in the future, and as with anything new and techy, we need to figure out how to use it to our advantage and how to protect ourselves from potential abuse. The first step to mastering these overwhelming challenges is to understand, at least at a high level, how this new tool works. We need to figure out what we can use it for and, perhaps more importantly, what we cannot use it for.

In the engine room of ChatGPT lies a model built on a big pool of texts written by humans. A BIG pool. I don't know how many texts (I'm not sure anyone outside OpenAI does?), but it seems to be on the order of several terabytes if you ask ChatGPT itself. By rough numbers, that is enough to include hundreds of thousands, probably even over a million, e-books (assuming one e-book is about 2.6 MB, one terabyte holds over 380,000 books).
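If you want to sanity-check that estimate, the arithmetic is simple. Here is a minimal sketch in Python; the 2.6 MB average e-book size and the 3 TB corpus size are my assumptions for illustration, not official figures:

```python
# Back-of-envelope estimate: how many e-books fit in a text corpus?
# Assumptions (not official figures): an average e-book is ~2.6 MB,
# and the corpus is a few terabytes in total.
EBOOK_SIZE_MB = 2.6
MB_PER_TB = 1_000_000  # decimal units: 1 TB = 10^6 MB

books_per_tb = MB_PER_TB / EBOOK_SIZE_MB
print(f"Books per terabyte: {books_per_tb:,.0f}")      # ~384,615
print(f"Books in 3 TB:      {3 * books_per_tb:,.0f}")  # ~1,153,846
```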

The chat bot was fed this huge amount of text and can thus recognize which words usually go together in a certain context. For example, many of those texts probably say that the color of a tomato is red, so when you ask "What color is a tomato?", ChatGPT will easily compose the answer "A tomato is red," because that is what it is likely to find in many of the tomato-related texts.
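To make that intuition concrete, here is a toy sketch of the core idea: counting which words tend to follow which in a small corpus. To be clear, this is a deliberate oversimplification on my part; ChatGPT is a huge neural network, not a lookup table, but the principle of learning which words go together is similar:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
# ChatGPT learns far richer patterns with a neural network, but the
# core idea of capturing which words go together is the same.
corpus = [
    "a ripe tomato is red",
    "the tomato is red and round",
    "an unripe tomato is green",
]

follower_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follower_counts[current_word][next_word] += 1

# The most common word after "is" in this corpus:
print(follower_counts["is"].most_common(1))  # [('red', 2)]
```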

This is, in very simple terms, how the chat bot works. When questions become more complex than the color of a tomato, the bot will simply juggle words around to compose sentences that are likely to be found in texts. However, the bot spices up the answers with some degree of "statistical randomness" to make the final sentence sound less... artificial.
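In language models like this, that randomness is typically controlled by a sampling parameter often called temperature. Here is a minimal sketch, with entirely made-up word scores, of how temperature changes which word gets picked:

```python
import math
import random

# Sketch of temperature sampling: instead of always picking the most
# likely next word, sample from the distribution. Higher temperature
# flattens the distribution, giving less likely words more of a chance.
def sample_next_word(word_scores, temperature=1.0):
    words = list(word_scores)
    # Softmax with temperature over the raw scores.
    exps = [math.exp(score / temperature) for score in word_scores.values()]
    total = sum(exps)
    probabilities = [e / total for e in exps]
    return random.choices(words, weights=probabilities, k=1)[0]

# Hypothetical scores for the word following "A tomato is ...":
scores = {"red": 2.0, "green": 0.5, "blue": -1.0}
print(sample_next_word(scores, temperature=0.2))  # almost always "red"
print(sample_next_word(scores, temperature=2.0))  # sometimes "green" or "blue"
```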

How ChatGPT *Does Not* Work

You will notice (since you are a science fan, otherwise why are you here?) that nowhere in the description above is there a step for source critique. A language model like ChatGPT does not check facts or the reliability of its sources. It simply generates sentences based on what has been added to its gigantic pool of input text. However, because much of this input text was already fact-checked by the humans who wrote the original texts, the answers are likely not too wrong. But there is zero fact-checking, and any correct answer you get when asking a scientific question is pure luck. Or statistics, if you will.

I recently had discussions on social media with journalists who seem to hold some form of anecdotal evidence that ChatGPT is "doing alright" and that whatever question the journalist submitted was "answered fairly correctly". Yes, of course. It's bound to be. That is how the bot works. Of course it is likely to be correct in most cases, simply because it is based on texts written by humans, and humans tend to be fairly rational when they write scientific books. But this does not make the chat bot itself scientific or accurate when it comes to facts.

As some of you know, I run a Patreon for a number of people that could fit around my dinner table. Right now we have a Moon theme in the monthly posts, and in one of my recent posts I took the opportunity to see what kind of help ChatGPT would be when writing a science communication post. Spoiler alert: I ended up abandoning the tool altogether. Not because it was very wrong, but because I never knew when it was wrong (for the reasons mentioned above). No matter which question I asked, I would still have to find reliable sources like NASA, ESA, etc., to confirm or debunk whatever reply I had gotten from the bot. Eventually, I decided it would be easier to skip that middle step altogether and go straight to the source from the get-go.

Despite this decision in my own work, I by no means advocate against using ChatGPT for science-related questions. Not at all. ChatGPT is amazing for anything that isn't scientific by default: storytelling, email composing, editing and phrasing, gift ideas and what have you. It's no doubt an amazing and impressive tool, mainly because it is so easy to use. Plus it's fun to interact with, which is definitely a feature we need more of in science.

But in order to have good experiences that won't deteriorate society into a nutcase cluster, we need to learn how to use it in scientific contexts. And we can easily learn this, because we know how the bot composes its answers.

ChatGPT Can Kickstart Your Scientific Curiosity – Like A Star Wars Movie

ChatGPT should definitely be used as a kick-starter for any scientific topic you are curious about. Ask for five facts about space exploration, or request a list of the largest known asteroids. It will be a fun interaction, and you will get some material to base your scientific investigation on. Use whatever response you get as a starting point for finding source information on the topic at the bigger institutions such as NASA and ESA, or wherever you get your factual science fix these days.

If you find a scientific text that is cumbersome to get through, why not ask ChatGPT to reformulate it or summarize it in bullets? Maybe have it compose a poem about dark matter (I know a dark matter scientist who actually did this). This will give you a nice condensed piece to continue your knowledge hunt at the primary source: scientific researchers and the media outlets covering their work. In other words: use ChatGPT to trigger curiosity, but don't use it as a source of factual information, because it's not one. Much like a Star Wars movie is not a physics class, ChatGPT is not a scientist.

(Oh, and if you're a science journalist: please don't use ChatGPT to write your pieces on scientific research.)

Concerns About ChatGPT In Science Communication

Despite all the praise I have for this piece of engineering, I also have concerns. My biggest concern about using a language model such as ChatGPT in science communication is not so much the output as of right now. Most of the answers will be correct, and currently the bot is not, to my knowledge, widely used as a way to spread scientific information.

However, once the bot starts to train on texts generated by an earlier version of itself or by other AI text generators, I am concerned about the "factual" texts it spits out. Answers will have factual and scientific language, but they won't be factual and scientific in themselves. They won't even be based on a text from a scientist. Instead, they will be based on input from a language model with no source critique in place. Even if a source critique station is added to the text generator, that station will likely be based on AI too. Does an AI make mistakes more often than a human? Maybe not, but tracing where an error originated will be opaque, in contrast to the option of tracing a claim back to Professor Steve and putting the arguments in the context of actual scientific research.

Once AI-generated science texts start to float around online, it will become exponentially more difficult to debunk misinformation and clowny statements wrapped in scientific jargon. Especially in space science, a field that already requires such peculiar knowledge about events and theories. If we don't understand what the scientists say, we might be inclined to ask a chat bot, simply because we have exhausted all other practical options. And if we then present the bot's reply as a piece of scientific news, society will spiral into something that belongs in a fiction book.

I think it is clear as a blue sky that science communicators are needed now more than ever before. But it does worry me that my fellow science communicators and I might be sought out less than ever before, simply because there is an easier and more fun alternative: the AI chat bot.


If you liked this post, please consider supporting me on Patreon.