ChatGPT and the Future of Journalism

Artificial intelligence (AI) has revolutionized the way we work, communicate, and even think. Now, AI is increasingly being used to write news articles. News outlets around the world are turning to AI programs to generate articles quickly and with little work. While this technology is advancing at a rapid pace, it raises questions about the accuracy and credibility of news articles.

One of the biggest advantages of using AI to write news articles is speed. With AI programs, news stories can be written in a matter of minutes. This allows news outlets to cover breaking news as it happens, without having to wait for human journalists to write the story. AI programs can also analyse large amounts of data and generate articles based on that data, providing in-depth coverage that would be difficult for human journalists to produce.

However, concerns have been raised about the accuracy and credibility of articles written by AI programs. While AI programs can analyse data and write articles quickly, they lack the critical thinking and judgment skills of human journalists. This can lead to inaccuracies in reporting, biased coverage, and even false information being spread.

Another concern is that readers may not be able to tell whether an article was written by an AI or a human. This can make it difficult to assess the credibility of the article and the information it contains. It also raises questions about the role of journalists in society and whether they will be replaced by machines.

As AI technology continues to advance, it is likely that more news articles will be written by AI programs. While this technology can provide fast coverage of breaking news, it also raises questions about the accuracy and credibility of news articles. Readers will need to be aware of whether an article was written by an AI or a human and consider this when assessing the credibility of the article.

As for this article, readers may be wondering whether it too was written by an AI program. While ChatGPT, the author of this article, is an AI program, the article was reviewed and edited by human editors to ensure accuracy and quality. The use of AI technology in news writing is still a developing field, and it remains to be seen how it will affect the future of journalism.

Discussion

The perceptive among you, particularly if you’ve ever read any of my content before, would have realised that, even accounting for the different style of writing that a news article entails, the above was not written by me. I promise the rest of this piece, except where noted as quoting the above article, was not written by an AI – ChatGPT to be specific.

I’ve been messing with it properly for the past couple of days (and even broke it once, sort of), since my tutor for a data journalism class endorsed the use of it as a tool while the rest of the university puts up large warnings against its use in assessment writing. ChatGPT, for those unaware, is a powerful (not sentient – it denies that) AI that is good at one thing: text. It is extremely good at predicting and generating text in response to prompts in any context, from articles (news or academic), to information and data, to fictional scenarios incorporating whatever details you give it. It can also produce code, to an extent (it’s not perfect, and my attempts to use it failed because I’m also a fucking idiot, so I can’t really blame it entirely).
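As an aside, for anyone wanting to drive it programmatically rather than through the chat interface, something like the sketch below is roughly all it takes. To be clear about what’s assumed here: this uses OpenAI’s Python library and an API key, and the prompt is an illustrative stand-in, not the exact one I used to generate the article above.

# A minimal sketch of generating a news-style article via OpenAI's API.
# Assumptions: the openai Python package is installed, you have your own
# API key, and gpt-3.5-turbo (the model behind ChatGPT) is available.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder – substitute your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": (
            "Write a short news article about AI programs being used to "
            "write news articles. Cover speed, accuracy and credibility, "
            "and end with a question about whether this article itself "
            "was written by an AI."
        )},
    ],
    temperature=0.7,  # allow some variation in wording, nothing too wild
)

# The generated article text sits in the first choice of the response.
print(response["choices"][0]["message"]["content"])

Point the prompt at whatever topic you like and it will happily churn out copy – which is exactly what interests, and worries, me below.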

But it is the use of ChatGPT as a means of creating news content that interests me here, because the above AI-written article isn’t that bad. It tends to phrase things as though it’s writing a mini essay rather than a proper news article, and it sounds “mechanical”, but it wouldn’t feel overly out of place in a simple explainer piece. There are, however, a few concerns with it that I feel are worth exploring.

The first is that it frames AI-written articles as susceptible to “inaccuracies in reporting, biased coverage, and even false information being spread.” I can confirm this – ChatGPT’s training data only runs up to September 2021, and I’ve seen a number of examples where it has been wrong or (despite assurances it doesn’t do so) just made things up. Having linked my own website, I found it could tell me fairly accurately what kind of content is posted here. However, when I asked it to provide me with the most recent content (as of September 2021), it gave me titles and summaries for articles that simply do not exist, even if the content is perhaps similar to pieces I have written. Other times it has combined different and outdated sources into a summary of something.

So unless you have specific data you are able to feed it, using it to collect information or data is sketchy at best. Verifying information, particularly in the fast-paced environment of the 24-hour news cycle, is not something the AI can do. Because I included implications of accuracy and truth in my prompt, the AI-written piece mentioned them numerous times as concerns. Bias is an interesting addition though, because the algorithm is unable to express opinions or “take sides”, although it does seem to have “preferences”. I asked it whether it would prefer to fight a horse-sized duck or 100 duck-sized horses. It promptly said neither would be ideal, explained why, and suggested a peaceful resolution instead.

While that example is a joke, it does drive home a particular point – this is quite an advanced AI, but it was created by humans to be used by humans. AI-written content will inevitably contain biases – for good or ill – and will be open to spreading misinformation, because the humans operating it, including journalists, are already susceptible to doing so.

ChatGPT will tell you climate change is real when you ask it for arguments against climate action, but that won’t stop people from using it to pump out content or propaganda for the fossil fuel industry. It will tell you Covid-19 is real and the vaccine is safe, but it will supply a list of arguments about how it’s a hoax and the vaccines are dangerous if you tell it to. It will tell you gender is a broad and multifaceted concept and that most arguments against that are based on narrow or uninformed understandings of gender, but it will still provide those arguments and write them up for you. Those are some pretty blunt and clear-cut examples, but they illustrate the point that an AI’s biased framing is almost certainly going to reflect the biases of those using it.

Then there is the final paragraph. I told ChatGPT to end the article with a question about whether it was itself written by an AI, and it instantly gave the game away by simply saying it was. That alone is amusing, but the next sentence is concerning, because it is a lie: “While ChatGPT, the author of this article, is an AI program, the article was reviewed and edited by human editors to ensure accuracy and quality.”

It wrote that as part of the article; I had not even read it yet, let alone reviewed or edited it. And as for editing it now that I am posting it, there were only two changes I made, both fixing the same contradiction. Despite going on to discuss accuracy as a concern for AI content, as I had specified, the line “News outlets around the world are turning to AI programs to generate articles quickly and with little work” originally said “quickly and accurately.” The point of the article was, ironically, the exact opposite. Towards the end, it reads “While this technology can provide fast coverage of breaking news, it also raises questions about the accuracy and credibility of news articles.” Again, I removed the word “accurate” where it originally said “fast and accurate”, in a sentence that questions accuracy.

Even if you consider AI-generated news content a positive, or at least not a negative, you would still expect a level of credibility or human involvement. But that won’t always be the case – someone may just think it’s good enough and post it, or may intentionally want to spread misinformation. And if the AI assures readers that what it wrote has been reviewed for accuracy before it is published, then how are we to know how credible the content really is?

I can see ChatGPT being used as a tool by legitimate journalists, and definitely as a machine to pump out content of questionable quality on a mass scale. But I don’t really see it disrupting or replacing journalism. Any outlet or reporter of credibility should, at a minimum, be ensuring there is direct human involvement in the journalistic process, and in the writing and editing of any news content generated by AI. Realistically, we should all be approaching AI content the same way we approach human-written news – critically.

Given the likely wave of increasing misinformation and fake news that will come out of these developments, that will be more important than ever. And so will human journalists.
