What you need to know
- News outlet CNET published around 75 articles that were generated using an unspecified AI engine.
- Futurism, a separate publication, reported on the use of AI by CNET last week.
- A follow-up article by Futurism highlighted several mistakes in CNET's AI-generated articles, even though those pieces were said to have been reviewed by human editors.
A recent pair of articles by Futurism highlighted some of the weaknesses of artificial intelligence. News outlet CNET generated around 75 articles with an unspecified AI tool. Futurism reported on the practice last week, sparking debate across the web. Chief among the criticisms were that CNET did not disclose the program in advance and that the disclosure on each bot-written article was originally hidden.
Following Futurism’s coverage, CNET Editor-in-Chief Connie Guglielmo shared a piece about the use of AI to create articles. The outlet has since made changes, such as altering the previous byline of “CNET Money Staff” to simply “CNET Money.”
Guglielmo claimed that all of the AI-generated articles were “reviewed, fact-checked and edited by an editor,” specifically a human editor. Despite that disclaimer, issues managed to get through the system.
In a follow-up article, Futurism pointed out mistakes within multiple CNET pieces. Below is just one example (emphasis added by Futurism):
“To calculate compound interest, use the following formula:
Initial balance (1+ interest rate / number of compounding periods) ^ number of compoundings per period x number of periods
For example, if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.”
The financial figures shared within CNET’s original piece are incorrect, as pointed out by Futurism. The example listed would result in a person earning $300 in the first year, not $10,300. That’s a significant difference and one that would likely have been caught by a human writer or editor.
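The arithmetic is easy to check. A quick Python sketch of the standard compound-interest formula (the function name here is ours, not CNET's) shows the correct figures for the example above:

```python
def compound_interest(principal, annual_rate, periods_per_year, years):
    """Future balance under compound interest:
    principal * (1 + rate / n) ** (n * years)"""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# CNET's example: $10,000 at 3% interest, compounding annually, for one year.
balance = compound_interest(10_000, 0.03, 1, 1)
earned = balance - 10_000

print(round(balance, 2))  # 10300.0 -- the ending balance
print(round(earned, 2))   # 300.0 -- the interest actually earned
```

In other words, $10,300 is the depositor's total balance after one year; the amount *earned* is only $300, which is the distinction CNET's AI-written text got wrong.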
The piece by Futurism illustrates many of the weaknesses of artificial intelligence. Most notably, AI often generates text that sounds authoritative regardless of whether the content is factually correct. It also shows that AI-assisted writing can contain factual errors even when humans review it.
Windows Central take
I have no doubt that artificial intelligence will at some point be able to generate news articles. In fact, Futurism highlighted that the Associated Press has used AI to create articles since 2015. But in those instances AI was used to fill in templates, not create entirely new text.
As someone who has written thousands of news articles, I readily admit that some posts are formulaic. An article about an app update that includes a change log or a Windows 11 Patch Tuesday update could probably be generated with an AI tool. But content that does not fit within tight parameters requires a human touch, at least for now.
I also want to emphasize that no writer is perfect, human or otherwise. I’ve certainly made mistakes and had to update articles accordingly. But I feel like there’s a sense of accountability when a human creates an error. I’ve had bosses message me when I shared a draft with an obvious mistake. I’ve also had several editors work with me over the years to improve my writing.
If AI is to be used in the creation of content, it needs strict human oversight to catch exactly these kinds of errors. Unfortunately, in the case of some of CNET’s articles, that oversight fell short.