AI in the Newsroom: How AI Disclosure Policies Impact the Reception of Automated News

By Dr. T. Franklin Waddell
News organizations are increasingly experimenting with generative AI. For example, the Associated Press has tested AI in its newsroom for years. More recently, ESPN has been using generative AI to publish news stories about underserved sports.
Despite this experimentation, many readers remain skeptical of AI in the newsroom and view AI-produced news as untrustworthy.
Organizations such as Poynter argue that every newsroom should adopt an AI ethics policy outlining the rules for how AI may be used. If journalists explain how AI is used ethically, will audiences be more receptive to AI-produced news?
Two Studies Involving AI in the Newsroom
Dr. T. Franklin Waddell from the Department of Journalism at the University of Florida conducted two studies that tested how an AI ethics policy might improve trust in AI-produced news.
Each study was an experiment that tested whether responses to an AI-written news article would vary if readers were told beforehand that the news organization always disclosed its use of AI, fact-checked AI-written content before publication, and used AI with the intent of helping readers.
Both studies found that AI-written news was seen as more trustworthy and more worthy of subscription when accompanied by an AI ethics policy. AI ethics policies also increased perceptions that the news organization was transparent, fact-checked its work, used AI ethically, and had good motives for using it.
This work shows that audiences do not assume journalists use AI responsibly unless the news organization clearly explains its practices to readers. Ethics policies are thus essential for news organizations that wish to experiment with AI while maintaining the trust of their audiences.
Posted: April 9, 2025
Category: UF CJC Online Blog
Tagged as: artificial intelligence, digital journalism, T. Franklin Waddell