Local media outlets have a pressing question on their minds: Is artificial intelligence working for or against them?
Local journalists like Judith Davidoff, editor of the Isthmus, are wary of its presence in the newsroom.
Davidoff said she has used AI as a tool, and she believes many of her colleagues have done the same.
“I haven’t asked it to generate content,” Davidoff said, though she uses it to brainstorm headline ideas for her articles. Even then, she never quotes directly from AI.
Transcription software is one way AI has served as a tool for Davidoff and her colleagues.
AI-based programs like Otter.ai create transcripts of audio recordings and let journalists search the transcript to quickly locate specific quotes, which can save valuable time in their frequently packed schedules.
Still, Davidoff cautioned that AI transcription services can make mistakes, and she doesn’t let herself rely on them too heavily. She said she typically takes notes by hand in addition to recording, treating the transcript as a supplement, since Otter often transcribes interviews imperfectly and isn’t always reliable on its own.
Despite this, she still notes how useful these AI tools can be for her and her coworkers at the Isthmus.
AI can be a powerful tool for journalists, but its implementation also poses risks. Newsrooms need resources more than ever to help them use AI responsibly and ethically.
One such resource is training programs and workshops, which Tomas Dodds, a journalism professor and founder of the Public Media Tech Lab at UW-Madison, aims to provide to local news organizations.
“In the newsroom, you don’t know how your colleagues are using AI,” Dodds said, adding that the Public Media Tech Lab would facilitate newsroom discussions where coworkers disclose how they use AI and then craft usage policies based on those conversations.
According to Dodds, it is important for newsrooms, no matter how small, to have access to AI technology that aligns with their individual values.
For Dodds, this often takes the form of a personalized, in-house large language model, a generative AI system akin to ChatGPT, that draws from a newsroom’s article database to serve as a tool for journalists within that organization.
However, there is a disparity between news organizations that can and can’t afford these technologies, and local newsrooms often have the least funding to implement them. Dodds said newsrooms lacking access to these in-house AI resources is deeply problematic, largely because it creates what he describes as professional dissonance.
“There is this mismatch that happens to journalists between the way that they think they should be doing their job and the way that they actually get to do their job,” Dodds said.
Davidoff pointed to a Wisconsin State Journal article in July that was taken down due to “unauthorized use of AI” as a warning of how easily AI use can slide into malpractice, even unwittingly.
“It’s an organizational failure of a media organization that is not having a conversation with their workers about how they’re going to use artificial intelligence,” Dodds said, adding that if news organizations don’t have these conversations with their journalists, incidents like the Brayton Lot piece could absolutely become more than just a journalistic anomaly.
While local media personnel and experts would like to believe such incidents are rare and guarded against, uncertainty remains about how effective those safeguards are.
“We have so few people in newsrooms all over,” Davidoff said, adding that understaffing makes AI misuse an even greater threat, since news organizations often lack the robust editing teams needed to catch it.