AI in the News

Some AI tools are making newsrooms more efficient; others are generating incorrect headlines and news summaries, presenting new information literacy challenges.

Last November, Apple’s new Apple Intelligence feature notified many iPhone users that the New York Times had reported that Israeli Prime Minister Benjamin Netanyahu had been arrested. In December, the BBC lodged a complaint with the company after Apple Intelligence cited a report from the news organization claiming that Luigi Mangione—the man arrested for the murder of UnitedHealthcare CEO Brian Thompson—had shot himself. A couple of weeks later, the service incorrectly notified users that recently retired tennis superstar Rafael Nadal had come out as gay. In January, Apple temporarily suspended the feature’s news and entertainment summaries to address these and other inaccuracies.

People who have been using consumer-facing generative artificial intelligence (AI) tools such as OpenAI’s ChatGPT, Google Gemini, Microsoft Copilot, or Perplexity AI may be familiar with these types of “hallucinations”—when a prompt causes an AI tool to fabricate a false but plausible-sounding response or even create fake citations. Software engineers are on the case, but Apple’s recent mishaps illustrate the emerging challenges librarians and the public face as AI generates incorrect information or is used maliciously to sow misinformation and disinformation.

Last month, the BBC published the results of a study in which ChatGPT, Copilot, Gemini, and Perplexity were prompted to summarize 100 news stories. Subject experts found that 51 percent of the summaries had “significant issues of some form,” while 19 percent of AI answers to questions about the news that cited BBC content had introduced factual errors, and 13 percent of the quotes sourced from BBC articles “were either altered from the original source or not present in the article cited.”
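
Checks like the altered-quote comparison are, in principle, straightforward to automate. The Python sketch below shows the basic idea of verifying that a quoted passage appears verbatim in the cited article; it is an illustration of the concept, not the BBC researchers’ actual methodology, and the sample text is invented.

```python
# A minimal sketch, not the BBC study's methodology: checking whether a
# quote attributed to an article actually appears in that article's text.
import re

def normalize(text: str) -> str:
    """Collapse whitespace and straighten curly quotes so formatting
    differences don't mask a genuine match."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_is_verbatim(quote: str, article_text: str) -> bool:
    """Return True only if the quote appears word-for-word in the source."""
    return normalize(quote) in normalize(article_text)

article = "The minister said the policy would be reviewed next year."
print(quote_is_verbatim("the policy would be reviewed next year", article))  # True
print(quote_is_verbatim("the policy will be reviewed next year", article))   # False: altered
```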

But major news organizations have been incorporating commercial and proprietary AI tools into their editorial processes for several years, with publishers generally maintaining that the technology frees up staff to conduct more complex reporting. For example, when the Washington Post launched its AI-powered Heliograf tool in 2016 to bolster the paper’s coverage of the Rio Summer Olympics—including reporting on medal results and providing information for the paper’s social media feeds and live blogs—Jeremy Gilbert, then-director of strategic initiatives at the paper, said it would help Post reporters focus on more in-depth coverage of the events.

“The Olympics are the perfect way to prove the potential of this technology,” Gilbert explained at the time. “In 2014 [at the Sochi Winter Olympics], the sports staff spent countless hours manually publishing event results. Heliograf will free up Post reporters and editors to add analysis, color from the scene, and real insight to stories in ways only they can.” Use of the tool has since expanded into coverage of other data-heavy stories including economic reports, election results, stock market trends, weather reports, and more.

Similarly, in 2014 the Associated Press (AP) began using natural language generation technology from Automated Insights paired with data from Zacks Investment Research to boost its coverage of earnings reports from 300 to over 4,000 each quarter. Instead of “spending a great deal of time focusing on the release of earnings and hammering out a quick story recapping each one…our journalists will focus on reporting and writing stories about what the numbers mean and what gets said in earnings calls on the day of the release, identifying trends and finding exclusive stories,” Lou Ferrara, then-VP and managing editor, said in an announcement. AP continues to use Wordsmith by Automated Insights for earnings reports and sports summaries.
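
Tools of this kind generally work by slotting structured data into pre-written sentence templates rather than generating free-form text. The Python sketch below illustrates the technique with a hypothetical earnings_summary function and made-up figures; it is not Automated Insights’ actual implementation.

```python
# A minimal sketch of template-driven earnings coverage, assuming a
# structured data feed like the one AP described. Illustrative only;
# not Automated Insights' Wordsmith code.
def earnings_summary(company: str, eps: float, estimate: float,
                     revenue_millions: float) -> str:
    """Turn one row of earnings data into a short recap sentence."""
    if eps > estimate:
        verdict = "beat analyst expectations"
    elif eps < estimate:
        verdict = "fell short of analyst expectations"
    else:
        verdict = "matched analyst expectations"
    return (f"{company} reported quarterly earnings of ${eps:.2f} per share, "
            f"which {verdict} of ${estimate:.2f}. Revenue came in at "
            f"${revenue_millions:,.0f} million.")

# Hypothetical company and figures, for illustration only.
print(earnings_summary("Example Corp", eps=1.12, estimate=1.05,
                       revenue_millions=4821))
```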

Eighty-five percent of respondents to a 2023 global survey of journalists said that their newsrooms were using or experimenting with generative AI tools, and 14 percent said they weren’t sure. The report covering the survey results, “Generating Change: A global survey of what news organisations are doing with AI,” was written by Prof. Charlie Beckett, leader of the London School of Economics’ (LSE) JournalismAI project, and Mira Yaseen, lead researcher for JournalismAI. Respondents reported using generative AI for a variety of tasks, including composing email, generating infographics or article summaries for their organizations’ social media channels and newsletters, brainstorming, rephrasing sentences, evaluating content quality, suggesting headlines, repurposing content for different audiences, detecting biases, and search engine optimization.

And news organizations and individual journalists are using AI-powered tools to summarize or translate documents, analyze data, monitor websites and their own inboxes to receive alerts on news of interest, extract transcripts and insights from video and audio recordings, and more.

AI tools “can make journalists much more efficient and effective, but I can also see that it’s actually going to make [us] have to think ‘what is the human value that we add?’” Beckett said during a November 2024 webcast, “AI and the Future of News: Revolutionizing Production, Editing, and Dissemination in Journalism,” presented by Reuters. “What is the reporting, the witnessing, the empathy that we add to our journalism?”

BEING THERE

Aside from larger questions about empathy, one of the most important roles journalists can perform in AI’s current state is providing oversight when they or their organizations use the technology. As Hope Kelly, assistant professor and online learning librarian for Virginia Commonwealth University (VCU) Libraries, notes, problems such as Apple’s incorrect headline notifications happen in the absence of that oversight.

“At the heart of those examples is the fact that there were fewer humans involved,” Kelly tells LJ. “My biggest concern with the current news environment is that there are fewer thinking people who are connected to human reality than there are bots generating simulated observations. With the predictive nature of these models, we end up with what can be excellent summaries [that] just don’t stand up to observable reality.... Having that discernment is so essential, and that’s going to be something that we need to push [for] as consumers of information and as advocates for accuracy, authenticity, and everything that we would expect from a reputable source.”

Journalists stressed the need for a “human in the loop” approach to AI in their responses to LSE’s global survey, but more than 60 percent expressed concern “about the ethical implications of AI integration for editorial quality and other aspects of journalism,” and 82 percent were concerned that AI technologies would further commercialize the journalism industry. One respondent wrote, “I think it’s going to result in a lot of mass-produced clickbait as news organizations compete for clicks. We will not be participating in that contest.” Another wrote, “If journalists rely on AI for content creation the same way as influencers do, it will be a huge threat to the industry. There have to be rules and boundaries.”

Librarians interviewed for this feature agreed that AI can be used ethically for brainstorming or otherwise getting started with the writing process, or to correct grammar or spelling, or even to help with copyediting after something is written. But they emphasized the need for transparency and clarity regarding whether something was written by a person or an AI bot.

“Getting started doing your research, looking for flaws in an argument is probably a fine way to do it,” says Nathan Flowers, head of systems, librarian, and professor at Francis Marion University’s James A. Rogers Library.

“What you’re going to hear a lot in the coming years—partly because it’s a copyright thing—is ‘human authorship,’” Nick Tanzi, assistant director of the South Huntington Public Library (SHPL), NY, tells LJ. “There’s going to be a dividing line. There are varied opinions on AI, but I think most people would say if I use Grammarly to correct my spelling, human authorship is still there. But what is the dividing line? At what point are you uncomfortable, and at what point is [there] a legal consequence where it’s no longer human authored, it’s AI authored? That dividing line is what we’re going to be arguing about—and I think it’s healthy to argue about—as we define our relationship with AI. But authenticity is the key.”

When anyone claims authorship of a work, Kelly says, the ethical use of AI should be straightforward. “I want folks to speak for themselves,” she says. “We have our own voices. We want to be heard and to articulate our own perspectives when we write, and we don’t want generative AI to speak for us.”

Trevor Watkins, teaching and outreach librarian for George Mason University, VA, says, “One of the things I try to impress on students is that you don’t want to depend on it.” Watkins, who also has a background in software engineering, is already fielding questions about AI and requests for input on AI projects from journalism students in George Mason’s communication department. In information literacy courses, he has begun including an exercise in which half of the class is asked to write a couple of paragraphs on a topic, and the other half is asked to use a generative AI tool to compose a couple of paragraphs on the same topic. During the next class, students work through which is which as a group. “Some of the students are able to figure it out because they’ve started using [AI] more,” he says. “They’ve started to see the patterns.”

TROUBLE WITH THE VOLUME

Watkins notes that “journalism was disrupted by social media. We went from a system of controlled information from traditional news media to anyone and everyone can report on anything. What was great about that is also what’s bad about it—some of the news that wasn’t being reported on in the past is being reported on now and has been over the last six or seven years—however, you have to determine what’s real and what’s fake, and there are a lot of people who don’t know the difference…. It’s important to not just read something and take it” at face value, Watkins says. “You always have to read it and be cognizant of the fact that it could be AI generated. You have to be skeptical.”

While software engineers are constantly working to improve generative AI, Tanzi also notes that many news organizations, authors, and publishers have begun fighting copyright battles with the companies behind these tools and blocking them from crawling their content. Notably, the New York Times sued both OpenAI and Microsoft for copyright infringement in 2023 over the companies’ use of its articles to train ChatGPT and Copilot. “This has its own unintended consequences,” Tanzi says. “We’ve all experienced when good journalism is behind a paywall; you see a really good article and you can’t read it because you don’t have a subscription…. When the New York Times—established legacy media—cuts off access, you’re now training these models on lower quality content. Now your end users are consuming worse or less trustworthy information” when they prompt a generative AI tool.
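
Much of that blocking happens in a site’s robots.txt file, where publishers can disallow AI companies’ crawler user agents such as OpenAI’s GPTBot. A short sketch using Python’s standard library shows how to check a given site’s policy; example.com and the article URL are placeholders, not claims about any particular publisher.

```python
# A sketch using Python's standard library to check whether a site's
# robots.txt blocks common AI-related crawlers. The domain and URL
# below are placeholders for illustration.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetches and parses the live robots.txt file

# GPTBot (OpenAI), Google-Extended (Google AI training), and CCBot
# (Common Crawl) are real user-agent tokens publishers can disallow.
for agent in ("GPTBot", "Google-Extended", "CCBot"):
    allowed = parser.can_fetch(agent, "https://example.com/2024/some-article.html")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```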

Separately, AI has compounded the problem of deliberately created disinformation. People can now easily use AI to generate photorealistic images, “deepfake” videos and audio recordings of famous people—or, increasingly, non-famous people—and convincing fake news articles. Sometimes they’re harmless memes; sometimes it’s malicious.

Flowers says “the amount [of misinformation] that they can create now is orders of magnitude larger. It’s just going to get more and more prevalent and it’s going to get more and more convincing.”

Many respondents to the LSE JournalismAI survey also expressed concern about the risk of generative AI tools increasing the scale and volume of misinformation and fake news, with one respondent writing, “Gen-AI will allow the production and distribution of disinformation at a scale we haven’t seen before—this will potentially impact news consumption, but also send people to more trusted sources.”

MATTER OF TRUST

Librarians, of course, can help patrons find those trusted sources, and offer information literacy instruction to help patrons navigate this changing landscape. Fortunately, while the volume of misinformation is an emerging challenge, most traditional information literacy advice still applies.

“It doesn’t seem like a whole new world to me in terms of information literacy,” Kelly says. There are “persistent models for teaching information literacy skills [and] many of these models bear out as strong—not always complete, not always perfect, but strong, in evaluating generative AI outputs the same way we would evaluate human generated outputs.”

South Huntington Public Library used this viral image of Pope Francis to explain AI image generators to patrons.

The lateral reading approach—evaluating the credibility of information by checking multiple sources—is “essential,” Kelly says. This also involves “thinking about the biases of both the author or the outlet and then our own biases and relationship to those things.”

That libraries offer access to high quality sources for news and other current information—many of which might be too expensive for patrons to buy or subscribe to—shouldn’t be overlooked, she notes. “If we’re suggesting to folks, ‘Go to reputable sources,’ it gets back to one of our main missions in the library world—making access to those high-quality resources a reality.”

Watkins also says that lateral reading remains a vital technique for assessing information but cites recent U.S. Department of Education statistics to note that about 130 million adults in the country—half of its adult population—have low general literacy skills. “You have to crawl before you walk,” he says. “Everyone is using social media to gather their information. If you can’t even read, or you don’t know how to look for [more authoritative] information, how can you understand what’s AI generated?”

SHPL began its mission to help its patrons better understand AI by creating an AI user group open to all staff, “because there’s nothing worse than having to learn and teach [a topic] at the same time,” Tanzi says. The library now offers in-person classes introducing patrons to AI and demonstrating AI tools, and its newsletter occasionally runs articles explaining the technology. For example, when an image of Pope Francis in a large white puffer jacket went viral on social media two years ago, the library used it as an opportunity to run an article explaining AI image generators. In addition to demystifying the technology and helping patrons better identify misinformation, the library is also aiming to help patrons understand AI-powered scams, Tanzi says.

There may also be potential for library partnerships with local news organizations. In 2022, research and advocacy organization Library Futures, with the help of program design consultants Hearken and MakeWith, conducted a three-month pilot program with the Albany Public Library (APL), NY, and the Times Union, focused on public-powered journalism. This reporting methodology does not involve AI, but it encourages local news outlets to seek input on what information readers, listeners, and/or viewers want to know. In some cases, members of the local community can be further engaged by having someone who has questions about a specific topic interview a reporter covering that topic, or even accompany the reporter as they are researching, helping them understand the process of reporting news and increasing trust in local journalism.

“As local news and journalism shifts and changes and starts to find new models, I think that libraries and particularly nonprofit newsrooms are a really natural fit,” says Jennie Rose Halperin, executive director of Library Futures. “The more that people know and encounter journalists in their communities, the more likely they are to trust the news.”

WHAT TO DO?

Flowers says that he has friends and colleagues who are “vehemently” opposed to AI on grounds including its potential uses for academic dishonesty or copyright infringement, or its potential to replace jobs. Regardless of those opinions, he says, librarians have a professional responsibility to learn about this technology.

“We have to be aware of it and understand how it works and what it’s capable of in order to effectively give recommendations to the people who are coming to us” with questions, Flowers says. “We don’t have all of the answers, but we should know where to look for the answers. And this is going to be something we’re going to be queried about a lot going forward. It’s a form of information, right? On a basic level, AI is generating information—whether it’s right or not. Librarians aren’t necessarily supposed to be the arbiters of authority, but we’re supposed to be aware of how information is organized and used.”

In some ways, the recent growth of consumer-facing AI tools is similar to the initial growth of the internet, Tanzi says. In the mid- to late-1990s, librarians were giving patrons basic tips like using .edu or .gov sites to find reliable data and authoritative information, he notes as an example. Now, with AI, librarians should be prepared to help patrons create better generative AI prompts and coach them to be skeptical. “You don’t want to be the source of your own misinformation,” Tanzi says. “Teaching patrons how to prompt better is going to get them better information. Teaching them to be skeptical about what they see [is also important]. Those first page Google results are not what they used to be anymore.”
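
What does a better prompt look like in practice? A minimal sketch follows, with wording that is illustrative rather than a library-endorsed template: it wraps a patron’s bare question in instructions asking the tool to cite its sources and flag uncertainty.

```python
# A minimal sketch of the kind of prompt coaching Tanzi describes.
# The instructions below are illustrative wording, not an endorsed
# template; the sample question is hypothetical.
def improve_prompt(question: str) -> str:
    """Wrap a bare question in instructions that encourage sourcing
    and honest uncertainty from a generative AI tool."""
    return (
        "Answer the question below. Cite the specific sources you are "
        "drawing on, say so explicitly if you are unsure or if the "
        "information may be out of date, and do not invent citations.\n\n"
        f"Question: {question}"
    )

print(improve_prompt("Who won the most recent mayoral election in Albany?"))
```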

Whether it begins with a staff AI user group like SHPL’s or a LibGuide like the one Kelly and colleagues wrote for VCU, libraries should be finding ways to prepare for these types of patron questions about AI and the current news environment.

LSE’s JournalismAI global survey report concludes with a list of “Six Steps Towards an AI Strategy for News Organisations”—a solid list of guidelines that many types of organizations could repurpose.

  • Get informed: Learn more about AI. Numerous online resources are available, including LSE’s JournalismAI Starter Pack. The report itself includes a glossary, a thorough list of linked citations, and recommendations for additional readings and resources. Many librarians suggest simply using AI tools more frequently to learn their current capabilities and limitations.
  • Broaden AI literacy: The report states that everyone within a news organization needs to understand the components of AI that are impacting journalism “because it will impact…everyone’s job.” The same could be said for libraries and many of the fields in which library patrons work.
  • Assign responsibility: Have someone on staff monitor AI developments within your organization but also more broadly, and have them keep an organizational conversation going about AI.
  • Test, iterate, repeat: To use AI ethically, the report says organizations should “experiment and scale but always with human oversight and management. Don’t rush to use AI until you are comfortable with the process. Always review the impact.”
  • Draw up guidelines: Authoring “general or specific” guidelines offers an opportunity for an organizational learning process when all stakeholders are engaged in their creation. “Be prepared to review and change them over time,” the report suggests.
  • Collaborate and network: A library specialty. The report suggests partnering with other local institutions that are working with AI, including local colleges and universities, businesses, and newsrooms, to learn more and help one another.

“Whether you are excited or appalled at what [generative] AI can do, this report makes it clear that it is vital to learn and engage with this technology,” the report states. “It will change the world we report upon. It needs critical attention from independent but informed journalists.”

Matt Enis

menis@mediasourceinc.com

@MatthewEnis

Matt Enis (matthewenis.com) is Senior Editor, Technology for Library Journal.
