Published On: Tue, May 23rd, 2023

AI and news literacy: 3 key takeaways


Here’s the latest installment of a regular feature I’ve been running for several years: lessons from the nonprofit News Literacy Project (NLP), which aims to teach students and the public how to sort fact from fiction in our digital — and contentious — age. With rumors, baseless accusations and conspiracy theories spreading across social and partisan media sites, this skill has never been more important in recent U.S. history.

The material in this post comes from the Sift, the organization’s newsletter for educators, which has nearly 22,000 subscribers. Published weekly during the school year, it explores timely examples of misinformation, addresses media and press freedom topics, looks at social media trends and issues, and includes discussion prompts and activities for the classroom. Get Smart About News, modeled on the Sift, is a free weekly newsletter for the public.

NLP has a free e-learning platform, Checkology, that helps educators teach middle and high school students how to identify credible information, seek reliable sources, and know what to trust, what to dismiss and what to debunk.

It also gives students an appreciation of the importance of the First Amendment and a free press. Checkology and all of the NLP’s resources and programs are free. Since 2016, more than 42,000 educators and 375,000 students in all 50 states, the District of Columbia and more than 120 countries have registered to use the platform.

Here’s material from the May 22 issue of the Sift, the last of the 2022-23 school year:

Dig Deeper: This guide offers a full list of classroom-ready resources compiled chronologically from the Sift during the 2022-23 school year. It includes featured texts, objectives and news literacy topics. It also aligns Dig Deeper resources to related Checkology lessons. Think of this as your Dig Deeper roundup!

Year in review: AI and news literacy

There was an abundance of teachable moments in the news this school year. While social media, press freedom and misinformation are recurring topics in the Sift, this year also saw a rapid advancement in artificial intelligence — a topic that dominated headlines and sparked intense public interest following the release of ChatGPT.

A popular text-generating AI tool, ChatGPT reached more than 100 million users within two months of its launch on Nov. 30 and raised important news literacy questions for educators, students, journalists and the broader public.

Many of the most-clicked story links in the Sift this school year were about the rise of AI, such as the first syndicated AI-generated political cartoon and the first AI-generated presidential election ad.

Generative AI tools have also sparked public debate over how the technology should or shouldn’t be used in school (the subject of this year’s most-clicked story), as well as over how it affects journalism and misinformation. As we conclude the school year, here are three key news literacy takeaways about AI:

1. AI has the potential to accelerate misinformation. Generative AI chatbots produce impressively fluent, nuanced text responses within seconds, but they’re also prone to error and have been shown to fabricate nonexistent articles from legitimate news outlets. Other AI tools can generate synthetic images, voices and video. Experts worry that bad actors could use these tools to create disinformation and spread it at an alarming scale.

2. AI tools can combat misinformation. It’s not all doom and gloom. Although AI can be used to create disinformation, it can also potentially help combat it by automating fact-checking.

3. AI will have a lasting effect on journalism. While some reputable news organizations have been using AI software for years (to parse financial reports and sports scores, for example), the leap in sophistication of publicly accessible AI tools will probably affect journalism practices and processes in ways yet to be fully realized.

Newsrooms are already grappling with how to use AI while keeping their audience informed about these decisions. In January, it was revealed that CNET had been quietly publishing AI-generated stories without disclosing this practice to its readers. The stories contained several inaccuracies. Meanwhile, Wired became one of the first newsrooms to develop a generative AI policy to be transparent with its readers.

The emergence of ChatGPT and other AI tools shows that news literacy education is more important than ever. We’ll be following this technology as it evolves and will be back in the fall with more resources and news literacy insights.

“Catching bad content in the age of AI” (Tate Ryan-Mosley, MIT Technology Review).

“Woodward and Bernstein: Watergate reporters warn of the limitations of AI” (Emma Petrie, BBC News).

“News literacy in the age of AI” (NLP’s AI page).

“Chatbots are supercharging search: Are we ready?” and “Will chatbots change how journalism is practiced?” (NLP’s Is that a fact? podcast).

Love RumorGuard? Receive timely updates by signing up for RG alerts here.

You can find this week’s rumor example to use with students in these slides.

George Soros death hoax latest entry in conspiratorial rabbit hole

NO: Liberal billionaire philanthropist George Soros did not die of a heart attack; a May 15 post from his Twitter account confirmed the claim was false.

YES: This rumor began spreading with a baseless claim from an ordinary social media account that was then picked up by disreputable publications.

NO: No credible standards-based news outlet reported this claim.

NewsLit takeaway: Death hoaxes are frequently shared online as a form of engagement bait, but they often also serve as entry points to conspiratorial ideas and beliefs.

In the case of Soros, a death hoax is only the latest in a long line of falsehoods aimed at him, beginning in the 1990s. The outlandish rumors and fabrications typically stem from Soros’s donations to liberal causes, are often rooted in conspiracies about global elites, and regularly include antisemitic tropes. If Soros had died, the event would have made headlines at credible, standards-based news organizations.

When salacious rumors spread on social media, it’s always a good idea to be patient and wait for a credible source to confirm or debunk a claim.

AI image generators stirrup trouble for horse history

NO: This is not a genuine photograph of the world’s largest horse.

YES: This was created with the AI image generator Midjourney in April.

YES: According to Guinness World Records, the tallest horse on record, named Sampson, stood 7 feet, 2.5 inches tall and lived in the 1850s.

NewsLit takeaway: AI image generators have already been used to create fabricated photos related to current events, and now the technologies are creating fake historical photos, such as this AI image of a giant horse.

While social media users may be able to spot these photo fakeries with close examination of the image (AI still has a difficult time rendering fingers), viewers should not discount the tried-and-true method of considering the media’s source. Was the image shared by a trusted source or by an account seeking engagement? Does a reverse image search reveal fact-check articles or any additional context? In this case, the image can be traced back to a subreddit dedicated to images created with Midjourney.

• Montana became the first state to ban TikTok when the measure was signed into law on May 17. The governor said the law is to protect residents’ private data, but TikTok users who are suing the state over the ban say it violates their First Amendment rights.

• Local reporting from two news outlets was key in debunking a sham viral story that falsely claimed that migrants had displaced homeless veterans at a New York hotel.

• A journalist’s background, expertise in a subject and information about their newsgathering process are the kinds of details that “enhanced bylines” will convey in certain New York Times online stories after the paper’s Trust team “found that readers trust journalism more when they know the process of how it was produced.”

BBC News is also looking to build trust with audiences by giving a more behind-the-scenes look at its journalism through BBC Verify, a new team of about 60 journalists who will cover disinformation; they also will showcase how the news organization verifies the information that it shares.

• Is summer vacation canceled? Nope! That’s just a rumor students debunk while playing a video game created by the Canadian Broadcasting Corporation’s Kids News, which aims to teach essential critical thinking and news literacy skills along the way.

• A Missouri high school student who filmed her geometry teacher using a racist slur was suspended for three days, according to her lawyer, raising concerns about whether the punishment conflicts with the student’s First Amendment rights.

• In the ongoing public debate about objectivity, New York Times Publisher A.G. Sulzberger examines the value of journalism in this essay.

Here are some other installments:

Banning clothes with political slogans at school, and other news literacy lessons

Why some marginalized people distrust the news

Cherry-picking climate data, limiting social media and more news literacy lessons
