How do we teach students to recognise when news is fake?
[This is an article from The Chronicle of Higher Education, America’s leading higher education publication. It is presented here under an agreement with University World News.]
Even historians and Stanford University undergraduates struggle to tell credible websites from misleading ones. That’s one finding from a memorable study, released as a working paper in 2017, that documented how three groups of ‘experts’ evaluated online sources.
Members of both groups tended to dig into the site in front of them, according to the study’s authors, Sam Wineburg, a professor at Stanford’s Graduate School of Education, and Sarah McGrew, his doctoral student. The students and historians followed many of the tips that students are usually given for conducting online research, like examining a site’s domain name. But those tactics didn’t work.
The study is part of an emerging body of research that grapples with an urgent question: How can we stop people from believing the lies that proliferate online? Typical digital-literacy efforts, this research suggests, are insufficient. The Stanford study offers an alternative: imitating its third group of experts, professional fact checkers. Rather than ticking off a checklist, in other words, students need to get into a critical frame of mind. Early evidence suggests it’s a promising approach.
Where the scholars and students stumbled, the fact checkers succeeded. They took a very different approach, leaving the site in question to find out what the rest of the internet had to say. The researchers call this “lateral reading,” because, instead of scrolling up and down a page, the fact checkers opened up a bunch of tabs. This, the researchers wrote at the end of the paper, “is what we should be teaching students to do as well.”
So the authors, along with colleagues from the Stanford History Education Group, designed and tested a lateral-reading curriculum. The findings from their recently published field study are “optimistic,” Wineburg said. The study shows that a small intervention, just two 75-minute lessons, made a modest but statistically significant difference in how students performed on a test of their online reasoning, even a month later, compared with students in a control group.
Using the internet successfully isn’t merely about choosing from a set of vetted sources. It’s about avoiding some truly inaccurate sources, including slanted ones that are designed to look legitimate. That’s not a skill students have traditionally been taught. But the research suggests it’s one they can learn.
The ‘CRAAP Test’
Most efforts to help students make sense of the internet, however, still have a foot in the print-centric, curated world of the traditional library. One well-known example is the ‘CRAAP Test’, which instructs students to consider a source’s Currency, Relevance, Authority, Accuracy, and Purpose. Some instructors, librarians, and other experts had long worried that such methods were no match for the internet.
Mike Caulfield is among those long-time worriers. Caulfield, now director of blended and networked learning at Washington State University Vancouver, had worked for years to develop what he calls “digital literacy for civic engagement,” teaching students how to navigate a public square that has migrated to the internet.
But getting people to take the challenge seriously was like “rolling a rock up a big hill,” he said – until the run-up to the 2016 presidential election. All of a sudden, “the case that we had been trying to make for all that time was really, unfortunately being made by current events,” Caulfield said. “It was no longer a complex case to argue.”
Just weeks after the election, the Stanford group released a study showing that students in middle school, high school, and college were “easily duped” online. Caulfield mentioned it in a blog post he published the next month arguing that digital literacy was in need of a paradigm shift.
When people see the spread of misinformation online, Caulfield said in an interview, their response is often: “We need more digital literacy.” What they fail to recognise is “that people are getting the wrong digital literacy, that the digital literacy that people are getting is actually making things, in some cases, worse.”
In his blog post, Caulfield cited another writer’s illustration of how a hoax website about an endangered “tree octopus” would pass a conventional web-literacy checklist. One thing those checklists miss, he argues, is the need for students to apply existing knowledge. The big clue that the website is a hoax isn’t anything about the site per se, he wrote, but “the improbability of a cephalopod making the leap to being an amphibious creature without significant evolutionary changes.” Octopuses, in other words, don’t climb trees.
Wineburg saw the blog post and got in touch. He and Caulfield now see each other as kindred spirits. “We kind of came to it from different directions but were really seeing the same thing,” Caulfield said. “And really working on very similar solutions.”
Caulfield now runs a project called the Digital Polarization Initiative at the American Association of State Colleges and Universities. The project is testing a digital-literacy curriculum, broadly similar to the Stanford group’s approach, on nine pilot campuses. The findings have not yet been published, but Caulfield describes them as positive.
Caulfield has also written a free textbook, Web Literacy for Student Fact Checkers, that lays out what he calls “the four moves”: check for previous work, go upstream of the source, read laterally, and circle back.
The notion of reading laterally, the textbook notes, comes from Wineburg’s work. In the “experts” study, Wineburg and McGrew set out to determine how skilled users judged information online. The research team watched participants complete a series of online tasks, such as evaluating two articles on bullying – one on the site of the American Academy of Pediatrics, the profession’s major organisation, and the other on the site of the American College of Pediatricians (ACP), a conservative splinter group that has been labelled a hate group by the Southern Poverty Law Center for its positions on LGBTQ rights.
Participants in the study were asked to search as they normally would, and to explain their thinking as they did. The American College of Pediatricians has an official-sounding name and an official-looking website. Forty percent of the historians equivocated when asked which of the two sites was the more reliable, while 10 percent said the splinter group’s site was the better source. Nearly two-thirds of the students identified the group’s site as better.
But none of the fact checkers fell for it. Rather than burrowing into the ACP’s website, they left it and opened new tabs to seek information they trusted. This behaviour, the researchers observed, is fundamentally at odds with methods like CRAAP, which depend on a close examination of the source at hand. Here, they suggested, was a method suited for the Wild West of the internet.
Thinking like a fact checker
During and after the presidential election, colleges’ digital-literacy efforts had to contend with the problem of fake news. At San Jose State University, librarians and faculty members in the English department who run the writing programme had a similar concern, said Ann Agee, an online-learning librarian there, in an email. They wondered whether students were equipped to “evaluate the news they received from online sources like Reddit, Instagram, and Facebook.”
In the course of their research, both Agee and one of the English professors she was working with came across Wineburg’s work, which offered a model for exploring their questions. They asked for permission to use his materials.
Wineburg had a counterproposal: Could his team run its experiment at San Jose State? The researchers wanted to test lateral reading on a campus that was more typical than Stanford’s.
The researchers taught two 75-minute sessions on evaluating the credibility of online information to an experimental group: randomly chosen sections of a San Jose State course in critical thinking and writing. The sessions focused on the skills used by fact checkers, and taught students to investigate three questions: “Who is behind this information?”, “What is the evidence?”, and “What do other sources say?”
Before and after the sessions, the researchers gave these students, as well as those in a control group, tests designed to directly measure their ability to evaluate online sources. One exercise, for example, asked them to determine whether a photo purporting to show flowers with “nuclear birth defects” growing near the wrecked Fukushima nuclear-power plant, in Japan, provided “strong evidence” of conditions there, and to explain their reasoning.
Students in the experimental group performed better on the post-session test, which was given a month after the instruction was delivered. Those results were statistically significant, the paper says, but still leave room for improvement.
To the research team, the new study provides proof of concept for their idea that teaching students to think like fact checkers can succeed where ticking off a checklist has failed. More research is needed, the paper notes, to determine how generalisable the results are, as well as how long they last.
Still, Wineburg is convinced that it’s “educationally negligent” to ask students to conduct online research without teaching them something like this approach. He would like to see colleges cover it in a semester-long course.
Caulfield isn’t sure that’s necessary. He thinks training along those lines could be swapped into any course in which the checklist model is used now, and practised later with relevant assignments.
Even if colleges take up this training, and even if it’s a success, where does that leave everyone else? Plenty of the people fumbling around the internet now won’t attend college, or already did. Some of them are faculty members. Still, Caulfield said, the great thing about educational interventions is their potential for a spill-over effect. Some students who learn these methods, he said, become “evangelists” for them.
Caulfield recalled that when one of his children was in preschool, students were sent home with fresh vegetables that they had helped grow. The idea, he said, was that the pupils weren’t the only ones who’d learn about healthful food — their families benefited, too. Savvy internet use, perhaps, could spread in the same way.
Beckie Supiano writes about teaching, learning, and the human interactions that shape them. Follow her on Twitter @becksup, or drop her a line at email@example.com.