UNITED STATES

Scientists face access backlash over Facebook research

It was a remarkable result. By manipulating the news feeds of thousands of Facebook users, without their knowing consent, researchers working with the goliath of social media found that they could produce a significant, if small, effect on people’s behaviour in the world beyond bits.

[This is an article from The Chronicle of Higher Education, America’s leading higher education publication. It is presented here under an agreement with University World News.]

The year was 2010. The scientists were poking at voting patterns in the United States midterm elections. And when the results came out two years later, in Nature, there was barely a peep about questionable ethics.

As you may have heard, a more recent study, conducted by Facebook and co-designed by researchers at Cornell University, has kicked off a vigorous debate about the influence of Facebook’s algorithms over our lives and, more specifically to academe, whether researchers should be more careful in how they collaborate with the social media giant.

Backlash

The response to the study, which examined how positive or negative language spreads in social networks, has been blistering, and has raised credible criticisms about whether internet users should be informed about experiments that test profound questions about human behaviour.

But the backlash, researchers say, also poses the risk that the corporations that govern so much of our day-to-day experience online will decide there’s less benefit in allowing academic scientists to have access to their internal data.

For corporations, when it comes to a choice between enlightening society and expanding the bottom line, there’s rarely a question of which side will win.

"The main consequence is that academics will be wary of collaborating with Facebook," said Michelle N Meyer, an assistant professor of bioethics at the Icahn School of Medicine at Mount Sinai in New York.

Facebook, she added, "will not have an incentive to collaborate with researchers motivated by publications". The research will still happen, but in private. It’s not going to be published and discussed.

"I’m definitely worried that’s going to be the upshot," added David MJ Lazer, a political scientist at Northeastern University and leader in computational social science.

While the research that has come through Facebook has not fundamentally changed our view of the world, Lazer said, it’s been clever, and many have viewed it as a down payment on more work to come.

‘You are likely to be next’

Briefly stated, over one week in early 2012, Facebook randomly selected a cohort of nearly 700,000 users and divided them into four groups. Like every other person on the service, the users were exposed to a custom suite of posts on their news feeds, determined by Facebook’s algorithms.

But for two of the groups, Facebook tweaked the algorithm, making it less likely that the subjects would see posts automatically classified as containing either positive or negative language. (Such classification is itself an error-prone endeavour.)

The researchers found that users who saw fewer positive posts were less likely to post something positive, and vice versa.

That was surprising: existing research had seemed to indicate that when people on Facebook compare the positive spin that friends post about their lives with the reality of the day-to-day, they can come away disappointed or sad.

The new study showed that similar emotional language could be contagious, but with a vanishingly small effect. Over the next week, as one of the researchers posted (on Facebook), there was one fewer emotional word posted for every thousand words measured.

A data scientist at Facebook, Adam DI Kramer, conducted the research, collaborating on its design and subsequent analysis with a Cornell researcher, Jeffrey T Hancock, and his former postdoc. Since the Cornell duo did not participate in data collection, the university’s institutional review board concluded that the study did not merit oversight from its human-subjects panel.

The team published the study in early June in the Proceedings of the National Academy of Sciences, a premier journal, and made broad claims in it: the tests, they said, had influenced not only the language choices of Facebook users but also their emotions.

That was enough.

Manipulating emotions?

Unlike the 2010 voting study, this one suggested that researchers were manipulating emotions, and that notion, even if it was a questionable conclusion, found a press and a public already wary of Facebook’s influence.

In particular, the idea that Facebook was intentionally making people ‘sad’ allowed easy access to deep fears about scientists and corporations damaging society. The experiment was not about expanding democracy or encouraging organ donation, the subject of another prominent Facebook study.

The study became a catalyst for discussing the role Silicon Valley companies play in modern society.

Already, one prominent bioethicist, New York University’s Arthur L Caplan, has called for curbing the intrusion of internet companies into our lives:

"When entities feel entitled to experiment on human beings without informed consent, without accountability to anyone but themselves, that’s when bad things happen to research subjects," Caplan wrote in a commentary with Charles Seife, a journalism professor at NYU.

"And it’s now clear that if we don’t insist on greater regulatory oversight of their ‘research’, you are likely to be next."

The limits of informed consent

In particular, the reaction may spur an important debate on the limits of informed consent, a bedrock principle in experiments involving human beings.

Such consent is not universal; in political science, for example, it’s common to test various methods of boosting voter turnout, including through social pressure, without consent. But those small-scale trials have never attracted much attention, while any story involving Facebook is likely to be covered on the internet by reporters hungry for clicks.

Still, Meyer said, there is probably a middle ground between relying on Facebook’s obscure data-use policies to serve as informed consent, as the latest study did, and a full-scale campaign anytime the company seeks to conduct an experiment.

Facebook could send those 700,000 people a notice describing potential research in a general way, and allow them to opt in or out. It’d be tricky to do without biasing the research, but so be it. Even so, Meyer added, "I bet a bunch of people would have still been upset."

In the backlash to the study, there’s also been the assumption of a neutral world that, in many ways, no longer exists. We are constantly being sold and manipulated, Meyer said.

This is an age when the characters on cereal boxes make eye contact with children in the supermarket. Internet companies and political campaigns regularly experiment on their users, a process known as A/B testing, and that’s not going to change.

Better instead to use this storm as an opportunity for dialogue, Lazer said. Someone should convene a well-vetted group of ethicists to establish what is acceptable and unacceptable in social media research, as more and more young researchers pour into the field. Such a gathering might keep Facebook at the table, and could serve society as a whole.

So, Facebook: Don’t unfriend academe just yet.