In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm. 

Facebook's EdgeRank algorithm (Source: http://goo.gl/zTrTbe)

In a recent issue of the Proceedings of the National Academy of Sciences of the USA (PNAS), Adam Kramer and colleagues published an article on “Experimental evidence of massive-scale emotional contagion through social networks” with data from Facebook, the world’s largest social network. The researchers were given permission to manipulate the Facebook newsfeed in order to test how differences in the emotional tone of postings, i.e. “happier” or “sadder” updates, affect people’s own status updates. The study delivered two main results: First, emotions are “contagious” in that happier postings inspired happier postings and vice versa. Second, fewer emotional posts (in either direction) reduce the posting frequency of Facebook users.

The publication of these results has incited furious debates with regard to research ethics, mainly criticizing that Facebook should have asked users to (more) explicitly consent to taking part in such an experiment. Susan Fiske, the Princeton University psychology professor who edited the study for publication, is quoted in an Atlantic article subtitled “It was probably legal. But was it ethical?” as follows:

“I was concerned,” Fiske told The Atlantic, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”

Over at orgtheory, Elizabeth Popp Berman agrees that “the whole idea is creepy” but also argues that

“Facebook is advertising! Use it, don’t use it, but the entire purpose of advertising is to manipulate your emotional state.”

and

“This is the least of it. I read a great post the other day at Microsoft Research’s Social Media Collective Blog (here) about all the weird and misleading things FB does (and social media algorithms do more generally) to identify what kinds of content to show you and market you to advertisers.”

I tend to agree. Algorithms are neither neutral nor static; rather, we can even speak of algorithms as a practice, as is evidenced by Google changing its search algorithm 500 to 600 times a year. At the heart of this continuous tinkering with algorithms lies experimentation, for instance in the form of A/B testing. And how big is the difference between A/B testing to choose the best shade of blue and experimenting with the frequency of emotional postings? After all, there is a long history of research on how different colors affect moods and, thus, employee performance.
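To make that comparison concrete, here is a minimal, hypothetical sketch of how such A/B testing typically works: each user is deterministically hashed into one of two variants, and a metric of interest (clicks, posting frequency, the mood of postings) is then compared between the groups. The function and experiment names below are illustrative assumptions on my part, not Facebook's or Google's actual code.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant by hashing user and experiment IDs."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Illustrative use: split users between the current link colour and a candidate shade,
# then compare whatever metric the experimenter cares about across the two groups.
for user in (f"user-{i}" for i in range(5)):
    print(user, assign_variant(user, "link-colour-test"))
```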

All this does not mean that large-scale experiments with algorithms could not pose a problem. As Jonathan Zittrain recently argued in an article on “digital gerrymandering”, algorithms might even be used to influence elections. Already in 2010, political scientists conducted an experiment demonstrating how differences in Facebook newsfeeds can significantly influence the number of people who turn out to vote. Zittrain thus argues for large web companies to become “information fiduciaries”:

As things stand, Web companies are simply bound to follow their own privacy policies, however flimsy. Information fiduciaries would have to do more. For example, they might be required to keep automatic audit trails reflecting when the personal data of their users is shared with another company, or is used in a new way. […] They would provide a way for users to toggle search results or newsfeeds to see how that content would appear without the influence of reams of personal data—that is, non-personalized.

In a way, this proposal echoes calls for algorithm transparency; in the light of the most recent mood experiments, such regulation would need not only to cover the basic functions of algorithms but also to require transparency with regard to algorithm experiments, at least above a certain threshold of users involved. In any case, we are again in the midst of a discussion on how to regulate the algorithms that regulate us. And I predict this discussion is just getting started.

(leonhard)