In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm.

Google’s recent move to advertise its practice of removing search results that link to material that allegedly infringes copyright (see “New Layer of Copyright Enforcement: Search”) demonstrates the importance of a web service’s back-end for issues such as free speech or (actual) enforcement levels in certain fields of regulation such as copyright. In his contribution to the “Social Media Reader” (2012, edited by Michael Mandiberg), Felix Stalder puts this insight into a broader context when reflecting on “the front and the back of the social web”. He criticizes the “overly utopian” picture of the new digital possibilities drawn by scholars such as Clay Shirky, author of “Here Comes Everybody”, which he attributes to “focusing primarily on the front-end” of web technologies:

The social web enables astonishingly effective, yet very lightly organized cooperative efforts on scales previously unimaginable. However, this is only half of the story, which plays out on the front end. We cannot understand it if we do not take the other half into account, which plays out on the back-end. New institutional arrangements make these ad-hoc efforts possible in the first place. There is a shift in the location of the organizational intelligence, away from the individual organization towards the provider of the infrastructure. It is precisely because so much organizational capacity resides now in the infrastructure that individual projects do not need to (re)produce it and thus appear to be lightly organized. If we take the creation of voluntary communities and the provision of new infrastructures as the twin dimensions of the social web, we can see that the phenomenon as a whole is characterized by two contradictory dynamics. One is decentralized, ad-hoc, cheap, easy-to-use, community-oriented, and transparent. The other is centralized, based on long-term planning, very expensive, difficult-to-run, corporate, and opaque. If the personal blog symbolizes one side, the data-center represents the other.

In a way, this analysis of the social web even holds for the open web showcase Wikipedia. While the content of the multilingual and free online encyclopedia is provided by a transnationally dispersed community of volunteers, the back-end in the form of the Wikimedia Foundation becomes more and more centralized as the project grows (see “Contours of Future Wikimedia Governance: More Centralized, More Diverse”). As a consequence, I could not agree more with Stalder’s claim that we have to look at the complex interplay between front- and back-end to fully grasp how the brave new digital world works:

All the trappings of conventional organizations with their hierarchies, formal policies, and orientation towards money, which are supposed to be irrelevant on the one side, are dominant on the other. Their interactions are complex, in flux, and hard to detect from the outside.

Technological algorithms are, of course, only one of the back-end dynamics that deserve more attention. But they tend to incorporate what Stalder describes as “a tension at the core of the social web created by the uneasy (mis)match of the commercial interests that rule the back-end, and community interests advanced through the front-end”. In this context, lack of algorithm transparency results from the “structural imbalance between the service providers on the one side, who have strong incentives to carefully craft the infrastructures to serve their ends, and the users on the other side, who will barely notice what is going on, given the opacity of the back-end.”

As a solution, Stalder suggests “[a] mixture of new legislation and granting public access to back-end data”. How this access to back-end data could be implemented, and whether it should include algorithm transparency provisions, are interesting and still unanswered questions.

(leonhard)