
In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm. 

For a few hours today, Uber users could view their passenger rating thanks to a how-to posted by Aaron Landy. Uber gives both passengers and drivers ratings, probably by averaging the post-ride ratings each gets, and they affect whether riders can get picked up and whether drivers keep their jobs.
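If the rating really is a simple mean of post-ride scores, as the post assumes, the computation is trivial; the following sketch is purely illustrative, since Uber's actual formula is not public:

```python
# Toy sketch of a passenger rating as a running average of post-ride scores.
# The aggregation method is an assumption; Uber's actual formula is not public.

def average_rating(scores):
    """Return the mean of 1-5 star post-ride scores, or None if unrated."""
    if not scores:
        return None
    return sum(scores) / len(scores)

rides = [5, 5, 4, 1, 5]  # one 1-star from a buggy-notification no-show
print(average_rating(rides))  # 4.0
```

Note how a single erroneous 1-star entry drags the mean from 4.75 down to 4.0; that sensitivity to one bad data point is exactly the skew problem discussed below.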

Passenger ratings like these raise two kinds of concerns: first, that opaque and inaccessible metrics don’t allow for recourse or even explanation; and second, that driver ratings aren’t very consistent or reliable raw material for those metrics.

You hear stories from people who missed a pickup because of buggy notifications, for example, and those people all of a sudden just can’t catch a cab. Any kind of technical error can skew the ratings, but because they’re invisible they’re also treated as infallible.

Read the rest of this entry »


Yesterday, YouTube proudly announced on its blog that it had improved its “Content ID” system, which allows rights holders to automatically detect uploaded content that contains potentially infringing works, by introducing a new appeals process:

Users have always had the ability to dispute Content ID claims on their videos if they believe those claims are invalid. Prior to today, if a content owner rejected that dispute, the user was left with no recourse for certain types of Content ID claims (e.g., monetize claims). Based upon feedback from our community, today we’re introducing an appeals process that gives eligible users a new choice when dealing with a rejected dispute. When the user files an appeal, a content owner has two options: release the claim or file a formal DMCA notification.
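The dispute-and-appeal flow described in the quote can be read as a small state machine. The sketch below is hypothetical; the state and action names are illustrative, not YouTube's actual API:

```python
# Hypothetical sketch of the Content ID dispute/appeal flow described above.
# State names and transitions are illustrative, not YouTube's actual system.

TRANSITIONS = {
    ("claimed", "dispute"): "disputed",
    ("disputed", "owner_releases"): "released",
    ("disputed", "owner_rejects"): "rejected",
    ("rejected", "appeal"): "appealed",          # the new step introduced in the post
    ("appealed", "owner_releases"): "released",
    ("appealed", "owner_files_dmca"): "dmca_takedown",
}

def step(state, action):
    """Advance the claim state machine; raise on an invalid action."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")

# Before the change, a rejected dispute was a dead end; now it continues:
s = "claimed"
for action in ("dispute", "owner_rejects", "appeal", "owner_files_dmca"):
    s = step(s, action)
print(s)  # dmca_takedown
```

The key change the post announces is the `("rejected", "appeal")` edge: previously, the "rejected" state had no outgoing transitions for certain claim types.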

In addition, YouTube claims to have made its algorithms “smarter” to reduce the number of unintentional Content ID claims:

Content owners have uploaded more than ten million reference files to the Content ID system. At that scale, mistakes can and do happen. To address this, we’ve improved the algorithms that identify potentially invalid claims.
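Content ID works by matching uploads against an index of reference fingerprints. The toy version below assumes exact hashing of fixed-size segments, which is only an illustration of the lookup structure; real systems use perceptual fingerprints that survive re-encoding and editing:

```python
import hashlib

# Toy fingerprint index: hash fixed-size segments of each reference file.
# Exact hashing is an illustrative assumption; production fingerprinting
# is perceptual and far more tolerant of transformations.

SEGMENT = 4  # bytes per segment in this toy; real segments span seconds of media

def fingerprints(data):
    """Return the set of segment hashes for a byte string."""
    return {hashlib.sha256(data[i:i + SEGMENT]).hexdigest()
            for i in range(0, len(data) - SEGMENT + 1, SEGMENT)}

def build_index(references):
    """Map each segment hash to the reference files containing it."""
    index = {}
    for name, data in references.items():
        for fp in fingerprints(data):
            index.setdefault(fp, set()).add(name)
    return index

def match(upload, index):
    """Return reference files sharing at least one segment with the upload."""
    hits = set()
    for fp in fingerprints(upload):
        hits |= index.get(fp, set())
    return hits

index = build_index({"song_a": b"abcdefgh", "song_b": b"ijklmnop"})
print(match(b"abcdxxxx", index))  # {'song_a'}
```

At the scale the quote mentions (over ten million reference files), even a tiny false-positive rate in the matching step produces many invalid claims, which is what the improved algorithms are meant to reduce.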

Read the rest of this entry »


Google’s recent move to advertise its practice of removing search results that link to material that allegedly infringes copyrights (see “New Layer of Copyright Enforcement: Search“) demonstrates the importance of a web service’s back-end for issues such as free speech or (actual) enforcement levels in certain fields of regulation such as copyright. In his contribution to the “Social Media Reader” (2012, edited by Michael Mandiberg), Felix Stalder puts this insight into a broader context when reflecting on “the front and the back of the social web“. He criticizes the “overly utopian” picture of the new digital possibilities drawn by scholars such as Clay Shirky, author of “Here Comes Everybody“, which he attributes to “focusing primarily on the front-end” of web technologies:

The social web enables astonishingly effective, yet very lightly organized cooperative efforts on scales previously unimaginable. However, this is only half of the story, which plays out on the front end. We cannot understand it if we do not take the other half into account, which plays out on the back-end. New institutional arrangements make these ad-hoc efforts possible in the first place. There is a shift in the location of the organizational intelligence, away from the individual organization towards the provider of the infrastructure. It is precisely because so much organizational capacity resides now in the infrastructure that individual projects do not need to (re)produce it and thus appear to be lightly organized. If we take the creation of voluntary communities and the provision of new infrastructures as the twin dimensions of the social web, we can see that the phenomenon as a whole is characterized by two contradictory dynamics. One is decentralized, ad-hoc, cheap, easy-to-use, community-oriented, and transparent. The other is centralized, based on long-term planning, very expensive, difficult-to-run, corporate, and opaque. If the personal blog symbolizes one side, the data-center represents the other.

Read the rest of this entry »

Recently Google announced an extension to its “Transparency Report“, which now also includes a section on requests to remove search results that link to material that allegedly infringes copyrights. Last month, Google processed 1,294,762 copyright removal requests by 1,109 reporting organizations, representing 1,325 copyright owners. The figure below illustrates how the number of requests increased from July 2011 to mid-May 2012.

The growing number of removal requests points to the relevance of search technology as a means of copyright enforcement. Since content that Google does not find is, for many Internet users, effectively non-existent, removing entries from Google’s results lists is obviously a powerful tool for private copyright enforcement. However, such private enforcement practices come with several downsides:

Read the rest of this entry »

The Book

Governance across borders: transnational fields and transversal themes. Leonhard Dobusch, Philip Mader and Sigrid Quack (eds.), 2013, epubli publishers.

Copyright Information

All texts on governance across borders are licensed under a Creative Commons Attribution-Share Alike 3.0 Germany License.