
In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm. 

In the last entry of this series, I described how YouTube’s Content ID system effectively re-introduces registration requirements into copyright, even though international treaties such as the Berne Convention forbid such requirements. With its most recent additions to YouTube’s rights management infrastructure, Google takes YouTube’s rights clearing services to a whole new level.

Previously, creators using copyrighted material such as contemporary pop music in one of their videos could only upload their videos and hope for the best (i.e. no recognition by the Content ID algorithm) or the second best (i.e. recognition by the Content ID algorithm, but acceptance/monetization by the rights holders). Either way, only after making and uploading a video could creators know for certain whether and how YouTube’s algorithms would react.

In a recent blog post, YouTube has announced substantial changes to this system:

But until now there was no way to know what would happen if you used a specific track until after you hit upload. Starting today, you can search the YouTube Audio Library to determine how using a particular track in your video will affect it on YouTube, specifically if it will stay live on YouTube or if any restrictions apply. You can uncross those uploading fingers now!
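YouTube has not published how this lookup works internally; purely as an illustration of the pre-upload check described in the quote, one could imagine the Audio Library as a simple policy table. All track names, field names and entries below are hypothetical:

```python
# Hypothetical policy table: in reality this is YouTube's internal
# rights database, queried through the Audio Library search.
TRACK_POLICIES = {
    "Artist - Hit Single": {"stays_live": True, "monetized_by": "label"},
    "Artist - Blocked Track": {"stays_live": False, "monetized_by": None},
}

def check_track(track):
    """Report how using a track would affect a video, before uploading."""
    policy = TRACK_POLICIES.get(track)
    if policy is None:
        return "no Content ID reference on file"
    if not policy["stays_live"]:
        return "video would be blocked"
    return f"video stays live, monetized by {policy['monetized_by']}"
```

The point of the feature is precisely this shift: the policy outcome becomes queryable before upload rather than observable only afterwards.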



The most important international copyright treaty, the Berne Convention for the Protection of Literary and Artistic Works, is quite clear about registration requirements for copyright protection in its Article 5(2):

“The enjoyment and the exercise of these rights shall not be subject to any formality”


The copyright symbol in Arial

In other words, for the 168 countries covered by the Berne Convention, registration provisions are not an option.* In the digital era, this ban is unfortunate for a number of reasons.


For a few hours today, Uber users could view their passenger rating thanks to a how-to posted by Aaron Landy. Uber gives both passengers and drivers ratings, probably by averaging the post-ride ratings each gets, and they affect whether riders can get picked up and whether drivers keep their jobs.

Passenger ratings like these raise two kinds of concerns: first, that opaque and inaccessible metrics do not allow for recourse or even explanation; and second, that driver ratings are not very consistent or reliable raw material for those metrics.

You hear stories of people who missed a pickup because of buggy notifications, for example, and all of a sudden just cannot catch a cab. Any kind of technical error can skew the ratings, but because the ratings are invisible, they are also treated as infallible.
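Uber has not published how the score is computed; assuming the simple averaging suggested above, a minimal sketch might look like the following. The cutoff values are entirely hypothetical, as are the function names:

```python
def average_rating(ratings):
    """Average of post-ride star ratings (1-5)."""
    return sum(ratings) / len(ratings)

# Hypothetical cutoffs: Uber has not disclosed its actual thresholds.
DRIVER_DEACTIVATION_THRESHOLD = 4.6
RIDER_PICKUP_THRESHOLD = 4.0

def driver_at_risk(ratings):
    """A driver whose average falls below the cutoff risks deactivation."""
    return average_rating(ratings) < DRIVER_DEACTIVATION_THRESHOLD
```

Even this toy version shows the problem described above: one buggy-notification no-show rated 1 star pulls the average down, and the metric offers no way to flag or explain the outlier.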



Facebook’s EdgeRank algorithm (Source: http://goo.gl/zTrTbe)


In a recent issue of the Proceedings of the National Academy of Sciences of the USA (PNAS), Adam Kramer and colleagues published an article on “Experimental evidence of massive-scale emotional contagion through social networks”, with data derived from the world’s largest social network, Facebook. The researchers were given permission to manipulate the Facebook newsfeed in order to test how the emotional direction of postings, i.e. happier or sadder updates, affects people’s own status updates. The study delivered two main results: First, emotions are “contagious” in that happier postings inspired happier postings and vice versa. Second, fewer emotional posts (in either direction) reduced the posting frequency of Facebook users.

The publication of these results has incited furious debates about research ethics, with critics mainly arguing that Facebook should have asked users to (more) explicitly consent to taking part in such an experiment. Susan Fiske, the Princeton University psychology professor who edited the study for publication, is quoted as follows in an Atlantic article subtitled “It was probably legal. But was it ethical?”:

“I was concerned,” Fiske told The Atlantic, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”

Over at orgtheory, Elizabeth Popp Berman agrees that “the whole idea is creepy” but also argues that …


With a market share of over 90 percent in Europe, the Google search engine and its search algorithm effectively decide what is relevant on an issue and what is not. Any information that is not placed on the first few pages of Google’s search results will hardly ever be found. Conversely, personal information that is listed prominently in these results may haunt you forever. The latter issue was recently brought before the European Court of Justice (ECJ), which ruled (C-131/12) that

the activity of a search engine consisting in finding information published or placed on the internet by third parties, indexing it automatically, storing it temporarily and, finally, making it available to internet users according to a particular order of preference must be classified as ‘processing of personal data’

and that, under certain conditions relating to the data subject’s right to privacy that are not spelled out very clearly,

the operator of a search engine is obliged to remove from the list of results displayed following a search made on the basis of a person’s name links to web pages, published by third parties and containing information relating to that person.

By crafting such a “right to be forgotten”, the ECJ effectively regulates Google’s search algorithms. In other words, we can observe the ECJ regulating Google’s algorithmic regulation. In response to the ruling, Google has already set up an online form for deletion requests, stating that …
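How Google implements delisting internally is not public; as a toy illustration of the mechanism the ruling demands, note that links are suppressed only for searches on the data subject’s name, not removed from the index altogether. The names, URLs and function names below are invented for illustration:

```python
# Hypothetical granted delisting requests: name -> suppressed URLs.
DELIST_REQUESTS = {
    "jane doe": {"http://example.org/old-debt-notice"},
}

def search_results(query, index_hits):
    """Filter raw index hits: suppress delisted links on name queries only."""
    suppressed = DELIST_REQUESTS.get(query.lower(), set())
    return [url for url in index_hits if url not in suppressed]
```

The same page remains reachable via any other query; only the name-based route to it is cut off, which is exactly why the ruling regulates the algorithm’s output rather than the underlying content.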


A common complaint of Google’s competitors in fields such as Internet maps is that Google’s search algorithm favors its own services over those of competitors in its search results. For instance, the FairSearch coalition, led by Microsoft, Oracle and others, calls for more transparency in displaying search results and harshly criticizes Google:

Based on growing evidence that Google is abusing its search monopoly to thwart competition, we believe policymakers must act now to protect competition, transparency and innovation in online search.

Given Google’s market dominance in Europe, with a share of over 90 percent in core markets such as Germany, these allegedly discriminatory practices have led to an antitrust investigation by the European Commission (EC). However, providing reproducible evidence for such discriminatory search results is difficult. Google is not only constantly changing its search algorithm (see “Algorithm Regulation #4: Algorithm as a Practice“) but also increasingly personalizing search results; both of these characteristics of contemporary search algorithms make it difficult to compare search results over time.



Google logo (Raúl Ochoa, CC-BY-NC-ND)

Today I stumbled via Twitter upon the website “Google Algorithm Change History”, which chronologically documents all publicly announced changes to Google’s core search algorithm. The most striking feature of the site is the sheer number of changes:

Each year, Google changes its search algorithm up to 500–600 times. While most of these changes are minor, every few months Google rolls out a “major” algorithmic update that affects search results in significant ways.

In other words, it no longer makes sense to speak of “the Google algorithm”: there is not one algorithm but a set of algorithm-related practices. In line with the practice turn in contemporary social theory (see Schatzki et al. 2001) and similar to perspectives such as strategy-as-practice, we might require a practice perspective on algorithms to better understand how algorithm regulation works.

Looking at the frequent, not to say constant, changes in Google’s search algorithm, it also becomes obvious how misleading the regular comparisons with the Coca-Cola formula are, such as the following from a Wall Street Journal blog:

Google is very cagey about its search algorithm, which is as key to its success as Coke’s formula is to Coca-Cola.

Google’s search algorithm is not a static formula, and it should therefore not be treated as a trade secret either. Indeed, if the search algorithm were a mere formula, we would see much more competition in search. Google practices algorithmic search, and it is these continuous changes, which mostly rest on access to unimaginably big data sets of search and usage practices, that are difficult for competitors to imitate.

With regard to the issue of algorithm regulation, a practice perspective sensitizes us to phenomena such as regulatory drift. In a paper on transnational copyright regulation, Sigrid Quack and I describe regulatory drift as “changes in meaning and interpretation, which result from continuous (re-)application of certain legal rules” (see also Ortmann 2010). In the context of algorithms, the term might refer to the sum of continuous revisions and (seemingly) minor adaptation practices, which in the end lead to substantial and partly unintended changes in regulatory outcomes.

(leonhard)


Yesterday, YouTube proudly announced on its blog that it had improved its “Content ID” system, which allows rights holders to automatically detect uploaded content that contains potentially infringing works, by introducing a new appeals process:

Users have always had the ability to dispute Content ID claims on their videos if they believe those claims are invalid. Prior to today, if a content owner rejected that dispute, the user was left with no recourse for certain types of Content ID claims (e.g., monetize claims). Based upon feedback from our community, today we’re introducing an appeals process that gives eligible users a new choice when dealing with a rejected dispute. When the user files an appeal, a content owner has two options: release the claim or file a formal DMCA notification.

In addition, YouTube claims to have made its algorithms “smarter” to reduce the number of unintentional Content ID claims:

Content owners have uploaded more than ten million reference files to the Content ID system. At that scale, mistakes can and do happen. To address this, we’ve improved the algorithms that identify potentially invalid claims.
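The details of Content ID’s audio and video fingerprinting are proprietary; purely as an illustration of the matching logic described above, here is a toy sketch in which a “fingerprint” is just a set of hashed short chunks (the chunking scheme, threshold and function names are my own assumptions, not Google’s technique):

```python
def fingerprint(samples, chunk=4):
    """Toy fingerprint: a set of hashes over fixed-size sample chunks."""
    return {hash(tuple(samples[i:i + chunk]))
            for i in range(0, len(samples) - chunk + 1, chunk)}

def match_score(upload, reference):
    """Fraction of the reference's chunks that appear in the upload."""
    up, ref = fingerprint(upload), fingerprint(reference)
    return len(up & ref) / len(ref) if ref else 0.0

def claim(upload, references, threshold=0.5):
    """Return IDs of reference works matching above the threshold."""
    return [rid for rid, ref in references.items()
            if match_score(upload, ref) >= threshold]
```

At the scale of ten million reference files, the threshold choice is where “mistakes can and do happen”: set it low and invalid claims multiply, set it high and infringements slip through, which is presumably what the improved algorithms try to balance.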



Earlier this year, Google revealed that it routinely removes search results that link to material allegedly infringing copyrights, following removal requests of copyright holders (see “New Layer of Copyright Enforcement: Search“). Since this announcement, the number of removed search results per month has quadrupled (see Figure below).

Yesterday, Google announced that in addition to removing search results it is going to also adapt its ranking algorithm:

Starting next week, we will begin taking into account a new signal in our rankings: the number of valid copyright removal notices we receive for any given site. Sites with high numbers of removal notices may appear lower in our results.

As discussed in the first entry of this series on algorithm regulation, the technological layer of regulation is becoming increasingly important for copyright enforcement. But Google’s move to tinker with its most precious asset, the search algorithm, also shows that technological regulation of this kind may directly result from stakeholder negotiations.
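Google has not disclosed how the new signal is weighted against the hundreds of others; the following is only a hypothetical sketch of how a removal-notice count could demote a site in a ranking. The log-scaled penalty and all names are my own assumptions:

```python
import math

# Hypothetical demotion signal: removal notices shrink a site's base
# relevance score. Google's actual weighting is not public.
def demoted_score(base_score, removal_notices, weight=0.1):
    """Penalize a site's score by the log-scaled notice count."""
    return base_score / (1.0 + weight * math.log1p(removal_notices))

def rank(sites):
    """sites: {name: (base_score, removal_notices)} -> names, best first."""
    return sorted(sites, key=lambda s: demoted_score(*sites[s]), reverse=True)
```

The log scaling reflects the announcement’s wording that sites with “high numbers” of notices “may appear lower”: a handful of notices barely matters, while thousands push a site down, without ever removing it from the results entirely.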



Google’s recent move to advertise its practice of removing search results that link to material that allegedly infringes copyrights (see “New Layer of Copyright Enforcement: Search“) demonstrates the importance of a web service’s back-end for issues such as free speech or (actual) enforcement levels in certain fields of regulation such as copyright. In his contribution to the “Social Media Reader” (2012, edited by Michael Mandiberg), Felix Stalder puts this insight into a broader context when reflecting on “the front and the back of the social web“. He criticizes the “overly utopian” picture of the new digital possibilities drawn by scholars such as Clay Shirky, author of “Here Comes Everybody“, which he attributes to their “focusing primarily on the front-end” of web technologies:

The social web enables astonishingly effective, yet very lightly organized cooperative efforts on scales previously unimaginable. However, this is only half of the story, which plays out on the front end. We cannot understand it if we do not take the other half into account, which plays out on the back-end. New institutional arrangements make these ad-hoc efforts possible in the first place. There is a shift in the location of the organizational intelligence, away from the individual organization towards the provider of the infrastructure. It is precisely because so much organizational capacity resides now in the infrastructure that individual projects do not need to (re)produce it and thus appear to be lightly organized. If we take the creation of voluntary communities and the provision of new infrastructures as the twin dimensions of the social web, we can see that the phenomenon as a whole is characterized by two contradictory dynamics. One is decentralized, ad-hoc, cheap, easy-to-use, community-oriented, and transparent. The other is centralized, based on long-term planning, very expensive, difficult-to-run, corporate, and opaque. If the personal blog symbolizes one side, the data-center represents the other.


The Book

Governance across borders: transnational fields and transversal themes. Leonhard Dobusch, Philip Mader and Sigrid Quack (eds.), 2013, epubli publishers.


Copyright Information

All texts on governance across borders are licensed under a Creative Commons Attribution-Share Alike 3.0 Germany License.