In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm.
For a few hours today, Uber users could view their passenger rating thanks to a how-to posted by Aaron Landy. Uber gives both passengers and drivers ratings, probably by averaging the post-ride ratings each receives, and those scores affect whether riders get picked up and whether drivers keep their jobs.
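Uber doesn't document how these scores are computed; the sketch below simply assumes a plain arithmetic mean of post-ride ratings (the names and numbers are purely illustrative). It also shows how a single erroneous rating, say from the buggy-notification scenario discussed below, can noticeably drag down a passenger's score when only a handful of rides have been rated.

```python
# Hypothetical sketch: assumes the passenger score is a plain arithmetic
# mean of 1-5 star post-ride ratings. Uber has not published its actual method.

def average_rating(ratings: list[int]) -> float:
    """Return the mean of the post-ride ratings received so far."""
    return sum(ratings) / len(ratings)

# A passenger with only a handful of rated rides...
ratings = [5, 5, 4, 5]
print(average_rating(ratings))  # 4.75

# ...collects one spurious 1-star rating (e.g. a driver annoyed by a
# missed pickup caused by a buggy notification) and the score drops sharply.
ratings.append(1)
print(average_rating(ratings))  # 4.0
```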
Passenger ratings like these raise two kinds of concerns: first, that opaque and inaccessible metrics don't allow for recourse or even explanation; and second, that driver ratings aren't very consistent or reliable raw material for those metrics.
You hear stories of people who missed a pickup because of a buggy notification, for example, and who suddenly find they can't catch a cab at all. Any kind of technical error can skew the ratings, but because the ratings are invisible they are also treated as infallible.
Yesterday, YouTube proudly announced on its blog that it had improved its “Content ID” system, which allows rights holders to automatically detect uploaded content that contains potentially infringing works, by introducing a new appeals process:
Users have always had the ability to dispute Content ID claims on their videos if they believe those claims are invalid. Prior to today, if a content owner rejected that dispute, the user was left with no recourse for certain types of Content ID claims (e.g., monetize claims). Based upon feedback from our community, today we’re introducing an appeals process that gives eligible users a new choice when dealing with a rejected dispute. When the user files an appeal, a content owner has two options: release the claim or file a formal DMCA notification.
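The quoted process is essentially a small state machine: claim, dispute, rejection, and now an appeal that forces the content owner to either release the claim or escalate to a formal DMCA notification. The sketch below is purely illustrative; none of the class or method names come from YouTube, and it only encodes the sequence of steps described in the announcement.

```python
# Illustrative model of the dispute/appeal flow quoted above.
# All names here are hypothetical; YouTube's actual implementation is not public.

from enum import Enum, auto

class ClaimState(Enum):
    CLAIMED = auto()           # Content ID matched a reference file
    DISPUTED = auto()          # the uploader disputed the claim
    DISPUTE_REJECTED = auto()  # the content owner rejected the dispute
    APPEALED = auto()          # new step: the uploader appeals the rejection
    RELEASED = auto()          # the content owner releases the claim
    DMCA_FILED = auto()        # the content owner files a formal DMCA notification

class Claim:
    def __init__(self) -> None:
        self.state = ClaimState.CLAIMED

    def dispute(self) -> None:
        assert self.state is ClaimState.CLAIMED
        self.state = ClaimState.DISPUTED

    def reject_dispute(self) -> None:
        assert self.state is ClaimState.DISPUTED
        self.state = ClaimState.DISPUTE_REJECTED

    def appeal(self) -> None:
        # Before the change, DISPUTE_REJECTED was a dead end for some claim types.
        assert self.state is ClaimState.DISPUTE_REJECTED
        self.state = ClaimState.APPEALED

    def resolve_appeal(self, file_dmca: bool) -> None:
        # On appeal the content owner must either release or file a DMCA notice.
        assert self.state is ClaimState.APPEALED
        self.state = ClaimState.DMCA_FILED if file_dmca else ClaimState.RELEASED

# Walking through the new flow:
claim = Claim()
claim.dispute()
claim.reject_dispute()
claim.appeal()                          # previously no recourse at this point
claim.resolve_appeal(file_dmca=False)
print(claim.state)                      # ClaimState.RELEASED
```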
In addition, YouTube claims to have made its algorithms “smarter” to reduce the number of unintentional Content ID claims:
Content owners have uploaded more than ten million reference files to the Content ID system. At that scale, mistakes can and do happen. To address this, we’ve improved the algorithms that identify potentially invalid claims.
Recently, Google announced an extension to its "Transparency Report", which now also includes a section on requests to remove search results that link to material that allegedly infringes copyrights. Last month, Google processed 1,294,762 copyright removal requests from 1,109 reporting organizations, representing 1,325 copyright owners. The figure below illustrates how the number of requests increased between July 2011 and mid-May 2012.
The growing number of removal requests points to the relevance of search technology as a means of copyright enforcement. Since, for many Internet users, what Google does not find might as well not exist, removing entries from Google's result lists is obviously a powerful tool for private copyright enforcement. However, such private enforcement practices come with several downsides: