Today the European Parliament passed, with an overwhelming majority (531 voting in favor, 11 against and 65 abstentions), a compromise proposal for a directive on certain permitted uses of orphan works. In Europe, orphan works are a much greater problem than, for example, in the USA, because European copyright has featured automatic protection for much longer. As a consequence, finding rights holders is more difficult than in the USA, where works had to be registered until the end of the 1980s. And due to ever-longer protection terms, the number of orphan works will keep increasing every year, making access to our common cultural heritage increasingly difficult.
The so-called orphan works directive addresses the problem by allowing public-sector institutions such as libraries, museums, archives, educational establishments and film heritage institutions to digitize orphan works and make them publicly available after conducting a “diligent search”. What constitutes a “diligent search” is outlined in more detail in a “Memorandum of Understanding on Diligent Search Guidelines for Orphan Works”.
In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm.
Earlier this year, Google revealed that it routinely removes search results that link to material allegedly infringing copyrights, thereby following removal requests from copyright holders (see “New Layer of Copyright Enforcement: Search“). Since this announcement, the number of removed search results per month has quadrupled (see Figure below).
Yesterday, Google announced that in addition to removing search results it is going to also adapt its ranking algorithm:
Starting next week, we will begin taking into account a new signal in our rankings: the number of valid copyright removal notices we receive for any given site. Sites with high numbers of removal notices may appear lower in our results.
As discussed in the first entry of this series on algorithm regulation, the technological layer of regulation is becoming increasingly important for copyright enforcement. But Google’s move to tinker with its most precious asset, the search algorithm, also shows that technological regulation of this kind may directly result from stakeholder negotiations.
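Google has published no details about how the new signal works, so any concrete formula is guesswork. Still, a toy sketch can make the basic mechanism tangible: a site-level penalty derived from the number of valid removal notices, folded into an otherwise ordinary relevance score. All names, parameters and weights below are invented for illustration; this is not Google’s actual ranking function.

```python
def demoted_score(base_relevance, removal_notices, indexed_urls, weight=0.5):
    """Toy ranking adjustment (purely illustrative, not Google's method):
    penalize a site's relevance score in proportion to the share of its
    indexed URLs that received valid copyright removal notices.

    base_relevance  -- the score the site would get without the signal
    removal_notices -- count of valid copyright removal notices for the site
    indexed_urls    -- number of the site's URLs in the index
    weight          -- how strongly the signal demotes (0 = ignore it)
    """
    # Share of the site's URLs hit by notices, capped at 100%.
    notice_ratio = min(removal_notices / max(indexed_urls, 1), 1.0)
    # Sites with many notices "may appear lower": scale the score down.
    return base_relevance * (1 - weight * notice_ratio)
```

Under this sketch, a site with no notices keeps its full score, while a heavily noticed site is demoted but never removed outright, matching the announcement’s wording that such sites “may appear lower” in results rather than disappear.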
Last week the European Parliament rejected the Anti-Counterfeiting Trade Agreement (ACTA, see also “ACTA as a Case of Strategic Ambiguity“) with 478 voting against the treaty, 39 in favour and 165 MEPs abstaining. Commenting on this outcome, Joe McNamee from the ACTA-critical NGO European Digital Rights (EDRi) stated that “ACTA is not the end. ACTA is the beginning.” In his optimistic account, the rejection of ACTA has substantially changed the debate on intellectual property rights regulation in Europe:
Thanks to SOPA, European citizens better understood the dangers of ACTA. Thanks to the anti-ACTA campaign, it would be politically crazy for the Commission to launch the criminal sanctions Directive. Thanks to ACTA, there is broad understanding in the European Parliament of just how bad IPRED really is and any review now, if the Commission has the courage to re-open it, is more likely to improve the Directive rather than increase its repressive measures.
However, a recent op-ed by Canadian copyright scholar Michael Geist illustrates why ACTA’s contents might not be so dead after all. Referring to leaked documents from negotiations between Canada and the EU Commission on the “Comprehensive Economic and Trade Agreement” (CETA), Geist writes:
According to the leaked document, dated February 2012, Canada and the EU have already agreed to incorporate many of the ACTA enforcement provisions into CETA, including the rules on general obligations on enforcement, preserving evidence, damages, injunctions, and border measure rules. One of these provisions even specifically references ACTA.
Recently Google announced an extension to its “Transparency Report“, which now also includes a section on requests to remove search results that link to material that allegedly infringes copyrights. Last month, Google processed 1,294,762 copyright removal requests by 1,109 reporting organizations, representing 1,325 copyright owners. The figure below illustrates how the number of requests increased between July 2011 and mid-May 2012.
The growing number of removal requests points to the relevance of search technology as a means of copyright enforcement. Since, for many Internet users, what Google does not find might as well not exist, removing search results from Google’s results lists is obviously a powerful tool for private copyright enforcement. However, such private enforcement practices come with several downsides:
In European regulatory discourse as well as copyright research, there is a debate about whether the US Fair Use model is better suited to deal with innovation in general and digital challenges in particular than the European system of exceptions. It makes sense to discuss the state of the art of research on Fair Use in the US and what we can learn from it in Europe.
During a visit to Europe, Pamela Samuelson from UC Berkeley Law School & School of Information gave an interesting talk about “Fair Use in Europe? Lessons from the US and Open Questions”. Her main message can be summarized in two points: First, flexible regulation such as the US Fair Use clause is better suited to rapid technological change than the comparably static system of exceptions and limitations in European copyright. To illustrate this point, Samuelson mentioned several innovations such as scholarly data-mining in Google Book Search (Ngram Viewer)* or Brewster Kahle’s “Wayback Machine” that would have been much more difficult to realize without the Fair Use exemption.
Second, Samuelson explicitly did not recommend getting rid of specific exceptions altogether; rather, keeping limitations and exceptions that provide legal certainty would be desirable even when introducing some form of fair-use-like clause into the European copyright system.
Let’s talk about porn. According to Wikipedia, “[d]epictions of a sexual nature are as old as civilization”. And of course, paraphrasing Walter Benjamin’s famous essay, works of porn have changed in the age of mechanical reproduction. New means of (re-)producing works of art – the printing press, photography, video, the Internet – have always been among the first used for producing and distributing pornographic works. And in the Internet age, porn has become more widespread than ever. Wondracek et al. (2010, PDF) report in their paper entitled “Is the Internet for Porn?” that 42.7% of all Internet users view pages with pornographic content. The popularity of peer-to-peer file-sharing technologies is also connected to access to pornographic content (see Coopersmith 2006, PDF).
In spite of these well-known facts regarding the importance of pornography in the context of new copyright-related technologies, talking about the role of both producers and consumers of pornographic content in regulatory struggles is uncommon in journalistic and scholarly analyses alike. As a first step to acknowledging this role, I just want to list examples I can recall where porn producers and/or users have been influential in the field of copyright-related struggles:
Sigrid Quack and Leonhard Dobusch comment on the recent developments in the German “Piratenpartei” around the Pirate Party Convention 2012.
With the German Pirate Party continuously rising in national polls – currently ranging between 10 and 13 percent (see Figure below) – media attention on the party’s convention last weekend had reached a new height.
And this media coverage is increasingly becoming transnational. Germany’s largest weekly Der Spiegel devoted an extensive feature article in English to the phenomenon, trying to explain questions such as “Why the Pirates Are Successful”:
“This is precisely the Pirates’ biggest attraction: transparency and participation, as well as a healthy dose of freshness and otherness. This sometimes makes the Pirates seem childishly naïve and chaotic, and yet they seek to make do without back-room backslapping and conventional political smoothness.”
But criticism is also voiced in the recent coverage. The Economist, for example, calls the Pirates in its recent printed edition “slightly barmy”, and the Sueddeutsche Zeitung published a series of articles on unfortunate comparisons of the Pirate Party’s rise with that of the NSDAP by the secretary of the Berlin Pirate caucus (German article) and on some right wingnuts in the party who, among other statements, denied the Holocaust (German article).
This post is provided by our guest blogger Moritz Heumer.
The winning streak of the German Pirate Party is continuing with its latest success of entering the Saarland parliament. Recent polls for the national election suggest that the Pirates might reach 11 percent of the vote. The continued success of the Pirates raises doubts about claims that their gains are based entirely on protest voters. What are the supporters of the Pirate Party then voting for? In this blog post I will argue that the Pirates are addressing highly topical issues that are not dealt with by other parties. By doing so they appeal primarily to young voters, especially digital natives. Based on an analysis of the German Pirate Party’s wikis, I was able to trace its links to other actors which are part of a social movement with transnational scope. This social movement is aiming for policy changes in different fields connected with issues arising from the digital revolution. The formation of parties is one element of the mobilization repertoire of this movement. The rise and diffusion of Pirate Parties, itself a transnational phenomenon, therefore cannot be understood without reference to the frame created by other actors who previously dealt with similar issues.
In the academic world, the conflict between research institutions and publishers over the latter’s reluctance to embrace open access strategies has been looming for years. While the Internet makes the distribution of research much cheaper and easier, subscription fees for the most important journals have kept rising. Already in 2009, the MIT faculty had unanimously adopted a university-wide Open Access rule (“Universities as Copyright Regulators: Power and Example“). In 2012, we can finally observe open battles on the issue.
After more than 10,000 researchers joined the boycott of Elsevier earlier this year (see also “Elsevier Withdraws Support for Research Works Act, Continues Fight Against Open Access“), last week Harvard University issued an official “Memorandum on Journal Pricing“. After criticizing the “untenable situation” in which “many large journal publishers have made the scholarly communication environment fiscally unsustainable and academically restrictive”, the memorandum suggests the following nine points to faculty and students (F) and the Library (L):