
The organization Creative Commons, which is responsible for the set of alternative copyright licenses of the same name, was officially founded exactly 10 years ago. Historical documents from the meetings in the run-up to founding Creative Commons are still available online at Harvard’s Berkman Center for Internet & Society. At the official Creative Commons birthday party in Germany I had the honor of presenting some useless historical facts from these and other pre- and post-foundation documents. The slides of my talk are embedded below.

[Update]

A video of my German talk is now available on Vimeo.

(leonhard)

This is a shortened and slightly altered English version of a German blog post at netzpolitik.org.

The video “Gangnam Style” by the Korean rapper Psy is now the most-watched YouTube clip ever, with about 870 million views and counting. And while the official version is blocked on YouTube in Germany due to the ongoing copyright struggle between YouTube and the German collecting society GEMA (see “Cracks in the Content Coalition”), some unblocked copies are available on YouTube as well; in addition, browser extensions such as YouTube Unblocker allow watching the original version even in Germany.

The viral success of Gangnam Style not only made Psy world-famous but also had further consequences, as documented on the Wikipedia page on the “Gangnam Style phenomenon”:

In 2012, the South Korean government announced that “Gangnam Style” had brought in $13.4 million to the country’s audio sector. [...] The British multinational grocery and retailer Tesco reported that its total sales of Korean food had more than doubled as a result of the popularity of “Gangnam Style”.

Read the rest of this entry »

At the heart of culture lies creative recursion: re-applying creative practices to artifacts that resulted from previous creative practices. Remix culture could then be defined as processes of creative recursion that keep this recursion visible and recognizable as such. This is what makes a remix reflexive, as Eduardo Navas explains over at remixtheory.net:

[remix] allegorizes and extends the aesthetic of sampling, where the remixed version challenges the aura of the original and claims autonomy even when it carries the name of the original; material is added or deleted, but the original tracks are largely left intact to be recognizable.

As a result, works of remix always communicate simultaneously on at least two levels: the aesthetics of the remix as a new work and its status as a remix, referencing the remixed works. A nice example of the communicative power of remixing as recognizable creative recursion is provided by the most recent election campaign of the Pirate Party of Lower Saxony in Germany. To communicate ‘piracy’ as a brand, the Pirate Party creatively ‘pirated’ prominent brands. Several of the respective campaign posters are shown below; all of them can be found on the campaign portal ideenkopierer.de (“idea copiers”; some of the translations are taken from Torrentfreak):

The tenderest temptation since parties were invented.

We may not have Alps in Lower Saxony, but we want to ensure that students continue to know that cows are not purple. Read the rest of this entry »

In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm. 

Yesterday, YouTube proudly announced on its blog that it had improved its “Content ID” system, which allows rights holders to automatically detect uploaded content that contains potentially infringing works, by introducing a new appeals process:

Users have always had the ability to dispute Content ID claims on their videos if they believe those claims are invalid. Prior to today, if a content owner rejected that dispute, the user was left with no recourse for certain types of Content ID claims (e.g., monetize claims). Based upon feedback from our community, today we’re introducing an appeals process that gives eligible users a new choice when dealing with a rejected dispute. When the user files an appeal, a content owner has two options: release the claim or file a formal DMCA notification.
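
The key change is that a rejected dispute is no longer a dead end for certain claim types. As a rough, purely illustrative sketch of the claim life cycle described above (the state and function names are my own invention, not YouTube’s actual API), the new flow looks roughly like this:

```python
from enum import Enum, auto

class ClaimState(Enum):
    """States of a Content ID claim; names are illustrative, not YouTube's."""
    ACTIVE = auto()            # claim stands, e.g. the video is monetized by the claimant
    DISPUTED = auto()          # uploader disputes the claim
    DISPUTE_REJECTED = auto()  # content owner rejects the dispute
    APPEALED = auto()          # new in 2012: eligible uploaders can appeal a rejected dispute
    RELEASED = auto()          # content owner releases the claim
    DMCA_NOTIFIED = auto()     # content owner files a formal DMCA takedown notice

def appeal(state: ClaimState) -> ClaimState:
    """An eligible uploader appeals a rejected dispute."""
    if state is not ClaimState.DISPUTE_REJECTED:
        raise ValueError("only a rejected dispute can be appealed")
    return ClaimState.APPEALED

def respond_to_appeal(release_claim: bool) -> ClaimState:
    """On appeal, the content owner has exactly two options:
    release the claim or file a formal DMCA notification."""
    return ClaimState.RELEASED if release_claim else ClaimState.DMCA_NOTIFIED
```

The interesting part is the last step: an appeal forces the content owner either to back down or to put a formal, legally accountable DMCA notice on record, instead of simply rejecting the dispute within YouTube’s private claims system.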

In addition, YouTube claims to have made its algorithms “smarter” to reduce the number of unintentional Content ID claims:

Content owners have uploaded more than ten million reference files to the Content ID system. At that scale, mistakes can and do happen. To address this, we’ve improved the algorithms that identify potentially invalid claims.

Read the rest of this entry »

Today the European Parliament passed with an overwhelming majority – 531 voting in favor, 11 against and 65 abstentions – a compromise proposal for a directive on certain permitted uses of orphan works. In Europe, orphan works are a much greater problem than, for example, in the USA, because European copyright has featured automatic protection for much longer. As a consequence, finding rights holders is more difficult than in the USA, where works had to be registered until the end of the 1980s. And due to ever-longer protection terms, the number of orphan works will increase further every year, making access to our common cultural heritage increasingly difficult.

The so-called orphan works directive addresses the problem by allowing public-sector institutions such as libraries, museums, archives, educational establishments and film heritage institutions to digitize orphan works and make them publicly available after conducting a “diligent search”. What constitutes a “diligent search” is outlined in more detail in a “Memorandum of Understanding on Diligent Search Guidelines for Orphan Works”.

Read the rest of this entry »

In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm. 

Earlier this year, Google revealed that it routinely removes search results that link to material allegedly infringing copyrights, following removal requests from copyright holders (see “New Layer of Copyright Enforcement: Search“). Since this announcement, the number of removed search results per month has quadrupled (see Figure below).

Yesterday, Google announced that, in addition to removing search results, it will also adapt its ranking algorithm:

Starting next week, we will begin taking into account a new signal in our rankings: the number of valid copyright removal notices we receive for any given site. Sites with high numbers of removal notices may appear lower in our results.

As discussed in the first entry of this series on algorithm regulation, the technological layer of regulation is becoming increasingly important for copyright enforcement. But Google’s move to tinker with its most precious asset, the search algorithm, also shows that technological regulation of this kind may directly result from stakeholder negotiations.
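
Conceptually, the announced change simply adds one more input to the scoring function that orders results. The following toy function is only meant to illustrate that logic; all names, weights and the normalization by site size are invented for the sake of the sketch and have nothing to do with Google’s actual (and secret) ranking code:

```python
def demotion_factor(valid_removal_notices: int, indexed_pages: int,
                    weight: float = 0.5) -> float:
    """Toy copyright signal: the more valid removal notices a site has
    accumulated (relative to its size), the stronger the demotion.
    All names and weights are invented for illustration."""
    notice_rate = valid_removal_notices / max(indexed_pages, 1)
    return 1.0 / (1.0 + weight * notice_rate)

def ranking_score(relevance: float, valid_removal_notices: int,
                  indexed_pages: int) -> float:
    """Combine a conventional relevance score with the copyright signal."""
    return relevance * demotion_factor(valid_removal_notices, indexed_pages)
```

The point of the sketch is simply that, in such a scheme, sites attracting many notices slide down the results list rather than disappearing from it. Which signals enter such a function, and with what weight, is exactly the kind of question that gets settled in negotiations between Google and rights holders rather than in public rule-making.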

Read the rest of this entry »

Last week the European Parliament rejected the Anti-Counterfeiting Trade Agreement (ACTA, see also “ACTA as a Case of Strategic Ambiguity“) with 478 voting against the treaty, 39 in favour and 165 MEPs abstaining. Commenting on this outcome, Joe McNamee from the ACTA-critical NGO European Digital Rights (EDRi) stated that “ACTA is not the end. ACTA is the beginning.” In his optimistic account, the rejection of ACTA has substantially changed the debate on intellectual property rights regulation in Europe:

Thanks to SOPA, European citizens better understood the dangers of ACTA. Thanks to the anti-ACTA campaign, it would be politically crazy for the Commission to launch the criminal sanctions Directive. Thanks to ACTA, there is broad understanding in the European Parliament of just how bad IPRED really is and any review now, if the Commission has the courage to re-open it, is more likely to improve the Directive rather than increase its repressive measures.

However, a recent op-ed by Canadian copyright scholar Michael Geist illustrates why ACTA’s contents might not be so dead after all. Referring to leaked documents from negotiations between Canada and the EU Commission on the “Comprehensive Economic and Trade Agreement” (CETA), Geist writes:

According to the leaked document, dated February 2012, Canada and the EU have already agreed to incorporate many of the ACTA enforcement provisions into CETA, including the rules on general obligations on enforcement, preserving evidence, damages, injunctions, and border measure rules. One of these provisions even specifically references ACTA.

Read the rest of this entry »

In the series “algorithm regulation“, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm. 

Google’s recent move to advertise its practice of removing search results that link to material that allegedly infringes copyrights (see “New Layer of Copyright Enforcement: Search“) demonstrates the importance of a web service’s back-end for issues such as free speech or (actual) enforcement levels in certain fields of regulation such as copyright. In his contribution to the “Social Media Reader” (2012, edited by Michael Mandiberg), Felix Stalder puts this insight into a broader context when reflecting on “the front and the back of the social web“. He criticizes the “overly utopian” picture of the new digital possibilities drawn by scholars such as Clay Shirky, author of “Here Comes Everybody“, which he attributes to “focusing primarily on the front-end” of web technologies:

The social web enables astonishingly effective, yet very lightly organized cooperative efforts on scales previously unimaginable. However, this is only half of the story, which plays out on the front end. We cannot understand it if we do not take the other half into account, which play out on the back-end. New institutional arrangements make these ad-hoc efforts possible in the first place. There is a shift in the location of the organizational intelligence, away from the individual organization towards the provider of the infrastructure. It is precisely because so much organizational capacity resides now in the infrastructure that individual projects do not need to (re)produce it and thus appear to be lightly organized. If we take the creation of voluntary communities and the provision of new infrastructures as the twin dimensions of the social web, we can see that the phenomenon as a whole is characterized by two contradictory dynamics. One is decentralized, ad-hoc, cheap, easy-to-use, community-oriented, and transparent. The other is centralized, based on long-term planning, very expensive, difficult-to-run, corporate, and opaque. If the personal blog symbolizes one side, the data-center represents the other.

Read the rest of this entry »

Recently Google announced an extension to its “Transparency Report“, which now also includes a section on requests to remove search results that link to material that allegedly infringes copyrights. Last month, Google processed 1,294,762 copyright removal requests from 1,109 reporting organizations, representing 1,325 copyright owners. The Figure below illustrates how the number of requests increased between July 2011 and mid-May 2012.

The growing number of removal requests points to the relevance of search technology as a means of copyright enforcement. Since, for many Internet users, what is not found by Google appears not to exist, removing entries from Google’s results lists is obviously a powerful tool for private copyright enforcement. Such private enforcement practices, however, come with several downsides:

Read the rest of this entry »

In European regulatory discourse as well as in copyright research, there is a debate about whether the US Fair Use model is better suited to dealing with innovation in general, and digital challenges in particular, than the European system of exceptions. It therefore makes sense to discuss the state of the art of research on Fair Use in the US and what can be learned from it in Europe.

In the course of a visit to Europe, Pamela Samuelson from UC Berkeley Law School & School of Information gave an interesting talk about “Fair Use in Europe? Lessons from the US and Open Questions”. Her main message can be summarized in two points: First, flexible regulation such as the US Fair Use clause is better suited to rapid technological change than the comparably static system of exceptions and limitations in European copyright. To illustrate this point, Samuelson mentioned several innovations, such as scholarly data-mining in Google Book Search (Ngram Viewer)* or Brewster Kahle’s “Wayback Machine”, that would have been much more difficult to realize without the Fair Use exemption.

Second, Samuelson explicitly did not recommend getting rid of or avoiding specific exceptions altogether; rather, keeping limitations and exceptions that provide legal certainty would be desirable even when introducing some form of fair-use-like clause into the European copyright system.

Read the rest of this entry »


Copyright Information

All texts on governance across borders are licensed under a Creative Commons Attribution-Share Alike 3.0 Germany License.