This week the EU Commission published its report (PDF) on the responses to the public consultation on EU copyright held earlier this year. The consultation drew a comparatively high number of responses – about 11,000 in total – not least due to initiatives such as fixcopyright.eu (targeting end users) and creatorsforeurope.eu (targeting authors and performers). While over at IPKat copyright buffs are already delving into the details of the report, I tried to look at the bigger picture here: what do we learn about the state of copyright at large? And what overall direction should copyright reform take? With regard to both questions the report is quite instructive because of its clear and straightforward structure.
The report is structured along the 80 questions of the consultation, which are distributed across 24 issue sections. Within each of these issue sections, the report distinguishes between the different stakeholder groups that took part in the consultation (see chart below).
The Council of Europe invited me to contribute the following input paper (PDF) on “Need for New Regulation to Enhance Creativity in the Digital Age: The Cases of User-generated Content and Cultural Heritage Institutions” for the Conference on “Creating an enabling environment for digital culture and for empowering citizens”, taking place on 4–5 July 2014 in Baku, Azerbaijan.
With the growing economic importance of knowledge and the technological change related to the Internet and digitization, the regulation of knowledge and information goods has increasingly become an issue of transnational contestation. The role of copyright law in particular has changed, since virtually all forms of online communication and interaction require copying and distributing content and thereby become copyright-related. In a way, copyright laws have become the core regulatory device for the digital information society in general and for digital creative practices in particular.
At the same time, regulatory struggles in the copyright realm date back a long way. As early as Kant (1785) and Fichte (1793), scholars distinguished between the different functional groups affected by copyright laws, among which publishers/copyright owners, authors/creators, and consumers/users are the most important. These groups are still the ones most affected by copyright regulation, even though copyright today also covers cinematographic works and computer programs, and nearly all types of work can be reproduced in digital form. Balancing the interests of these groups therefore remains the main task for copyright regulators at both the international and the national level.
And while technological change has always created both opportunities for new forms of creativity and problems for pre-existing business models in the copyright realm (Wu 2010), the all-encompassing and highly dynamic impact of new digital technology on nearly all fields and types of creative activity brings with it enormous regulatory challenges. First, digitization makes it possible to separate content from medium – a constellation of major importance for the copyrighted content industry, since it sells CDs, DVDs, and books, not music, movies, or novels. From a regulation perspective, this means that new rules – be they publicly legislated or privately enforced via license agreements – tend to address particular usage practices more directly, affecting traditional knowledge brokers such as archives, libraries or museums. Second, loss- and lag-free copying of digital content via personal computers and the Internet enables new forms of private copying and peer-to-peer distribution of content on a massive scale. The regulatory challenge here is to allow these new technologies to unfold while at the same time preventing a massive increase in copyright infringement due to piracy. Third, thanks to decreasing production and distribution costs, many more people actively engage in content creation and make their works accessible directly to the public (sometimes referred to as ‘user-generated content’), thereby often re-using and transforming pre-existing copyrighted works. How to regulate these new forms of derivative creativity and creative consumption is again a task for regulators to address.
In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm.
With a market share of over 90 percent in Europe, the Google search engine – or rather its search algorithm – decides what is relevant on an issue and what is not. Any information not placed on the first few pages of Google’s search results will hardly ever be found. On the other hand, personal information listed prominently in these results may haunt you forever. The latter issue was recently tried before the European Court of Justice (ECJ), which ruled (C-131/12) that
the activity of a search engine consisting in finding information published or placed on the internet by third parties, indexing it automatically, storing it temporarily and, finally, making it available to internet users according to a particular order of preference must be classified as ‘processing of personal data’
and that, under certain not very clearly spelled out conditions relating to the data subject’s rights to privacy,
the operator of a search engine is obliged to remove from the list of results displayed following a search made on the basis of a person’s name links to web pages, published by third parties and containing information relating to that person.
By crafting such a “right to be forgotten”, the ECJ effectively regulates Google’s search algorithms. In other words, we can observe the ECJ regulating Google’s algorithmic regulation. In response to the ruling, Google has already set up an online form for deletion requests.
About half a year ago, the German Internet association D64 – Center for Digital Progress launched an initiative to promote the use of Creative Commons licenses, which I co-organized. Last week, with the help of graphic designers Sara Lucena and Nico Roicke, we put together a very nice infographic on “Creative Commons in Numbers”. Of course, some of the numbers are only estimates and not all of them are up to date, but taken together they give a good overall impression of Creative Commons usage on the internet. Enjoy & share!
Creative Commons licenses are essential to virtually all of the different “open movements” that have emerged beyond open source software over the past two decades. In the realms of open education, open science and open access, Creative Commons licenses are the standard way to make content open to the wider public. They are also widely used in fields such as open data and open government to make it easier for third parties to re-use publicly funded content.
In spite of this vital role in different fields of openness, not to mention all the open Wikimedia projects, Creative Commons has long struggled with its role. During its first decade, Creative Commons focused almost exclusively on its role as a license steward, carefully abstaining from the political copyright activism typical of the open movements. Only very recently, following a speech by its founder Lawrence Lessig at the CC Global Summit 2013, did Creative Commons issue a policy statement on “Creative Commons and Copyright Reform”, saying that “the CC vision — universal access to research and education and full participation in culture — will not be realized through licensing alone.”
It is well known that YouTube serves as a platform for a huge variety of educational material. Most prominently, Salman Khan (“Khan Academy“) began his career as a provider of Massive Open Online Courses (MOOCs) by posting teaching videos on YouTube.
In addition to educational material on all kinds of topics provided by third parties, Google increasingly engages in the production of its own educational content to improve the quality of user-generated content published on its platform. Google’s obvious calculation: better videos means more views means more ad revenue.
Initially, however, Google’s first educational videos were made in mere self-defense against accusations of copyright infringement on its platform. While rights holders complained about and blocked unauthorised use of their content, users protested against the resulting deletion of their accounts (see “Private Negotiation of Public Goods: Collateral Damage(s)“). In this situation, Google launched its “YouTube Copyright School”, which so-called “multiple infringers” have to watch to re-open their accounts (see “Crazy Copyright Cartoon: The YouTube Copyright School“).
After reaching between 10 and 13 percent in German national polls in mid-2012 (see “Around the German Pirate Party Convention 2012“), the actual election result of 2.2 percent for the Pirate Party – only 0.2 percent more than in 2009 – smashed all hopes of entering the German Bundestag. The many explanations for the party’s demise, which was as quick as its rise, include the following:
- Public internal conflicts: as is often the case with new parties, initial success attracts many different constituencies, each bringing their own and often conflicting ideas and opinions. In finalizing positions, this diversity naturally leads to conflicts, with some members leaving the party again. In the case of the Pirate Party, however, its self-imposed radical transparency put all of these conflicts out in public for anyone to see – in all their nastiness.
- Change in media narratives: in the beginning, the media framed awkward statements or the lack of political positions as “interesting”, “fresh” or “authentic” (see, for example, an article in the quality daily Sueddeutsche in November 2011). When prominent members such as Marina Weisband stepped down and the party began to drop in the polls, this narrative turned 180 degrees. Authentic and honest admission of nescience suddenly became incompetent ignorance. As with the overly positive reporting before, the narrative and the change in the polls fed on each other.
- New protest party Alternative for Germany (AfD): part of the explanation for the Pirate Party’s success was its ability to collect protest votes (see also “German Pirates’ Winning Streak: More than Protest“). In this regard, the newly founded and Euro-critical AfD did a much better job this Sunday and nearly reached the five percent election threshold.
- Failure to deliver on promise of ‘liquid democracy‘: in addition to calls for copyright reform and government transparency, one of the core promises of the Pirate Party in Germany was to improve democratic participation with the help of new technological means. However, the party could not agree to implement a “permanent general assembly” with the help of its voting and discussion tool “liquid feedback“, thereby substantially undermining the credibility of calls for implementing similar tools elsewhere.
- Missed opportunity of the NSA scandal: even though the leaks by Edward Snowden directly addressed core issues of the Pirate Party movement, such as privacy and anti-surveillance, the German Pirates were not able to capitalize on them. Unlike the anti-ACTA protests (see “ACTA as a Case of Strategic Ambiguity“), where a clear goal (‘Stop ratification of ACTA!’) and a clear addressee (the European Parliament) helped to mobilize, the Pirate Party did not manage to identify an enemy or suggest concrete measures.
Dave Itzkoff hit the nail on the head with the following opener to his 2010 New York Times article on the heirs to Sherlock Holmes:
“For a 123-year-old detective, Sherlock Holmes is a surprisingly reliable earner.”
In a more recent guest post at the 1709 blog, Miri Frankel reports about a new legal battle with regard to the copyright expiration date of some works of Arthur Conan Doyle, the creator of the fictional character Sherlock Holmes:
In February Leslie Klinger, a Los Angeles attorney, filed a lawsuit against the estate of Sir Arthur Conan Doyle — the creator and author of a series of fictional works featuring legendary investigator and crime-solver Sherlock Holmes. Mr Klinger is the author of numerous books and articles relating to the “Canon of Sherlock Holmes” […] For years, the Conan Doyle Estate has demanded and collected licensing fees from authors who created works drawing from or based on the Sherlock Holmes character or other elements from the world of Sherlock Holmes. […] But Mr Klinger’s view, and the view of other, sympathetic authors who have created new stories based on elements from the public domain works of Sir Conan Doyle, is that these licensing fees are not necessary, and the Conan Doyle Estate should not be allowed to threaten them with lawsuits to extract licensing fees. The Complaint asserts that only new, original elements first published in the stories that remain under copyright protection are still protectable; copyright no longer protects, however, any elements that had already been published in earlier Sherlock Holmes works, so all such elements are now in the public domain.
Interestingly, Klinger is making his arguments not only in court: he has also launched a website entitled “Free Sherlock!“, where he is even asking for donations “to offset legal fees and expenses of the litigation.”
Today I stumbled via Twitter upon the website “Google Algorithm Change History”, which chronologically documents all changes to the core search algorithm publicly announced by Google. The most striking feature of the site is the sheer number of changes:
Each year, Google changes its search algorithm up to 500 – 600 times. While most of these changes are minor, every few months Google rolls out a “major” algorithmic update that affect search results in significant ways.
In other words, it no longer makes sense to speak of “the Google algorithm”, because there is not one algorithm but a set of algorithm-related practices. In line with the practice turn in contemporary social theory (see Schatzki et al. 2001), and similar to perspectives such as strategy-as-practice, we might require a practice perspective on algorithms to better understand how algorithm regulation works.
Looking at the frequent – not to say constant – changes in Google’s search algorithm, it also becomes obvious how misleading the recurring comparisons with the Coca-Cola formula are, such as the following from a Wall Street Journal blog:
Google is very cagey about its search algorithm, which is as key to its success as Coke’s formula is to Coca-Cola.
Google’s search algorithm is not a static formula, and it should therefore not be treated as a mere trade secret either. Actually, if the search algorithm were a mere formula, we would see much more competition in search. Google is practicing algorithmic search, and it is these continuous changes – which mostly rest on access to unimaginably big data sets of search and usage practices – that are difficult for competitors to imitate.
With regard to the issue of algorithm regulation, a practice perspective sensitizes us to phenomena such as regulatory drift. In a paper on transnational copyright regulation, Sigrid Quack and I describe regulatory drift as “changes in meaning and interpretation, which result from continuous (re-)application of certain legal rules” (see also Ortmann 2010). In the context of algorithms, the term might refer to the sum of continuous revisions and (seemingly) minor adaptation practices, which in the end lead to substantial and partly unintended changes in regulatory outcomes.
The interview with Lawrence Lessig featured below was conducted in late September by Markus Beckedahl and John Weitzmann, leaders of the German Creative Commons affiliate organizations, and transcribed by Christian Wöhrl. A German version was published yesterday at netzpolitik.org. We are pleased to publish the English original of the interview and invite others to share it as long as they abide by the terms of the Creative Commons Attribution license.

Maybe you’ve answered this question too many times, but why did you found Creative Commons?
Lawrence Lessig: Well, there’s a narrow reason, which was that at the time we were litigating the Eldred vs. Ashcroft case, and Eric Eldred was skeptical about whether we could win that case. And he said that he wanted to make sure that out of that litigation wouldn’t just come a losing case at the Supreme Court but something that would be a more fundamental foundation to support what we’ve come to call Free Culture. So I began to think that was right and recognized, more importantly, that if we’re ever going to get real change, we would have to build a movement of understanding in people. That wasn’t going to come from the top down, it had to come from the bottom up. So a number of us began to talk about what was the way to craft such a movement and the idea of giving people a simple way to affirm that they don’t believe in either extreme of perfect control or no rights, and what’s the best way to do it. So that’s what launched Creative Commons.
So there were already several Open Content licenses. Why did you develop your own CC licenses instead of just supporting existing FSF licenses, for example?
Lawrence Lessig: Well, there were two reasons. First, we thought we needed to have a more flexible and wider range of licenses. So that the, you know, like, the Free Document License is a particular version of a free license that might not be appropriate for all kinds of material – number one. But number two, we thought it was really important to understand your own licenses; it was very important to begin to embed an architecture that could be, number one, human-readable, understandable, and, number two, machine-readable, and, number three, at the very bottom, legally enforceable. And none of the other licensing structures that were out there were thinking of this particular mode of policy making, to have to speak three languages at the same time. So that’s what led us to architect this initially.
And it was our commitment from the very beginning, and, you know, we achieved this with the Free Document License and we’re still talking about this with the Free Art License to enable interoperability or portability between free licenses. So our idea was eventually that it didn’t matter which of the free licenses you were in as long as you could move into the equivalent free license that would be CC compatible.
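The machine-readable layer Lessig describes is conventionally expressed in a page’s HTML via the `rel="license"` link relation, which Creative Commons uses in its license markup. As a minimal sketch (assuming only this convention, using nothing but the Python standard library), a tool could detect such license declarations like this:

```python
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect the href targets of <a rel="license"> and
    <link rel="license"> elements, the convention used for
    machine-readable license statements in HTML."""

    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # rel can hold multiple space-separated tokens
        rel_tokens = (a.get("rel") or "").split()
        if tag in ("a", "link") and "license" in rel_tokens and a.get("href"):
            self.licenses.append(a["href"])

# Example page fragment declaring a CC BY 4.0 license
page = ('<a rel="license" '
        'href="https://creativecommons.org/licenses/by/4.0/">'
        'CC BY 4.0</a>')

finder = LicenseFinder()
finder.feed(page)
print(finder.licenses)  # ['https://creativecommons.org/licenses/by/4.0/']
```

This is only the discovery half of the architecture: the human-readable deed and the legally enforceable license text sit behind the URL the markup points to.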