In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm.
A common complaint among Google’s competitors in fields such as Internet maps is that Google’s search algorithm favors Google’s own services over rival offerings in its search results. For instance, the FairSearch coalition, led by Microsoft, Oracle and others, calls for more transparency in displaying search results and harshly criticizes Google:
Based on growing evidence that Google is abusing its search monopoly to thwart competition, we believe policymakers must act now to protect competition, transparency and innovation in online search.
Given Google’s market dominance in Europe, with a share of over 90 percent in core markets such as Germany, such allegedly discriminatory practices have led to an antitrust investigation by the European Commission (EC). However, providing reproducible evidence of such discriminatory search results is difficult. Google is not only constantly changing its search algorithm (see “Algorithm Regulation #4: Algorithm as a Practice”) but also increasingly personalizing search results; both characteristics of contemporary search algorithms make it difficult to compare search results over time.
Dave Itzkoff hit the nail on the head with the following opener to his 2010 New York Times article on the heirs to Sherlock Holmes:
“For a 123-year-old detective, Sherlock Holmes is a surprisingly reliable earner.”
In a more recent guest post at the 1709 blog, Miri Frankel reports on a new legal battle over the copyright expiration of some works by Arthur Conan Doyle, the creator of the fictional character Sherlock Holmes:
In February Leslie Klinger, a Los Angeles attorney, filed a lawsuit against the estate of Sir Arthur Conan Doyle — the creator and author of a series of fictional works featuring legendary investigator and crime-solver Sherlock Holmes. Mr Klinger is the author of numerous books and articles relating to the “Canon of Sherlock Holmes” […] For years, the Conan Doyle Estate has demanded and collected licensing fees from authors who created works drawing from or based on the Sherlock Holmes character or other elements from the world of Sherlock Holmes. […] But Mr Klinger’s view, and the view of other, sympathetic authors who have created new stories based on elements from the public domain works of Sir Conan Doyle, is that these licensing fees are not necessary, and the Conan Doyle Estate should not be allowed to threaten them with lawsuits to extract licensing fees. The Complaint asserts that only new, original elements first published in the stories that remain under copyright protection are still protectable; copyright no longer protects, however, any elements that had already been published in earlier Sherlock Holmes works, so all such elements are now in the public domain.
Interestingly, Klinger is making his argument not only in court but also on a website entitled “Free Sherlock!“, where he is even asking for donations “to offset legal fees and expenses of the litigation.”
Inspired in part by the series on algorithm regulation on this blog, I am currently working with Claudia Müller-Birn on the issue of algorithmic governance in the case of Wikipedia. In the course of this research project, I stumbled upon the case of flagged revisions/sighted versions, which nicely illustrates the concept of algorithmic governance.
With the German Wikipedia taking the lead in 2006, some Wikipedia language versions have introduced sighted versions of articles as a measure against vandalism and to improve article credibility. The concept is described in the English-language Wikipedia as
a system whereby users who are not logged in may be presented with a different version of an article than users who are. Articles are validated that they are presentable and free from vandalism. The approved versions are known as Sighted versions. All logged-in users will continue to see and edit the most recent version of a page. Users who are not logged in will initially see the most recent sighted version, if there is one. If no version is sighted, they see the most recent one, as happens now. Users looking at a sighted version can still choose to view the most recent version and edit it.
Since its introduction in the German Wikipedia, the concept has evolved into a complex set of rules determining how Wikipedia edits are sighted. The core idea is that registered Wikipedia editors automatically receive the status of passive or active “reviewer” depending on their editing history. Edits by a user with passive reviewer status are automatically considered sighted; active reviewers have additional rights, such as marking versions as sighted or removing that status.
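To make the mechanism more tangible, here is a minimal sketch of the two rules just described – which version an anonymous reader is shown, and when an edit counts as sighted. It is written in Python with invented names and a made-up edit-count threshold; the actual MediaWiki FlaggedRevs extension is considerably more elaborate.

```python
# A minimal sketch of the sighted-versions mechanism described above.
# All names, thresholds, and data structures are illustrative assumptions;
# the real MediaWiki FlaggedRevs extension is far more elaborate.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class User:
    name: str
    logged_in: bool = False
    edit_count: int = 0
    active_reviewer: bool = False  # additional rights, granted on top of passive status

    @property
    def passive_reviewer(self) -> bool:
        # Assumption: passive status is granted automatically once an
        # editor's history passes some threshold (the real criteria are richer).
        return self.logged_in and self.edit_count >= 50

@dataclass
class Revision:
    text: str
    author: User
    sighted: bool = False

@dataclass
class Article:
    revisions: List[Revision] = field(default_factory=list)

    def edit(self, user: User, text: str) -> Revision:
        rev = Revision(text, user)
        # Edits by passive (or active) reviewers count as sighted automatically.
        rev.sighted = user.passive_reviewer or user.active_reviewer
        self.revisions.append(rev)
        return rev

    def set_sighted(self, reviewer: User, rev: Revision, value: bool = True) -> None:
        # Only active reviewers may mark a version as sighted or remove the flag.
        if reviewer.active_reviewer:
            rev.sighted = value

    def version_for(self, reader: User) -> Optional[Revision]:
        if not self.revisions:
            return None
        if reader.logged_in:
            return self.revisions[-1]  # logged-in users always see the latest version
        sighted = [r for r in self.revisions if r.sighted]
        # Anonymous readers get the latest sighted version, or the latest version if none is sighted.
        return sighted[-1] if sighted else self.revisions[-1]
```

The point of the sketch is merely that “sighting” is a property of revisions derived from the status of their authors, and that reviewer status is in turn derived from editing histories – governance encoded in software rather than negotiated case by case.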
In the series “algorithm regulation”, we discuss the implications of the growing importance of technological algorithms as a means of regulation in the digital realm.
Today I stumbled, via Twitter, upon the website “Google Algorithm Change History”, which chronologically documents all changes to the core search algorithm publicly announced by Google. The most striking feature of the site is the sheer number of changes:
Each year, Google changes its search algorithm up to 500 – 600 times. While most of these changes are minor, every few months Google rolls out a “major” algorithmic update that affects search results in significant ways.
In other words, it no longer makes sense to speak of “the Google algorithm”: there is not one algorithm but a set of algorithm-related practices. In line with the practice turn in contemporary social theory (see Schatzki et al. 2001) and similar to perspectives such as strategy-as-practice, we may need a practice perspective on algorithms to better understand how algorithm regulation works.
When looking at the frequent – not to say constant – changes in Google’s search algorithm, it also becomes obvious how misleading the routine comparisons with the Coca-Cola formula are, such as the following from a Wall Street Journal blog:
Google is very cagey about its search algorithm, which is as key to its success as Coke’s formula is to Coca-Cola.
Google’s search algorithm is not a static formula, and it should therefore not be treated as a trade secret either. Actually, if the search algorithm were a mere formula, we would see much more competition in search. Google is practicing algorithmic search, and it is these continuous changes – which mostly rest on access to unimaginably large data sets of search and usage practices – that are difficult for competitors to imitate.
With regard to the issue of algorithm regulation, a practice perspective sensitizes us to phenomena such as regulatory drift. In a paper on transnational copyright regulation, Sigrid Quack and I describe regulatory drift as “changes in meaning and interpretation, which result from continuous (re-)application of certain legal rules” (see also Ortmann 2010). In the context of algorithms, the term might refer to the sum of continuous revisions and (seemingly) minor adaptation practices, which in the end lead to substantial and partly unintended changes in regulatory outcomes.
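As a purely hypothetical illustration of such drift in the algorithmic case (not a model of Google’s actual ranking), the following sketch applies a year’s worth of “minor” updates – each nudging one ranking weight by at most two percent – and counts how often the accumulated nudges are enough to change which of two pages ranks first:

```python
# A toy illustration of "regulatory drift": many individually minor tweaks
# to ranking weights accumulate into a change in outcomes. All features,
# pages, scores, and thresholds here are hypothetical.
import random

FEATURES = ["links", "freshness", "locality", "clicks"]

# Two hypothetical pages with fixed feature scores.
PAGES = {
    "page_a": {"links": 0.9, "freshness": 0.3, "locality": 0.2, "clicks": 0.6},
    "page_b": {"links": 0.5, "freshness": 0.7, "locality": 0.6, "clicks": 0.4},
}

def top_result(weights):
    """Return the page with the highest weighted score."""
    score = lambda page: sum(weights[f] * PAGES[page][f] for f in FEATURES)
    return max(PAGES, key=score)

def one_year_of_minor_updates(n_updates=550, max_nudge=0.02):
    """Apply n_updates tweaks, each nudging one weight by at most ±2 percent,
    and report whether the top-ranked page changed as a result."""
    weights = {f: 1.0 for f in FEATURES}
    before = top_result(weights)
    for _ in range(n_updates):
        f = random.choice(FEATURES)
        weights[f] *= 1 + random.uniform(-max_nudge, max_nudge)
    return top_result(weights) != before

flips = sum(one_year_of_minor_updates() for _ in range(1000))
print(f"top result changed in {flips} of 1000 simulated 'years' of minor updates")
```

Whatever the exact count, the point is qualitative: each individual tweak is negligible, yet their accumulation can change which result comes out on top – a shift in regulatory outcome that no single update intended.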
(leonhard)
The interview with Lawrence Lessig featured below was conducted in late September by Markus Beckedahl and John Weitzmann, leaders of the German Creative Commons affiliate organizations, and transcribed by Christian Wöhrl. A German version was published yesterday at netzpolitik.org. We are pleased to publish the English original of the interview and invite others to share it as long as they abide by the terms of the Creative Commons Attribution license.
Maybe you’ve answered this question too many times, but why did you found Creative Commons?
Lawrence Lessig: Well, there’s a narrow reason which was that at the time we were litigating the Eldred vs. Ashcroft case, and Eric Eldred was skeptical about whether we could win that case. And he said that he wanted to make sure that out of that litigation wouldn’t just come a losing case at the Supreme Court but something that would be a more fundamental foundation to support what we’ve come to call Free Culture. So I began to think that was right and recognized, more importantly, that if we’re ever going to get real change, we would have to build the movement of understanding in people. That wasn’t going to come from the top down, it had to come from the bottom up. So a number of us began to talk about what was the way to craft such a movement and the idea of giving people a simple way to affirm that they don’t believe in either extreme of perfect control or no rights, and what’s the best way to do it. So that’s what launched Creative Commons.
So there were already several Open Content licenses. Why did you develop your own CC licenses rather than just supporting existing FSF licenses, for example?
Lawrence Lessig: Well, there were two reasons. First, we thought we needed to have a more flexible and wider range of licenses. So that the, you know, like, the Free Document License is a particular version of a free license that might not be appropriate for all kinds of material – number one. But number two, we thought it was really important to understand your own licenses; it was very important to begin to embed an architecture that could be, number one, human-readable, understandable, and, number two, machine-readable, and, number three, at the very bottom, legally enforceable. And none of the other licensing structures that were out there were thinking of this particular mode of policy making, to have to speak three languages at the same time. So that’s what led us to architect this initially.
And it was our commitment from the very beginning, and, you know, we achieved this with the Free Document License and we’re still talking about this with the Free Art License to enable interoperability or portability between free licenses. So our idea was eventually that it didn’t matter which of the free licenses you were in as long as you could move into the equivalent free license that would be CC compatible.
Creative Commons’ birthday party week is hardly over, and the organization responsible for the most common open content licenses is back to its core business: acting as a license steward. As repeatedly reported on this blog (e.g. “Discussing the NC Module“), the NonCommercial (NC) license module has attracted a substantial amount of criticism over the years. Suggestions regarding the NC module range from the fundamental, such as getting rid of the module entirely (as proposed e.g. by the Students for Free Culture), to the moderate, such as clarifying the meaning of the NC clause.
In his most recent statement on the issue, Creative Commons’ Timothy Vollmer indicates that, in addition to some uncontroversial suggestions, changing the name of “NonCommercial” to “Commercial Rights Reserved” is on the table:
This last point warrants a specific mention here, as it would be a big (and potentially sensitive) change to the branding of the Creative Commons NonCommercial licenses. This proposal is for a simple renaming of the “NonCommercial” license element to “Commercial Rights Reserved,” without any change in the definition of what it covers. Renaming it to something that more accurately reflects the operation of the license may ensure that it is not unintentionally used by licensors who intend something different. For more information about the idea and rationale behind this proposal, please see the CC wiki page on the topic.
At the heart of culture lies creative recursion: re-applying creative practices to artifacts resulting from previous creative practices. Remix culture could then be defined as processes of creative recursion that make this recursion recognizably visible as such. This is what makes a remix reflexive, as Eduardo Navas explains over at remixtheory.net:
[remix] allegorizes and extends the aesthetic of sampling, where the remixed version challenges the aura of the original and claims autonomy even when it carries the name of the original; material is added or deleted, but the original tracks are largely left intact to be recognizable.
As a result, works of remix always communicate simultaneously on at least two levels: the aesthetics of the remix as a new work and its status as a remix, referencing the remixed works. A nice example of the communicative power of remixing as recognizable creative recursion is provided by the most recent election campaign of the Pirate Party of Lower Saxony in Germany. To communicate ‘piracy’ as a brand, the Pirate Party creatively ‘pirated’ prominent brands. Below are several of the campaign posters, all of which can be found on the campaign portal ideenkopierer.de (“idea copiers”; some of the translations are taken from TorrentFreak):
We may not have Alps in Lower Saxony, but we want to ensure that students continue to know that cows are not purple.