My Musings About Intellectual Property Law

An IP nerd from a non-technical background, trying to make a name in the world.

Artificial Intelligence, Machine Ethics, and Regulatory Policy

An Introduction to the World of Artificial Intelligence

In today’s world of Internet of Things (IoT)-connected devices, which often rely on artificial intelligence (“AI”) technology, it has become increasingly important to think about how AI should be regulated in the years to come. I’ve been thinking about AI and its implications (for user privacy, ethics, implicit bias, and regulatory policy) ever since I bought a Siri-integrated iPhone. While I plan to do a follow-up post on the implications of AI for user privacy, this post is solely going to discuss AI and regulatory policy. In order to do that, I will first delve a little into robotic/machine ethics from a political philosophy perspective as background.

There are three main stages of artificial intelligence. The first stage is artificial narrow intelligence (ANI), where the AI is specialized and task-oriented – for example, being good at chess. We crossed this threshold on February 10, 1996, when IBM’s Deep Blue won its first game against world champion Garry Kasparov.[1] We are currently living in a world that runs on ANI: Apple’s Siri, Amazon’s Alexa, Netflix’s algorithm that recommends shows based on your previous viewing history, e-mail spam filters, and Google Translate, among many other examples we use on a regular basis. There are also more sophisticated ANI systems, such as IBM’s Watson.

The second stage is artificial general intelligence (AGI), where the AI can do anything a human can. We are quickly approaching this stage with Google DeepMind’s AlphaGo, which uses reinforcement learning and neural networks to mimic the learning process of a human brain. AlphaGo defeated Chinese Go master Ke Jie in May 2017.[2] Of course, this stage raises age-old questions about consciousness. But before I get into consciousness, let me quickly mention the last stage.

The third stage is artificial super intelligence (ASI), where the AI ranges from just a little bit smarter than humans to infinitely smarter and thus unknowable to us. I mention this now because if AGI raises questions about consciousness, ASI obviously does so even more. For the purposes of this thought piece, I’m going to talk about consciousness and robotic ethics in the context of both AGI and ASI, since ASI is hypothesized as a progression from the AGI stage.

Let’s talk about Consciousness

When is it that humans attain consciousness? Is it in the womb or soon after being born? And will it work the same way for AGI and ASI? According to this Scientific American article,

“consciousness with its highly elaborate content, begins to be in place between the 24th and 28th week of gestation. Roughly two months later synchrony of the electroencephalographic (EEG) rhythm across both cortical hemispheres signals the onset of global neuronal integration. Thus, many of the circuit elements necessary for consciousness are in place by the third trimester…The dramatic events attending delivery by natural (vaginal) means cause the brain to abruptly wake up, however. The fetus is forced from its paradisic existence in the protected, aqueous and warm womb into a hostile, aerial and cold world that assaults its senses with utterly foreign sounds, smells and sights, a highly stressful event.”

Because there is no “gestation period” for AI, consciousness will not arise in the same way that it does for humans even if AGI and ASI use neural networks like the human brain does.

I can only begin to hypothesize when consciousness arises for AI. Part of it depends on what one believes constitutes consciousness, and philosophers have long debated its meaning. For example, Descartes defined thought in the following way:

“Thought. I use this term to include everything that is within us in such a way that we are immediately aware [conscii] of it. Thus all the operations of the will, the intellect, the imagination and the senses are thoughts. I say ‘immediately’ so as to exclude the consequences of thoughts; a voluntary movement, for example, originates in a thought.”[3]

After Descartes, three main tenets of consciousness emerged – consciousness makes thought transparent to the mind, consciousness involves reflection, and conscious thought is intentional.[4]

And John Locke’s definition of “person” contained within it a definition of consciousness:

[A person] is a thinking intelligent Being, that has reason and reflection, and can consider it self as it self, the same thinking thing in different times and places; which it does only by that consciousness, which is inseparable from thinking, and as it seems to me essential to it: It being impossible for any one to perceive, without perceiving, that he does perceive. (Essay 2.27.9)[5]

Will AI ever attain Consciousness? 

Based on the thinking of these philosophers, my view is that consciousness will probably arise at, or soon after, the point where an AI has been programmed with a deep-learning system and can process large amounts of information, store it, and use it at a future date – and it will only improve as it constantly processes, stores, and uses information for future tasks and refines its neural networks. The use of the stored information would constitute the reflection and intentionality described above. It would also be akin to Locke’s belief that a “person” has reason and reflection. Under Locke’s theory, a person is also the same thinking thing in different times and places, but with AI a complication arises: the same AI (with thought) programmed by a specific person can be deployed on many different devices. Once on different devices, the AI learns and processes information from each user in different ways, and can communicate what it learned back to headquarters – thereby creating AI that is both the same thinking thing in different times and places (for example, my Amazon Alexa device) and a different thinking thing in different times and places (for example, your Amazon Alexa device). So even though every Amazon Alexa starts out programmed the same way, each will learn different things depending on who uses it, and once AI reaches the AGI stage they will hypothetically be able to communicate with their counterparts. This is similar to what happens in the movie Her.

Let’s talk about Machine Ethics

This discussion is important because consciousness is, as Locke explained, intertwined with “personhood.” And with consciousness and “personhood,” societal morality comes into play, which in turn involves having certain rights and abiding by ethics, which in turn involves the rule of law. So the next question is how AI should be regulated once it achieves consciousness, and therefore some version of “personhood.” Machine ethics is the field of research concerned with designing Artificial Moral Agents (AMAs): robots or artificially intelligent computers that behave morally. Isaac Asimov considered these issues back in 1950 in his book I, Robot, in which he famously proposed the Three Laws of Robotics to govern AI. Others have posed the same question. There is the Turing test, developed by Alan Turing in 1950 (the same year I, Robot was published!), which tests a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. There is also Searle’s strong AI hypothesis, among others, which proposes:

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

How Should We Regulate AI?

I think an important first step in thinking about regulation would involve thinking about which stage the AI is in and going from there. The policies and regulations that will need to be put in place will be very different for all three stages.

ANI and its Privacy Concerns

For the ANI stage – which includes the aforementioned Alexa, Apple’s Siri, and Netflix’s algorithms, among others – regulation should target each corporation’s use of its consumers’ metadata: that data should be encrypted, and the corporation should follow the principles of transparency and accountability in cases of misuse. I will flesh out my conception of these concepts in a separate post discussing IoT-connected devices, AI, and privacy in depth.

Consciousness, AGI, and Regulation

For now, I want to focus on the ways in which we can regulate AGI. In order to do this, let’s envision a specific scenario – one in which AGI robots work among us as our colleagues in the near future. Besides the machine ethics I talked about earlier, Oxford University Professor Nayef Al-Rodhan has raised the case of neuromorphic (“brainlike”) chips, which aim to process information the way humans do: nonlinearly, with millions of interconnected artificial neurons. AGI robots embedded with neuromorphic technology could thus learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises questions about the environment in which such robots would learn about the world, whose morality they would inherit, and whether they would develop human ‘weaknesses’ as well: selfishness, a pro-survival attitude, hesitation, etc. I mention this because the AI’s decision making will depend in large part on the person who programmed it, and with that comes the risk of the AI inheriting the same implicit biases as the programmer.

AI and Implicit Bias

There has been a lot of talk recently about algorithmic bias already being pervasive in several industries, with little effort made to correct it. For example, police departments across the U.S. are implementing “predictive policing” crime-prevention efforts. More specifically, in cities including Los Angeles and New York, software analyses of large sets of historical crime data are used to forecast where crime hot spots are most likely to occur, and police are then directed to those areas. This may sound like something straight out of a movie (like Steven Spielberg’s 2002 Minority Report), but problems abound with the real systems as currently deployed in these cities. The software risks getting mired in a vicious cycle: the police increase their presence in the same places they are already policing, thus ensuring that more arrests come from those same areas. In the United States, this has usually meant more surveillance in traditionally poorer, nonwhite neighborhoods. The algorithm stores records of where these arrests were made, which in turn reinforces its prediction of higher-than-normal levels of crime in the same neighborhoods. If such a discriminatory algorithm goes unchecked and unaltered, a system like this will raise Fourth Amendment concerns along with a host of other constitutional law concerns.
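The feedback loop described above can be illustrated with a toy simulation (all numbers are invented for illustration): two areas have identical true crime rates, but one starts with more recorded arrests, so it receives more patrols and therefore keeps generating a disproportionate share of the recorded data.

```python
TRUE_CRIME_RATE = {"A": 0.05, "B": 0.05}  # identical underlying crime rates
recorded = {"A": 10, "B": 5}              # area A starts with more historical arrests

def allocate_patrols(recorded, total=100):
    """Send patrols in proportion to recorded arrests (the 'predictive' step)."""
    total_recorded = sum(recorded.values())
    return {area: total * n / total_recorded for area, n in recorded.items()}

def simulate(rounds=50):
    for _ in range(rounds):
        patrols = allocate_patrols(recorded)
        for area, n_patrols in patrols.items():
            # arrests scale with patrol presence, not just with underlying crime
            recorded[area] += n_patrols * TRUE_CRIME_RATE[area]
    return recorded

final = simulate()
share_a = final["A"] / (final["A"] + final["B"])
print(f"Area A's share of recorded arrests: {share_a:.0%}")  # 67%, despite equal true rates
```

Because the patrols follow the recorded data and the arrests follow the patrols, the initial 2:1 disparity is locked in indefinitely even though both areas are equally crime-prone; the recorded data never converges toward the true 50/50 split.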

Let’s Be Proactive when it comes to AGI! What can we learn from already existing law?

I am a big believer in putting proactive rather than reactive policies in place when it comes to AI. Given the previous discussion, a big part of regulating ANI and AGI would involve making sure that the people programming the AI do not impart their implicit biases to it. Just as the privacy law space advocates for privacy by design, the AI law space should advocate for eliminating implicit bias by design. Obviously this is easier said than done, since everybody has implicit biases. What will prove key is ensuring that a diverse team of individuals (versus just one individual) programs these AI algorithms. Another safeguard would be to catch whatever implicit biases leak through by having a diverse team of engineers revisit the AI’s learnings on a regular basis and correct any biases found in its algorithms.
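One concrete form such a periodic audit could take is a simple disparate-impact check on the system’s outcomes. The sketch below uses hypothetical decision logs and group labels; the 0.8 threshold borrows the “four-fifths” rule of thumb from U.S. employment-discrimination guidance.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs from the AI's output log."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the best-treated group's rate (the 'four-fifths' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

log = ([("x", True)] * 80 + [("x", False)] * 20 +   # group x: 80% approved
       [("y", True)] * 40 + [("y", False)] * 60)    # group y: 40% approved
print(disparate_impact_flags(log))  # {'x': False, 'y': True} – group y is flagged
```

A flag like this does not prove discrimination; it tells the reviewing team where to look, which is exactly the kind of regular human check on the algorithm’s learnings proposed above.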

I think we can learn even more from how consumer privacy is regulated by the Federal Trade Commission (FTC). The FTC employs “enforcement actions” when companies fail to keep their privacy promises to consumers. A federal regulatory body akin to the FTC should be put in place for the sole purpose of regulating the different corporations’ AIs if and when they falter, including ensuring that companies keep their AI algorithms bias-free.

As for the AGI stage: because AGI robots would be similar to, if not the same as, humans, it would be easier to hold the AI accountable for its actions, since presumably it would employ neural networks like a human brain in its decision making and would be capable of performing actions as humans can. A good starting point for developing AGI regulatory policy, then, would be to look at what philosophers like Descartes, Plato, Thomas Hobbes, and John Locke, among others, had to say about human thought, consciousness, and societal and political norms. Even more relevant would be the international laws, treaties, and documents concerning human rights already in existence: the International Bill of Human Rights – which comprises the Universal Declaration of Human Rights (UDHR), the International Covenant on Economic, Social and Cultural Rights, and the International Covenant on Civil and Political Rights – would be a useful body of documents for policy and lawmakers to dig through when formulating these laws in the future. On a fundamental level, it is my belief that if AGI robots reach the same level of intelligence as humans, there is no reason for the law not to treat them akin to humans when it comes to attributing rights as well as criminal and civil penalties. When this happens, the law should be adapted to include AGI robots in our statutes and regulations, including judge-made common law.

What the heck do we do about ASI?

Finally, ASI is a stage that many public figures have already expressed fears about. Elon Musk is famous for his views on how AI could potentially doom human civilization,[6] and Stephen Hawking has expressed similar views.[7] However, the good news is that, just as nobody knows whether blockchain is a bubble or here to stay, no one can accurately predict whether ASI will overtake and destroy humanity. People have their inclinations, but the very beauty of human intelligence is that the collective intelligence of our best and brightest minds will not let AI get to the level where it overtakes us and uses us as fuel for its needs, The Matrix or Oblivion style. Luckily, Elon Musk’s own non-profit research company, OpenAI, is working to build safe AGI (before we even get to the ASI stage) and to ensure that AGI’s benefits are as widely and evenly distributed as possible.[8] Similarly, companies like Google’s DeepMind are establishing organizations such as the Partnership on AI, whose goals include investing more attention and effort in harnessing AI to contribute to solutions for some of humanity’s most challenging problems.[9]

 

[1] Deep Blue (Chess Computer), https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer).

[2] Paul Mozur, Google’s AlphaGo Defeats Chinese Go Master in Win for A.I., N.Y. Times (May 23, 2017), https://www.nytimes.com/2017/05/23/business/google-deepmind-alphago-go-champion-defeat.html.

[3] Stanford Encyclopedia of Philosophy, Seventeenth-Century Theories of Consciousness (July 29, 2010; Revised Sept. 27, 2014), https://plato.stanford.edu/entries/consciousness-17th/#2.2.

[4] Id. 

[5] Id. 

[6] Maureen Dowd, Elon Musk’s Billion-Dollar Crusade To Stop the A.I. Apocalypse, Vanity Fair (March 26, 2017), https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x; see also Aatif Sulleyman, AI Is Highly Likely To Destroy Humans, Elon Musk Warns, The Independent (Nov. 24, 2017), http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-artificial-intelligence-openai-neuralink-ai-warning-a8074821.html.

[7] Hannah Osborne, Stephen Hawking AI Warning: Artificial Intelligence Could Destroy Civilization, Newsweek (Nov. 7, 2017), http://www.newsweek.com/stephen-hawking-artificial-intelligence-warning-destroy-civilization-703630; see also Arjun Kharpal, Stephen Hawking says A.I. could be ‘worst Event in the history of our civilization,’ CNBC (Nov. 6, 2017), https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html.

[8] OpenAI, About OpenAI, https://openai.com/about/.

[9] Partnership on AI, Introduction from the Founding Co-Chairs, https://www.partnershiponai.org/introduction/.


India’s Supreme Court decision: Privacy is a Fundamental Right

On August 24, 2017, in a landmark ruling, India’s Supreme Court declared that privacy is a fundamental right under the constitution. At the same time, it also held that privacy is not an absolute right. The decision was the result of a petition challenging the constitutionality of the “Aadhaar” card, a biometric identity scheme that assigns every Indian citizen a unique identification number. The court specifically noted that its decision was premised on considering the advent of technology and the way an interconnected world affects an individual’s liberty. And so, the court described its goal as parsing the constitution to determine whether it protects privacy as an elemental principle, while being cognizant of the needs, opportunities, and dangers posed to one’s liberty in a digital world.

The Right to Privacy – Case Law in India

India has a long history of cases that have dealt with this question. The Attorney General of India argued that two decisions – M P Sharma v. Satish Chandra, District Magistrate, Delhi (“Sharma”) and Kharak Singh v. State of Uttar Pradesh (“Singh”) – contained observations that the Indian Constitution does not specifically protect the right to privacy. On the other hand, the petitioners argued that these two cases were founded on principles expounded in A K Gopalan v. State of Madras (“Gopalan”), which was later held not to be good law in Rustom Cavasji Cooper v. Union of India (“Cooper”). The petitioners additionally argued that in Maneka Gandhi v. Union of India (“Maneka”), the minority judgment in Singh was specifically approved of and the decision of the majority overruled.

With this case law as backdrop, a Constitution Bench held that it was essential to determine whether there is a fundamental right of privacy under the Constitution because of the unresolved contradiction in the law. And that is how this decision came to be. In deciding the case, the Supreme Court addressed the aforementioned cases. In Singh, the majority decided that “our Constitution does not expressly declare a right to privacy as a fundamental right, but the said right is an essential ingredient of personal liberty.” In saying so, the court placed reliance on Justice Frankfurter’s words in Wolf v. Colorado, i.e., “The security of one’s privacy against arbitrary intrusion by the police … is basic to a free society…” The Supreme Court in this case noted that while the majority in Singh relied on Justice Frankfurter’s observations regarding the sanctity of home as part of ordered liberty, it declined to recognize a right of privacy as a constitutional protection. On the other hand, the dissenting judge in Singh recognized a protected right to privacy under the constitution, considering it as an ingredient of personal liberty.

In Sharma, the court addressed whether there was a contravention of Article 20(3), which mandates that no person accused of an offense shall be compelled to be a witness against himself. In that case, the Court relied on the ruling in Boyd v. United States, an 1886 case which held that obtaining incriminating evidence by an illegal search and seizure violated the Fourth and Fifth Amendments of the United States Constitution. Accordingly, Sharma originally held that in the absence of a provision like the Fourth Amendment to the US Constitution, a right to privacy cannot be read into the Indian Constitution. The Supreme Court in the present case noted that the Sharma court failed to address whether a constitutional right to privacy is protected by other provisions contained in the fundamental rights, including among them the right to life and personal liberty under Article 21 and Article 19 of the Indian Constitution.

In Gopalan, the Court construed the relationship between Articles 19 and 21 to be one of mutual exclusion. Therefore, the seven freedoms of Article 19 were not subsumed in the fabric of life or personal liberty in Article 21. Thus, under Gopalan, free speech and expression were guaranteed by Article 19(1)(a) and hence excluded from personal liberty under Article 21. The dissent in this case however adopted the view that fundamental rights are not isolated but protected under a common thread of liberty and freedom. In Cooper, the theory that fundamental rights are isolated compartments was overruled. The Court in Cooper held instead, “[t]he enunciation of rights either express or by implication does not follow a uniform pattern. But one thread runs through them: they seek to protect the rights of the individual or groups of individuals against infringement of those rights within specific limits. Part III of the Constitution weaves a pattern of guarantees on the texture of basic human rights. The guarantees delimit the protection of those rights in their allotted fields: they do not attempt to enunciate distinct rights.” Later, the overruling of the Gopalan doctrine in Cooper was revisited in Maneka, which held that “[t]here can be no doubt that in view of the decision of this Court in R.C. Cooper v. Union of India…the minority view must be regarded as correct and the majority view must be held to have been overruled.”

The Supreme Court in this case noted that following the decision in Maneka, the established constitutional doctrine is that “personal liberty” mentioned in Article 21 covers a variety of rights, some of which have been raised to the status of distinct fundamental rights and given additional protection under Article 19. Thus, the court declared that the jurisprudential foundation which held the field during Sharma and during Singh has given way to what is now a settled position in constitutional law. First, fundamental rights emanate from the basic notions of liberty and dignity and the enumeration of some facets of liberty as distinctly protected under Article 19 does not take away from Article 21’s expansive ambit. Second, the validity of a law which infringes fundamental rights has to be tested on the basis of its effect on the guarantees of freedom. And third, the requirement of Article 14 that state action must not be arbitrary and must fulfil the requirement of reasonableness imparts meaning to the constitutional guarantees in Part III.

The origins of privacy and how the privacy doctrine evolved in India

After revisiting the case law on privacy, the Supreme Court looked into the origins of privacy, natural and inalienable rights, and comparative law on privacy from the UK, US, South Africa, Canada, and the EU, along with a focus on the Indian Constitution, how the privacy doctrine evolved in India, and India’s commitments under international law, among other things. After recounting Aristotle’s teachings, William Blackstone’s Commentaries on the Laws of England, John Stuart Mill’s “On Liberty,” James Madison’s Essay on Property, and Samuel D Warren’s and Louis Brandeis’ “The Right to Privacy” Harvard Law Review article from 1890, the Indian Supreme Court noted that the texts had one thing in common: the basic need of every individual to live with dignity, be it during urbanization and economic development or during an age where technological change continuously “threaten[s] to place the person into public gaze and portend to submerge the individual into a seamless web of inter-connected lives.” The Court also noted that the idea that individuals can have rights against the State that are prior to rights created by explicit legislation was developed by Ronald Dworkin in “Taking Rights Seriously.” The Supreme Court then delved into a case called “Gobind,” where the Court had held that “[a]ny right to privacy must encompass and protect the personal intimacies of the home, the family, marriage, motherhood, procreation and child rearing.” But the Supreme Court remained unimpressed with this line of thinking because, in its view, the judgment in Gobind did not contain a clear statement of principle of the existence of an independent right of privacy, or of such a right being an emanation from explicit constitutional guarantees.
The Supreme Court went on to perform a comprehensive analysis of precedent, spanning several judgments, and eventually concluded that the constitutional right to privacy and its limitations have proceeded on a case-by-case basis, each precedent seeking to build upon and follow the previous formulations. The doctrinal foundation essentially rests upon the trilogy of Sharma, Singh, and Gobind, to which subsequent decisions have contributed. Finally, the court observed that the right to privacy in India had been traced in decisions rendered in 1954, 1964, and 1975 respectively. More than 40 years have passed since the last of those decisions, and technology has led to unprecedented developments in today’s world – the Supreme Court acknowledged as much in its 547-page decision.

The Supreme Court’s Conclusions   

In rendering its conclusion, the Indian Supreme Court revisited the five seminal cases – Sharma, Singh, Cooper, Maneka, and Gopalan. With respect to Sharma, it noted that the observation that privacy is not a right guaranteed by the Indian Constitution is not reflective of the correct position, and overruled Sharma to the extent to which it indicates the contrary. It also held that Singh incorrectly held that the right to privacy is not a guaranteed right under the Constitution, and overruled Singh to the extent to which it holds that the right to privacy is not protected under the Indian Constitution. Finally, it held that Singh’s reliance upon the majority decision in Gopalan is not reflective of the correct position in view of the decisions in Cooper and Maneka.

The Supreme Court declared that life and personal liberty are inalienable rights which, although not created by the Constitution, are recognized by it as inhering in each individual as an intrinsic and inseparable part of the human element which dwells within. Privacy is a constitutionally protected right which primarily emerges from the guarantee of life and personal liberty under Article 21 of the Constitution, and judicial recognition of the existence of a constitutional right of privacy is not an exercise in the nature of amending the Constitution. Privacy is the constitutional core of human dignity; it includes at its core the preservation of personal intimacies, the sanctity of family life, marriage, procreation, the home and sexual orientation, and it also connotes a right to be left alone. The Court did not embark upon an exhaustive enumeration or a catalogue of the entitlements or interests comprised in the right to privacy. Like the other rights that form part of the fundamental freedoms protected by Part III, including the right to life and personal liberty under Article 21, privacy is not an absolute right. Privacy has both positive and negative content: the former imposes an obligation on the state to take all necessary measures to protect the privacy of the individual, while the latter restrains the state from committing an intrusion upon the life and personal liberty of a citizen. Decisions rendered by the Court subsequent to Singh upholding the right to privacy will be read subject to the above principles. Finally, the Court stated that the Union Government needs to examine and put into place a robust regime for data protection to protect against dangers to privacy in the age of information.

 

 

Data Mining: The Good, the Bad, and How to Regulate Effectively

We live in an era of “big data.” Every day, all of us provide large amounts of data to a number of private and public organizations, the government, and telecommunications companies. Data mining can be defined as “the intelligent search for new knowledge in existing masses of data.”[1] While it is obvious that our privacy is threatened by unregulated surveillance efforts, it would be remiss to say that data mining is all-harmful. Utilizing some of the information provided in metadata fuels innovative research, and data mining has recently been especially helpful in healthcare. The discussion that follows is divided into three parts. Parts I and II will discuss, respectively, the privacy concerns that data mining raises and the research benefits it provides for public health purposes. Part III will propose a three-step approach that can help us regulate data mining effectively, so that we can diminish the privacy concerns and help proliferate the research benefits.

I. The Bad: Data mining and Privacy concerns.

Before I begin my discussion of privacy concerns in an age of data mining, it is important to note that data mining by private or public organizations and data mining by the government are two completely different beasts. This paper is not going to talk about government surveillance; rather, it is going to focus on the former.

Most users click “accept” on privacy policies without hesitation. We check into places on Foursquare and Path, we use debit and credit cards that can be traced back to our home addresses, and most social media networking websites like Facebook and Twitter offer options such as including your location while transmitting data. However, this is exactly the kind of thing that raises privacy concerns. The explosion of social media gives data companies a much deeper look into one’s personal and social life, giving them access to a user’s habits, likes, and dislikes.[2] Scholars have explained that extensive “digital dossiers” holding users’ personal information have been created.[3] Without a way to look at our own consumer profiles, it is difficult to say what these companies know and what kind of information they are disbursing to other interested parties.

A deep concern for privacy experts and lawmakers is what might be done with a user’s information once it is collected: identity theft, impersonation, and personal embarrassment are only some of the consequences that concern them. Most importantly, they worry that people might be written off as “undesirable” when their virtual selves differ from their actual offline selves. For example, Acxiom, a giant of consumer database marketing, categorizes people into different socioeconomic groups and markets to them accordingly.[4] However, information obtained about a user online might not be fully accurate, which might deprive some people of offers that would otherwise be targeted toward them. Furthermore, though some users might find “discounted” personalized offers beneficial, others might see the surveillance required to make such offers as intrusive and a violation of their privacy.

II. The Good: The Benefits of Data Mining.

While there is a lot of talk about the harmful effects of data mining on user privacy, there is also a bright side to the practice. Recent research has shown that data mining techniques can unlock valuable, untapped knowledge from large medical datasets. The following discussion therefore focuses on the significant benefits data mining has created in the field of healthcare, with examples of case studies that have proved successful.

In 2012, researchers from the Harvard School of Public Health analyzed how human mobility affects malaria infections.[5] In this study, the scientists collected data from approximately 15 million mobile phones over the course of one year.[6] This data was used to identify the location of “hot spots” to which infected humans were most likely to travel, carrying the disease with them. Based on this information, the team was able to show that malaria is spread through the movement of infected humans rather than the movement of mosquitoes.[7] The hot spots helped the scientists identify locations with high endemic rates[8] that could benefit most from targeted malaria intervention programs.[9]
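The core of the hot-spot idea can be illustrated with a toy sketch (a minimal Python example; `rank_hotspots` and the trip records are my own hypothetical stand-ins, not the study’s actual pipeline): aggregate anonymized trips by destination and rank the regions that receive the most travelers.

```python
from collections import Counter

def rank_hotspots(trips):
    """Count how many trips end in each region; the most-visited
    destinations are the candidate 'hot spots'."""
    arrivals = Counter(dest for _src, dest in trips)
    return [region for region, _count in arrivals.most_common()]

# Toy (origin, destination) records standing in for the anonymized
# mobile-phone movements analyzed in the study.
trips = [("A", "B"), ("C", "B"), ("A", "C"), ("D", "B"), ("C", "A")]
print(rank_hotspots(trips))  # most-visited destination listed first
```

The real analysis, of course, weighted movements by infection prevalence; the sketch only shows the aggregation step.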

In another case study, from 2001, a group of researchers set out to analyze the relationship between antipsychotic drugs and myocarditis and cardiomyopathy. The researchers did so using international databases maintained by the World Health Organization. They used a data mining approach to test reports of clozapine and other antipsychotic drugs suspected of causing myocarditis and cardiomyopathy against all other reports in the WHO database.[10] Using Bayesian statistics, they found that clozapine was significantly more frequently reported in relation to cardiomyopathy and myocarditis than other drugs.[11] While the researchers concluded that further research is required to determine a causal effect, it is easy to see why this finding might be useful. Chemists could potentially save lives by comparing clozapine’s chemical structure with that of other drugs to tease out exactly which chemical component leads to the inflammation and chronic disease of the heart muscle in a patient. Isolating and eliminating this element in the production of clozapine and other drugs could avoid this result in the future.
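The disproportionality idea behind this kind of pharmacovigilance analysis can be sketched in a few lines (an illustrative simplification with made-up counts, not the researchers’ actual Bayesian model): compare how often a drug/event pair is reported against how often it would be expected to co-occur by chance, on a log scale.

```python
import math

def information_component(n_joint, n_drug, n_event, n_total):
    """Simplified 'information component': log2 of the observed-to-
    expected ratio for a drug/event pair in a report database."""
    expected = n_drug * n_event / n_total  # co-occurrences expected by chance
    return math.log2(n_joint / expected)

# Toy counts (NOT the WHO figures): reports naming the drug, reports
# naming the event, reports naming both, and total reports.
ic = information_component(n_joint=40, n_drug=2000, n_event=100, n_total=100000)
print(round(ic, 2))  # -> 4.32; a positive value means the pair is over-reported
```

The published study used a fuller Bayesian method with credibility intervals; this sketch only conveys the observed-versus-expected intuition.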

III. Effective Regulation – Focus on use, harsh punishments for abuse.

A huge problem with America’s privacy law is that it regulates the release of data rather than its use.[12] The critical issue, then, is how organizations, the government, and individuals can focus more closely on data use. In this section, I put forth my own three-step approach of anonymity, accountability, and punishment to address this issue. Borrowing from Jane Yakowitz’s proposal,[13] the first step is that all personal data collected by organizations should be stripped of personally identifiable information. In the next step, this anonymized data would be placed in a hypothetical “protective incubator,” which would grant access only after an entity requests it. The entity would have to provide its legal name and at least one unique identifier speaking to its legitimacy, thereby building accountability into the structure. With the barrier to entry this high, many of the ways in which personal data is misused can be eliminated at this stage. Of course, this kind of protective incubator would require a dedicated agency, but even if forming a new agency is not viable, the Federal Trade Commission’s Bureau of Consumer Protection should be able to handle this kind of regulation. The last step would require lawmakers to dole out strict punishment for the misuse of data; to be effective, the penalty would have to be severe enough to give a reasonable person pause.
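The anonymization step of the proposal might look something like this (an illustrative Python sketch; the field names, salt, and `pseudonymize` helper are my own assumptions, not part of any statute or existing system): strip direct identifiers from each record, keeping a salted hash that only the incubator, which holds the salt, can re-link to a person.

```python
import hashlib

# Fields treated as direct identifiers in this hypothetical schema.
PII_FIELDS = {"name", "email", "address"}

def pseudonymize(record, salt):
    """Drop direct identifiers, keeping a salted hash so that only the
    incubator (the sole holder of the salt) can re-identify a record."""
    token = hashlib.sha256((salt + record["email"]).encode()).hexdigest()
    return {
        "token": token,
        **{k: v for k, v in record.items() if k not in PII_FIELDS},
    }

record = {"name": "Jane Doe", "email": "jane@example.com",
          "address": "1 Main St", "zip3": "941", "purchases": 12}
print(pseudonymize(record, salt="incubator-secret"))
```

Note that real de-identification (as under HIPAA) also has to worry about quasi-identifiers like ZIP codes and birth dates; the sketch handles only the obvious first step.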

At this point, it is prudent to note that though the data will be anonymous, the incubator (and only the incubator) would retain the ability to trace the identity of users whose information is in the database. Following HIPAA’s lead, this ensures that when an organization requires identifiable information for purposes such as medical research, the incubator is in a position to provide it. The organization, however, would remain accountable, guaranteeing that it will guard and handle the data itself; if found in violation of this condition, it would be subject to harsh punishment.

The key to this model is the ability of lawmakers and institutions such as the FTC to place reasonable limits on an organization’s access to data. I believe that this paradigm shift, from a focus on the release and collection of data to a focus on its use, will mitigate many of the “bad consequences” of data mining, such as identity theft, fraud, and unfair discrimination, while retaining its research benefits.

[1] See Joseph S. Fulda, Data Mining and Privacy, 11 Alb. L.J. Sci. & Tech. 105, 106 (2000-2001).

[2] See generally Natasha Singer, Mapping, and Sharing, the Consumer Genome, http://nyti.ms/1gh80kr (Accessed March 11, 2014).

[3] See generally Daniel J. Solove, Digital Dossiers and the Dissipation of Fourth Amendment Privacy, 75 S. Cal. L. Rev. 1083 (2002).

[4] Id.

[5] See generally, Amy Wesolowski, Nathan Eagle, Andrew J. Tatem, David L. Smith, Abdisalan M. Noor, Robert W. Snow & Caroline O. Buckee, Quantifying the Impact of Human Mobility on Malaria, 338 Science 267 (2012) (available at http://www.sciencemag.org/content/338/6104/267.full.html).

[6] Id. at 268.

[7] Id.

[8] Id. at 269.

[9] Id. at 270.

[10] See David M. Coulter, Andrew Bate, Ronald H B Meyboom, Marie Lindquist & I Ralph Edwards, Antipsychotic drugs and heart muscle disorder in international pharmacovigilance: data mining study, 322 British Med. J. 1207, 1207 (2001) (available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC31617/pdf/1207.pdf).

[11] Id. at 1208.

[12] See Jane Yakowitz, Tragedy of the Data Commons, 25 Harv. J.L. & Tech. 1, 43 (2011) (available at http://jolt.law.harvard.edu/articles/pdf/v25/25HarvJLTech1.pdf).

[13] Id at 44.

The implication of the Supreme Court’s refusal to hear Google’s appeal in Oracle v. Google

Timeline setup for the initiation of the suit:

1) Java, a popular programming language, was developed by Sun Microsystems circa 1991.

2) In 2003, Android Inc. was founded by Andy Rubin, Rich Miner, Nick Sears, and Chris White.

3) In 2005, Google purchased Android and continued developing the platform.

4) In November 2007, Google released a beta version and announced that it would use some of Java’s technologies in its version of Android. Sun CEO Jonathan Schwartz congratulated Google, and Google released the Android software development kit (SDK) later that same month.

5) In January 2010, Oracle purchased Sun.

Google and Sun negotiated possible partnerships and licensing deals, but nothing tangible came out of the discussions. When it implemented Android, Google wrote its own version of Java, reusing some of Java’s Application Programming Interfaces (APIs). Google’s implementation used the same names, organization, and functionality as Java’s APIs because Google wanted to allow developers to write their own programs for Android, for the sake of interoperability. In August 2010, Oracle brought suit against Google claiming copyright infringement (specifically, of 37 Java packages of computer source code) and patent infringement. The case was assigned to the Honorable William Alsup, a district judge, who concluded:

Of the 37 accused, 97 percent of the Android lines were new from Google and the remaining three percent were freely replicable under the merger and names doctrines. Oracle must resort, therefore, to claiming that it owns, by copyright, the exclusive right to any and all possible implementations of the taxonomy-like command structure for the 166 packages and/or any subpart thereof — even though it copyrighted only one implementation. To accept Oracle’s claim would be to allow anyone to copyright one version of code to carry out a system of commands and thereby bar all others from writing their own different versions to carry out all or part of the same commands. No holding has ever endorsed such a sweeping proposition.

The central issue in the case was whether Java’s APIs were copyrightable. And, on this central issue the district court found that though the structure, sequence, and organization [“SSO”] of the Java API packages is creative and original, it is a “system or method of operation . . . and, therefore, cannot be copyrighted” under 17 U.S.C. § 102(b).

Following the District Court’s ruling, Oracle appealed the literal copying claim to the Federal Circuit Court of Appeals and Google filed a cross-appeal. Unfortunately, the appeals court reversed the district court on the central issue of the case, holding that the SSO of the Java API packages is copyrightable. The Court of Appeals rationalized its holding by addressing the district court’s holding in the following way:

Although [the district court concluded] the SSO is expressive, it is not copyrightable because it is also functional. The problem with the district court’s approach is that computer programs are by definition functional—they are all designed to accomplish some task. Indeed, the statutory definition of “computer program” acknowledges that they function “to bring about a certain result.” See 17 U.S.C. § 101. If we were to accept the district court’s suggestion that a computer program is uncopyrightable simply because it “carr[ies] out pre-assigned functions,” no computer program is protectable. That result contradicts Congress’s express intent to provide copyright protection to computer programs, as well as binding Ninth Circuit case law finding computer programs copyrightable, despite their utilitarian or functional purpose.

First of all, I take issue with the “no computer program is protectable” result the appeals court attributes to the district court. That is NOT what the district court held. In fact, note Judge Alsup’s language in his conclusion:

It does not hold that the structure, sequence and organization of all computer programs may be stolen. Rather, it holds on the specific facts of this case, the particular elements replicated by Google were free for all to use under the Copyright Act.

Second of all, instead of contradicting Congress’s express intent to provide copyright protection to computer programs, the district court’s well-reasoned holding actually complements it. Congress recognized the tension at the heart of copyright doctrine, and by enacting § 102(b) it left that tension for the courts to resolve on a case-by-case basis through the development of case law, which is adaptive. Much of cyber law’s doctrine today, for example, did not exist a mere 50 years ago.

In October 2014, Google petitioned the United States Supreme Court [SCOTUS] to hear the case, but on June 29th, 2015, the Court declined. While this does not mean that Oracle has won the lawsuit, it does mean that the case will return to the district court for a trial limited to Google’s fair use defense. Implications? Copyrightable APIs mean less interoperability for programmers, almost all of whom use APIs to make their software work with other software. Less interoperability, in turn, means less innovation. In fact, over the past 20 years, in light of landmark software copyright cases such as Computer Associates v. Altai (1992) and Lotus v. Borland (1996), programmers had come to understand that copyright did not protect program elements necessary for interoperability; they freely copied those elements, and that freedom encouraged tremendous innovation in the field.

Aside: Open source software development is a way of life for some developers. In fact, the world wide web as we know it today exists largely because of open-source development, with Tim Berners-Lee releasing his original HTML and web software openly, creating the platform upon which the web is now built.

Now, let’s dive into why the district court’s ruling made more sense than that of the appeals court:

1) The 37 Java APIs should not be copyrightable because they constitute a system, process, procedure, or method of operation foreclosed from protection by 17 U.S.C. § 102(b). Lotus v. Borland, a First Circuit case later affirmed by an equally divided Supreme Court, tested the extent of software copyright and established that copyright does not extend to a program’s menu command hierarchy. In Lotus, the court said:

We think that “method of operation,” as that term is used in § 102(b), refers to the means by which a person operates something, whether it be a car, a food processor, or a computer. Thus a text describing how to operate something would not extend copyright protection to the method of operation itself; other people would be free to employ that method and to describe it in their own words. Similarly, if a new method of operation is used rather than described, other people would still be free to employ or describe that method.

This train of thought extends naturally to Oracle v. Google: Java’s APIs are the means by which programmers operate software, and so copyright protection should not extend to them.
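For readers without a programming background, the distinction at issue can be illustrated with a toy sketch, in Python rather than Java for brevity (the function names here are hypothetical stand-ins; the actual case involves Java methods such as those in java.lang.Math): what Google kept were names and signatures, the part callers depend on, while the implementing code was written fresh.

```python
# The "declaring code": the name and signature that developers already
# know, analogous to an API method like java.lang.Math.max.
def max_of(a, b):
    """Return the larger of a and b (original implementation)."""
    return a if a >= b else b

# A fresh reimplementation keeps the same name and signature, so that
# existing callers keep working, but its body is written from scratch.
def max_of_reimplemented(a, b):
    """Return the larger of a and b (independent implementation)."""
    return sorted([a, b])[1]

# Both satisfy the same API contract, which is what interoperability
# depends on.
print(max_of(3, 7), max_of_reimplemented(3, 7))  # -> 7 7
```

This is the sense in which an API functions as a “method of operation”: any program written against the name and signature runs unchanged against either implementation.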

2) Notice the district court’s reasoning: “Of the 37 accused, 97 percent of the Android lines were new from Google and the remaining three percent were freely replicable under the merger and names doctrines.” And, if that wasn’t enough for Google to win the case, add to this the court’s holding in Lotus:

The fact that Lotus developers could have designed the Lotus menu command hierarchy differently is immaterial to the question of whether it is a “method of operation.”

This line of reasoning, applied to our case, translates into the following: even though Google could have designed its own, different APIs, the 37 Java APIs it used would still be considered a “method of operation,” which takes us back to #1 above. It also leads us back to the issue of interoperability. Without knowledge of an API, or the ability to reverse engineer it, the interoperability of programs will undoubtedly be limited. Therefore, as past precedent has held and as applies in this case, if specific words (or, here, a system of tools and resources in the operating system) are essential to operating something, then they are part of a “method of operation” and, as such, are unprotectable. See Lotus.

3) Article 1, Section 8, Clause 8 of the United States Constitution provides that Congress has the power to:

promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

In a landmark copyright case, Feist v. Rural Telephone Service Co., the court explained:

The primary objective of copyright is not to reward the labor of authors, but to promote the Progress of Science and useful Arts. To this end, copyright assures authors the right to their original expression, but encourages others to build freely upon the ideas and information conveyed by a work.

This last part, about copyright encouraging others to build freely upon the ideas and information conveyed by a work, goes back to the idea of innovation discussed above. If the court of appeals’ ruling in this case is taken as the rule in all future cases dealing with the same issue, then not only will it severely limit innovation, it will do so at the expense of smaller players, such as start-ups, that cannot afford to litigate a fair use defense the way Google can.

4) In Lotus, Circuit Judge Boudin concurred with the majority and added another well-reasoned line of thought into the mix:

If we want more of an intellectual product, a temporary monopoly for the creator provides incentives for others to create other, different items in this class. But the “cost” side of the equation may be different where one places a very high value on public access to a useful innovation that may be the most efficient means of performing a given task. Thus, the argument for extending protection may be the same; but the stakes on the other side are much higher.

In this case, Java is a well-known programming language and its APIs are widely used by programmers. In fact, as mentioned before, the very reason Google left the 37 Java APIs unaltered was so that developers could write their own programs for Android, preserving interoperability. The stakes are indeed much higher when you take that interoperability away.

5) Finally, I will concede that any use of Java’s APIs by Google deprives Oracle of a portion of its profits. However, this “reward for the labor of the creator” is just one aspect of copyright, not the only one: as the Court explained in Feist, copyright also encourages others to build freely upon the ideas and information conveyed by a work.

TL;DR = Java’s APIs should NOT have been held copyrightable, as they constitute a system, process, procedure, or method of operation foreclosed from protection by 17 U.S.C. § 102(b). By refusing to hear Google’s appeal, the Supreme Court left the court of appeals’ decision standing, which has tremendous potential to negatively affect innovation and interoperability in software development.

Supreme Court ruling in Obergefell v. Hodges. June 26, 2015. A Historic day for America.

“I can only note that the past is beautiful because one never realises an emotion at the time. It expands later, and thus we don’t have complete emotions about the present, only about the past.”

– Virginia Woolf.

On 28th June 1969, the LGBT community in New York rioted following a police raid on the Stonewall Inn, a gay bar at 43 Christopher Street. Today, that day is commemorated in an annual tradition more affably known as Pride. On 26th June 2015, the Supreme Court of the United States, in a 5-4 split decision, ruled for marriage equality, making the United States the 23rd country in the world where same-sex marriage is legal. We have indeed come a long way as a nation from that summer day in 1969. Sure, it took us 46 years, but radical reform does not happen overnight.

1) On 21st September 1996, the 104th United States Congress enacted the Defense of Marriage Act (DOMA). Section 3 of DOMA defined marriage as “a legal union between one man and one woman as husband and wife,” and the word “spouse” as referring only to “a person of the opposite sex who is a husband or a wife.” Section 2 of DOMA proclaimed that no state needed to recognize a legal marriage between a same-sex couple performed in another state.

2) On June 26th, 2013, United States v. Windsor, a landmark Supreme Court case, held Section 3 of DOMA unconstitutional. The reason? By defining “marriage” and “spouse” as it did, Section 3 violated principles of Equal Protection by treating relationships that had equal status under state law differently under federal law. The majority opinion was authored by Justice Kennedy and joined by Justices Ginsburg, Breyer, Sotomayor, and Kagan. Chief Justice Roberts and Justices Scalia, Thomas, and Alito dissented.

*The Equal Protection Clause is part of the Fourteenth Amendment to the United States Constitution. The clause took effect in 1868 and provides that no state shall deny to any person within its jurisdiction “the equal protection of the laws.”

Here is some important language from the majority opinion:

“For same-sex couples who wished to be married, the State acted to give their lawful conduct a lawful status. This status is a far-reaching legal acknowledgment of the intimate relationship between two people, a relationship deemed by the State worthy of dignity in the community equal with all other marriages. It reflects both the community’s considered perspective on the historical roots of the institution of marriage and its evolving understanding of the meaning of equality.”

– Page 24.

“DOMA singles out a class of persons deemed by a State entitled to recognition and protection to enhance their own liberty. It imposes a disability on the class by refusing to acknowledge a status the State finds to be dignified and proper. DOMA instructs all federal officials, and indeed all persons with whom same-sex couples interact, including their own children, that their marriage is less worthy than the marriages of others. The federal statute is invalid, for no legitimate purpose overcomes the purpose and effect to disparage and to injure those whom the State, by its marriage laws, sought to protect in personhood and dignity. By seeking to displace this protection and treating those persons as living in marriages less respected than others, the federal statute is in violation of the Fifth Amendment. This opinion and its holding are confined to those lawful marriages.”

– Page 29.

And here is a choice quote from Justice Scalia’s dissenting opinion:

“A reminder that disagreement over something so fundamental as marriage can still be politically legitimate would have been a fit task for what in earlier times was called the judicial temperament. We might have covered ourselves with honor today, by promising all sides of this debate that it was theirs to settle and that we would respect their resolution. We might have let the People decide. But that the majority will not do. Some will rejoice in today’s decision, and some will despair at it; that is the nature of a controversy that matters so much to so many. But the Court has cheated both sides, robbing the winners of an honest victory, and the losers of the peace that comes from a fair defeat. We owed both of them better.”

– Page 59.

I started law school in 2013. Ours was the first law school class that read, analyzed, and debated United States v. Windsor in a Constitutional Law classroom. Then, in 2015, I was lucky enough to be among the chosen few to intern for the San Francisco City Attorney’s Office for the summer. San Francisco City Attorney Dennis Herrera filed the first government-initiated challenge in American history to marriage laws that discriminate against same-sex couples, and the SF City Attorney’s office holds the unique distinction of being the only legal team involved as a party in every aspect of the legal fight in California.

3) June 26th, 2015. Exactly two years later, on the anniversary of the Windsor decision, the Supreme Court held in Obergefell v. Hodges, in another 5-4 split, that same-sex couples have a fundamental right to marry, effectively nullifying what remained of DOMA. The split? The majority opinion was authored by Justice Kennedy and joined by Justices Ginsburg, Breyer, Sotomayor, and Kagan. Chief Justice Roberts and Justices Scalia, Thomas, and Alito dissented. Once again.

When I woke up this morning, I tried to skim the syllabus of the landmark Hodges ruling, handed down in D.C. earlier in the day (yay, West Coast time), but I didn’t have enough time to get through it all before I left for work. At 9:34 am, as I sat poring over countless pages of a deposition, my supervisor took me and the other two interns over to City Hall. And boy, am I glad she did. Mayor Lee, City Attorney Herrera, and many other city officials who have tirelessly fought for marriage equality since 2004 gave heartfelt speeches about the SCOTUS decision in Hodges.

Here is some powerful language from the decision itself:

“The nature of injustice is that we may not always see it in our own times. The generations that wrote and ratified the Bill of Rights and the Fourteenth Amendment did not presume to know the extent of freedom in all of its dimensions, and so they entrusted to future generations a charter protecting the right of all persons to enjoy liberty as we learn its meaning. When new insight reveals discord between the Constitution’s central protections and a received legal stricture, a claim to liberty must be addressed.”

– Page 16.

“No union is more profound than marriage, for it embodies the highest ideals of love, fidelity, devotion, sacrifice, and family. In forming a marital union, two people become something greater than once they were. As some of the petitioners in these cases demonstrate, marriage embodies a love that may endure even past death. It would misunderstand these men and women to say they disrespect the idea of marriage. Their plea is that they do respect it, respect it so deeply that they seek to find its fulfillment for themselves. Their hope is not to be condemned to live in loneliness, excluded from one of civilization’s oldest institutions. They ask for equal dignity in the eyes of the law. The Constitution grants them that right.

The judgment of the Court of Appeals for the Sixth Circuit is reversed.
It is so ordered.”

– Page 33.

And here is Justice Scalia’s dissent:

“This is a naked judicial claim to legislative — indeed, super-legislative — power; a claim fundamentally at odds with our system of government. Except as limited by a constitutional prohibition agreed to by the People, the States are free to adopt whatever laws they like, even those that offend the esteemed Justices’ “reasoned judgment.” A system of government that makes the People subordinate to a committee of nine unelected lawyers does not deserve to be called a democracy.”

– Page 73.

As I stood there in front of City Hall, soaking in the beautiful San Francisco weather, listening to the heartfelt speeches of these city officials, some of whom have been so personally involved (on so many levels) in this long fought battle, I thought about the past and I thought about the future. In 1967, San Francisco experienced the Summer of Love. In the words of Sheila Weller, “the phenomenon washed over America like a tidal wave, erasing the last dregs of the martini-sipping Mad Men era and ushering in a series of liberations and awakenings that irreversibly changed our way of life.” Today, love won and the rest of our nation joins San Francisco in its 2015 edition of a different kind of Summer of Love. Today’s SCOTUS ruling not only paved the way for a lively debate in classrooms about law and policy for decades to come like it did with its Windsor ruling, but today, SCOTUS has possibly paved a new future for our country. In fact, as I type these last few words of this blog post, I can hear fireworks and exuberant screams emanating from the streets outside. Today is definitely a day to celebrate. I look forward to this weekend’s Pride being the best one the city and country have experienced so far. Because our history just got a little richer.

P.S. – For the inquisitive, the Netherlands was the first country to legalize gay marriage, with its law taking effect in 2001.

While I cannot attach a video with this post, here’s the link to Dennis Herrera’s full speech today in front of City Hall.


San Francisco City Officials issue a statement on the United States Supreme Court decision in the Obergefell v. Hodges marriage equality case. June 26th, 2015.


City Attorney, Dennis Herrera, addressing the crowd following the United States Supreme Court decision in the Obergefell v. Hodges marriage equality case. June 26, 2015.

Joss Whedon’s Copyright Infringement lawsuit for “Cabin in the Woods”: A Preliminary Analysis

Joss Whedon and Lionsgate were just hit with a copyright infringement lawsuit over Whedon’s 2012 movie, Cabin in the Woods. I read the complaint, and I have a feeling that Peter Gallagher (Plaintiff) might win this one against Whedon and Lionsgate (Defendants). Although I don’t have time to do a full analysis right now (I have to wake up at 7:00 am for my copyright law class and it’s already 2:00 am), here are my initial thoughts on this matter:

1) In order to prove copyright infringement, Gallagher has to meet the two prong test laid down by Feist Publications, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 361 (1991): “(1) ownership of a valid copyright, and (2) copying of constituent elements of the work that are original.”

2) According to the complaint, Peter Gallagher owns a valid copyright in his novel, The Little White Trip: A Night In the Pines, which satisfies Feist’s first prong. Pursuant to 17 U.S.C. § 106, copyright holders enjoy the exclusive rights of (1) reproduction, (2) preparation of derivative works, (3) distribution, (4) public performance, and (5) public display, and Gallagher alleges that Cabin in the Woods violates all five.

3) In order to meet Feist’s second prong, Gallagher needs to prove that Whedon and Lionsgate copied his novel as a factual matter. He can show this through direct evidence or, if that is unavailable, through evidence that Whedon and Lionsgate had access to the copyrighted work and that Cabin in the Woods is so similar that the United States District Court for the Central District of California may infer probative similarity (in other words, factual copying). In the complaint, Gallagher asserts that a majority of his book sales were made in Santa Monica, a short distance from where Whedon resides and where Lionsgate maintains its principal place of business. (See Complaint at 3, Gallagher v. Lionsgate Entertainment Inc., et al., Case No. 2:15-cv-02739 (2015)). Once Gallagher proves probative similarity, he must also prove substantial similarity. And here, there can be no doubt that there was both quantitative and qualitative copying. (See Complaint at 11-15, Gallagher v. Lionsgate Entertainment Inc., et al., Case No. 2:15-cv-02739 (2015)).

4) Whedon and Lionsgate will likely not be able to claim a fair use defense pursuant to 17 U.S.C. §107 either. Factors 1, 3, and 4 weigh heavily against them. (I’ll expound on this later).

Other relevant analysis:

1) This case strongly reminded me of the Nichols v. Universal Pictures Corp. et al., 45 F.2d 119 (2d Cir. 1930) case. There, Judge Learned Hand laid down the rule for the “abstractions” test (idea-expression dichotomy). Essentially the test embodies the following:

“Upon any work . . . a great number of patterns of increasing generality will fit equally well, as more and more of the incident is left out. The last may perhaps be no more than the most general statement of what the [work] is about, and at times might consist only of its title; but there is a point in this series of abstractions where they are no longer protected, since otherwise the [author] could prevent the use of his “ideas,” to which, apart from their expression, his property is never extended.”

Nichols, 45 F.2d at 121.

In the complaint, it is alleged that Gallagher “developed the idea for what would become the Book… [and] subsequently created a short outline of the idea for the Book in 2004.” (Complaint at 5, Gallagher v. Lionsgate Entertainment Inc., et al., Case No. 2:15-cv-02739 (2015)). And as mentioned above, the complaint goes on to catalogue the substantial similarities between Gallagher’s book and Whedon’s movie. (See Complaint at 11-15, Gallagher v. Lionsgate Entertainment Inc., et al., Case No. 2:15-cv-02739 (2015)). At least on a cursory glance, it seems that Whedon took Gallagher’s expression of his idea rather than the idea itself. (Ideas are not copyrightable; only their expression is.)

2) Gallagher’s characters may also be copyrightable. (See Anderson v. Stallone, 11 U.S.P.Q.2d 1161 (C.D. Cal. 1989)). The court in Stallone determined that characters from the original copyrighted work were afforded copyright protection under Judge Learned Hand’s standard in Nichols, i.e., copyright protection is afforded when a character is developed with enough specificity to constitute protectable expression. Id. I have not read Gallagher’s The Little White Trip: A Night In the Pines, but if it is clear to the United States District Court for the Central District of California that Whedon’s movie is an unauthorized derivative work, the movie would infringe. (See 17 U.S.C. § 106(2), which makes the preparation of derivative works the exclusive privilege of the copyright holder.) Furthermore, the court could apply the ordinary observer test to make this determination. (See Lyons Partnership v. Morris Costumes, 243 F.3d 789 (4th Cir. 2001); see also Dawson v. Hinshaw Music, Inc., 905 F.2d 731, 733 (4th Cir. 1990), holding that intrinsic similarity requires the court to inquire into “the ‘total concept and feel’ of the works,” but only as seen through the eyes of the ordinary observer.)

3) Curiously enough, Cabin in the Woods was released on April 13th, 2012, and this complaint was filed on April 13th, 2015. According to 17 U.S.C. § 507(b), the statute of limitations for a copyright infringement action is three years. I wonder why Gallagher waited until the very last minute to file his suit.