Why There’s a New Spider-Man or X-Men Movie Every Few Years

Instead of major legal issues, let’s look at something fun this Friday.  Many years ago, Marvel Comics decided to start licensing its properties out to film companies looking to turn those characters into movie franchises.  Marvel was going through a rough period financially, and the licensing deals provided a much-needed influx of cash.  As a result, the film-going public got treated to 20th Century Fox’s version of the X-Men and Sony’s take on Spider-Man.  Fans also got to endure inferior versions of other franchises, such as the Fantastic Four and Daredevil.

At some point during the 2000s, Marvel decided to make their own comic book movies (starting with the excellent Iron Man movie).  During this time, both the Spider-Man and X-Men franchises received reboots.  These reboots received quite a bit of mockery from fans, since in both cases there wasn’t a very significant gap in time between the reboot and one of the previous movies.  In Spider-Man’s case, Spider-Man 3 came out in 2007 with The Amazing Spider-Man (the reboot) coming out in 2012.  That is only a five-year difference.  That five-year window, however, turns out to be relatively important.

The terms of Sony’s agreement with Marvel/Disney are not public knowledge (I would appreciate the information if anyone has access to it, though this message board post tries to deduce the timelines: http://forums.superherohype.com/showthread.php?t=452113), but insiders believe them to be the following: Sony keeps the rights to Spider-Man on the condition that it releases at least one new Spider-Man movie every five years and has a Spider-Man movie in production every two years.  If Sony ever failed to do so, the rights to Spider-Man would revert to Marvel.  Now, if you look closely at the release dates for Spider-Man 3 (May 4, 2007) and The Amazing Spider-Man (July 3, 2012), you’ll notice that Sony technically missed the deadline.  Well, Sony cut a special deal for that extension (http://comics.cosmicbooknews.com/content/disney-buys-amazing-spider-man-merchandise-rights-sony-keeps-movie-deal): Disney got the merchandising rights back from Sony in exchange for extending the deadline.  Sony needed the extra time after Sam Raimi’s Spider-Man 4 failed to materialize.
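Assuming the rumored clause runs five calendar years from the previous film’s release date (an assumption on my part, since the actual contract language is not public), the arithmetic works out like this:

```python
from datetime import date

# Publicly reported theatrical release dates (per the discussion above).
spider_man_3 = date(2007, 5, 4)
amazing_spider_man = date(2012, 7, 3)

# Under the assumed clause, the deadline is five years from the prior release.
deadline = spider_man_3.replace(year=spider_man_3.year + 5)  # 2012-05-04

# The reboot arrived after that deadline had already passed.
days_late = (amazing_spider_man - deadline).days
print(deadline, days_late)  # 2012-05-04 60
```

Roughly a two-month miss, which is consistent with Sony needing to negotiate the extension described above rather than simply running out the clock.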

The Fantastic Four illustrates the other path studios can take with these deals.  In 1994, Roger Corman made a cheap Fantastic Four movie (many believe the budget to have been only around $1.5 million).  This movie was never shown publicly, and appears to have been made solely to satisfy the licensing deal with Marvel (http://movies.yahoo.com/blogs/movie-news/the-worst-superhero-movie-you-never-saw-200908507.html).  There were even a number of references to this bit of film trivia in the most recent season of Arrested Development.  It is possible that Sony wanted to avoid going down this path with Spider-Man.

The effect of these licensing deals is that Marvel characters licensed out before Marvel decided to make movies are likely going to remain with their current studios.  X-Men and Spider-Man have been fantastically profitable for Fox and Sony respectively, and show no sign of slowing down.  It is in these studios’ interest to release new movies in these franchises frequently, which ensures that these characters will likely never go back to Marvel.  In fact, Sony intends to try to apply Disney’s Avengers strategy to Spider-Man (http://www.avclub.com/article/sony-officially-announces-plans-to-turn-spiderman-106478).  Between the licensing requirements and potential profits, we will likely continue to see these movies on an almost annual basis.

 

Data Breach Lawsuit Followup

In light of the Department of Justice’s decision to look further into the Target data breach (http://thehill.com/blogs/hillicon-valley/technology/196830-holder-doj-investigating-target-breach), this seemed like a relevant topic to continue discussing.  One problem any data breach case has is proving sufficient damages to receive an actual hearing from the court.  A famous court case from last year, Clapper v. Amnesty International (http://www.datasecuritylawjournal.com/files/2013/12/Clapper-USSCt.pdf), limited what plaintiffs could claim as an injury.  Clapper involved various civil rights advocacy organizations suing the NSA for privacy violations.  The Supreme Court opted to dismiss the case because the parties could not prove cognizable harm due to a lack of provable instances of spying by the NSA (it should be noted that this court case was decided before Snowden made such instances public).  Basically, Clapper says that a plaintiff must have a concrete example of the injury before they can seek redress.  If the plaintiff lacks this concrete harm, then they lack standing.  Some defense attorneys have already started putting Clapper’s holding to use in regard to data privacy issues (http://blogs.reuters.com/alison-frankel/2013/03/12/how-scotus-wiretap-ruling-helps-internet-privacy-defendants/).

This precedent creates a relatively high bar for plaintiffs in these data breach cases.  Without some concrete injury, such as one of the hackers using their identity, they cannot claim standing to sue over Target’s data breach.  This makes pursuing a class action much harder, since it potentially eliminates a great portion of the class.  There might be many individuals who had their data taken in the breach but cannot point to a concrete harm that occurred to them.  It also doesn’t help these plaintiffs that purely economic losses are often barred from recovery (under what is known as the economic loss rule).

With this in mind, there was an interesting development regarding another data breach case: In Re Sony Gaming Networks.  What makes this case surprising is that it managed to survive a motion to dismiss, though only after the judge threw out 45 of the claims.  According to Eric Goldman, a big reason why the case survived dismissal is that Sony made some pretty big promises regarding their level of security in various user and privacy agreements (http://blog.ericgoldman.org/archives/2014/01/sony-playstation-data-breach-lawsuit-whittled-down-but-moves-forward.htm).  It’s hard to say what will come out of the Sony case, but it may teach companies not to promise too much to their customers when selling a product.

Cybersecurity Liability: Is There a Duty of Care for Customer Information?

One of the more prominent pieces of news from the holidays was how two prominent retailers, Target and Neiman Marcus, had their credit card databases hacked (http://www.dallasnews.com/business/retail/20140117-neiman-marcus-target-credit-breaches-likely-part-of-broader-hacking-attack.ece).  Experts still don’t know the full extent of the breaches.  Now, both companies face lawsuits regarding their roles in the breaches (http://www.latimes.com/business/money/la-fi-mo-neiman-marcus-target-breach-20140114,0,6374207.story).

The announcement of these lawsuits, as well as the high degree of legal uncertainty, led to some interesting questions.  What would be the reasoning behind the tort liability?  How would an attorney establish liability under current torts law?  The lawsuit mentioned by the Los Angeles Times, filed by a Seattle law firm, claims that security experts warned Target of the security flaw that the hackers exploited.

Establishing liability for negligence generally has four requirements: duty of care, breach, causation, and harm.  Duty of care, in the broadest sense, is a society-imposed requirement to avoid harming others by employing reasonable care.  The legal system has duties that arise through common law (from previous court cases) or through statute (laws passed by a legislature).  A breach occurs when the defendant fails to meet that duty.  Causation requires the defendant to somehow be the cause of the injury (though this can get very complicated).  Harm means that the plaintiff must suffer some injury or damage.  There is a lot more involved, but these definitions should suffice for purposes of this blog.  For the time being, let’s focus on duty.

First, would the duty to protect others’ data represent a society-imposed requirement?  Businesses would need a duty to take reasonable care of sensitive information provided to them by their customers.  At the moment, the answer to this question depends heavily on jurisdiction.  A number of jurisdictions require that the injury be reasonably foreseeable before there is a duty of care.  In other words, companies cannot be held liable for breaches involving new or unknown attacks (see Secure My Data or Pay the Price: Consumer Remedy for Negligent Enablement of Data Breach, William & Mary Business Law Review, at 230, found here: http://scholarship.law.wm.edu/cgi/viewcontent.cgi?article=1051&context=wmblr).  That places a fairly firm limit on the kinds of breaches that create a duty of care, essentially eliminating zero-day exploits (so called because they are used before defenders have had a single day to fix them) or innovative hacking techniques.  As the lawsuit against Target indicates, it does open up a possible duty of care for attacks and breaches based on publicly known exploits or security holes.  Foreseeability does not serve as a universal rule, however, because not every jurisdiction uses it as its test.  California, for example, has a different test laid out in Ballard v. Uribe (L.A. 31799, Cal. Sup. Ct. (Apr. 3, 1986)), which is found here: http://online.ceb.com/calcases/C3/41C3d564.htm#MA000696.  Barring some kind of federal law setting a nationwide standard, the answer to this question will likely remain jurisdiction-dependent.

Another major issue is what would constitute “reasonable care” in regard to data privacy.  Reasonable care is normally defined as how a prudent person would act under the same circumstances.  Court cases usually help define this concept further but, unfortunately, there are not many related to data breaches.  Security standards from non-regulatory government agencies (such as the National Institute of Standards and Technology, or NIST) or relevant professional organizations (such as the Institute of Electrical and Electronics Engineers, or IEEE) represent a useful starting point for determining reasonable care (Secure My Data, at 225-6).  In the case of credit card and Point of Sale (POS) security, a judge could theoretically look at the Payment Card Industry (PCI) Data Security Standard (maintained by a council dedicated to setting security best practices for payment cards) or NIST publications to determine whether the defendant’s security practices constitute reasonable care.  The court could also evaluate the specific implementation of various security practices.  In some cases, following or ignoring expert advice about fixing vulnerabilities or improving security could serve as evidence of whether the company exercised reasonable care.  Reasonable care does not require that the company have perfect security, and defendants could potentially turn to experts to establish that certain measures that would have prevented a breach were unreasonable to demand.  For example, failing to maintain strong password policies or failing to encrypt customer information might render a company liable because it did not exercise reasonable care in protecting consumer data.  By contrast, declining to force every employee to maintain a thirty-plus character password changed every two weeks would not, by itself, amount to a lack of reasonable care.
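To make the password example concrete, here is a minimal sketch of the kind of baseline measure a court might weigh, using only Python’s standard library.  The particular algorithm and iteration count are illustrative assumptions on my part, not anything a court, statute, or standards body has endorsed as “reasonable care”:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor, not a mandated figure

def hash_password(password: str) -> tuple:
    """Return (salt, digest) for storage; the plaintext password is never stored."""
    salt = os.urandom(16)  # random per-user salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

The point is not that this exact scheme equals reasonable care; it is that a company storing customer passwords in plaintext, when even the standard library makes salted hashing this easy, hands a plaintiff an obvious argument that reasonable care was lacking.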

There is a lot more to this, so let’s resume with part two on Friday.

Net Neutrality Followup

On Wednesday, this blog discussed the DC Circuit Court’s ruling in Verizon v. FCC.  Verizon v. FCC dealt with the FCC’s 2010 Open Internet Order, which established a number of rules designed to prevent certain practices, such as network discrimination and blocking, that would threaten an open internet.  The DC Circuit ruled that the FCC lacked the authority to put these regulations in place because they represented common carrier restrictions, and the FCC had not classified broadband ISPs as common carriers.  The court remanded the matter back to the FCC, giving it an opportunity to rewrite the Open Internet Order if it so chooses.

Where does the FCC go now?  Its obvious choice is to simply rewrite the regulations to fit what the court found acceptable.  The DC Circuit did not invalidate the FCC’s general authority to regulate internet providers.  The court simply wrote that, from a technical standpoint, the FCC could not regulate the ISPs as it had classified them.  Reclassify broadband ISPs as common carriers, and there is no problem from the DC Circuit’s perspective.

As Ars Technica points out, there are some practical issues with this solution.  The ISPs will likely fight the reclassification with the same fervor with which they fought the Open Internet Order.  The FCC may wish to avoid that particular fight.  The FCC could also try to convince the ISPs to accept some kind of voluntary agreement to preserve net neutrality.  These agreements would likely contain watered down versions of the rules presented in the Open Internet Order.  For example, the voluntary agreements may allow for the ISPs to still pursue tiered services (charging for better access) but place restrictions on the fee arrangement.  The nature of these agreements is entirely speculation on my part.  Whatever actions either party takes, FCC Chairman Tom Wheeler did little to elaborate on the FCC’s next move.

The other major option is for the FCC to appeal their case to the Supreme Court.  A victory at the Supreme Court could preserve the Open Internet Order in full.  This path is also very uncertain, given that the Supreme Court could also opt to undermine the FCC’s regulatory authority.  

At the end of the day, the FCC’s best option is to redefine broadband service providers as telecommunications companies and common carriers.  That would place ISPs well within the FCC’s regulatory authority, and would seemingly allow for the DC Circuit to support the Open Internet Order’s provisions.  The only major downside is that the ISPs would almost certainly contest such a move, and fight it vociferously.  

Enjoy your weekend.  I’ll be back on Wednesday.  

DC Circuit Rules Against Net Neutrality

Yesterday, the US Court of Appeals for the DC Circuit ruled on Verizon v. Federal Communications Commission (FCC).  This case deals with the FCC order In re Preserving the Open Internet (25 F.C.C.R. 17905 (2010)), referred to as the Open Internet Order in the opinion.  The Open Internet Order (“Order”, found here: https://www.fcc.gov/document/preserving-open-internet) imposed what the media commonly refers to as “net neutrality” on broadband Internet Service Providers (ISPs).  The Order banned various discriminatory practices in network allocation and imposed various anti-blocking and disclosure requirements on these ISPs.  The idea is to prevent the ISPs from prioritizing internet traffic from certain sources.  The major ISPs (such as Verizon, Comcast, and Time Warner Cable) challenged the Order in court.

The DC Circuit’s ruling (http://www.cadc.uscourts.gov/internet/opinions.nsf/3AF8B4D938CDEEA685257C6000532062/$file/11-1355-1474943.pdf) came down to a very complicated piece of analysis: whether broadband ISPs qualify as a “common carrier.”  A common carrier is a legal term of art for certain providers of public infrastructure (with the case giving the examples of ferrymen and innkeepers as the inspiration for the concept).  These carriers have certain special obligations imposed on them, one of which is to provide their services to anyone requesting them and to charge a reasonable rate in doing so.  The Communications Act of 1934 (which created the FCC) imposes this status on telecommunications companies.  The Communications Act also provides an incredibly unhelpful definition of a common carrier (“any person engaged as a common carrier for hire” per 47 U.S.C. §153(11)).  National Association of Regulatory Utility Commissioners v. FCC, 525 F.2d 630, 642 (D.C. Cir. 1976) provides the legal reasoning for how the DC Circuit determines who is a common carrier and who is not: whether the company can make individualized decisions on the terms of a deal.

The court’s decision hinges on whether broadband internet qualifies as such a common carrier service.  One of the issues that the ruling points out is that the FCC previously ruled that broadband internet is not a common carrier service.  The DC Circuit could then strike the Order down if it found that the Order imposed common carrier obligations on the ISPs.  From a pure legal reasoning standpoint, the DC Circuit is correct.  The FCC tried to argue that requiring the ISPs not to discriminate between edge providers on their network (which removes the individualized nature of the edge provider’s relationship with the ISP) did not amount to common carrier regulation.  That argument definitely misses the mark.  The FCC probably wanted to avoid a fight over all the other regulatory requirements that common carrier status entails.  Fortunately, the DC Circuit opted to remand the matter back to the FCC.  The FCC can read the ruling and attempt to modify its crafting of net neutrality rules accordingly.

The ruling completely ignores why the major ISPs opted to challenge the Order in the first place.  As Bloomberg notes (http://www.bloomberg.com/news/2014-01-14/verizon-victory-on-net-neutrality-rules-seen-as-loss-for-netflix.html), the ISPs have floated any number of plans to profit off of tiered access plans.  Many of these plans revolve around the idea of charging for faster access to the ISP’s network.  There are a number of criticisms of these plans: they’ll increase the cost of using web-based services for customers, they’ll provide preferential treatment to established companies, and so on.  There is, however, an additional worry.  A number of these ISPs operate their own streaming services.  Verizon, for example, owns Redbox’s streaming service.  Comcast runs their own service called XFinity Streampix.  These services create a conflict of interest for the ISPs, since they could provide preferential treatment to their own streaming services.  The ISPs have already attempted such actions in the past, such as when Comcast opted not to count XFinity video on Xbox Live against their data cap (http://www.gamepolitics.com/2012/03/27/comcast-defends-xfinity-xbox-live-against-net-neutrality-concerns).  What makes the decision to ignore the policy reasons for the Order even stranger is that the Court addresses these concerns early in the ruling, particularly during the section dealing with the FCC’s regulatory authority.

Now we move to the million dollar question: will the FCC alter its net neutrality rules and, if so, in what manner?  The court definitely left a regulatory path open to the FCC by stating that common carrier status applies to how ISPs act towards edge providers.  It should be interesting to see how the matter progresses from here.

A Challenge to Online Consent

There is a series of cases (http://www.bloomberg.com/news/2014-01-08/clickable-consent-at-risk-in-internet-privacy-lawsuits.html) developing in the Ninth Circuit over what constitutes consent, particularly in regard to how various sites (such as LinkedIn and Facebook) use information collected from their users.  The article describes a surge of such cases (over 200) over the past 18 months.  What makes these cases unique is that the judge overseeing them, Judge Lucy Koh, allowed them to survive motions to dismiss.  These cases (which may get lumped into a broader class action down the line) intend to explore exactly what agreeing to a website’s Terms of Service entails.

Bloomberg doesn’t provide an extensive amount of detail about the cases in question, but does provide some common threads.  The first major connecting point between many of these cases is that the websites used information provided by the user in a way the sites claim the Terms of Service authorized.  One of the cases involves the website including information provided by the user (such as their name and picture) in advertisements.  One example provided is a 2011 case against Facebook that dealt with Facebook’s practice of using a user’s friends in “sponsored posts” (basically ads).  Facebook argued that the Terms of Service allowed it to use a person’s profile on the website in such a manner.  The plaintiffs argued that their name and image had value that Facebook’s practices deprived them of.  Judge Koh agreed with the plaintiffs in this regard.

The second, and potentially more troubling, common nexus between these cases involves information not knowingly provided by the user.  The example Bloomberg gave for this situation involved LinkedIn using the email address of one user’s 11-year-old son, pulled from the user’s online address book, to pitch LinkedIn’s services to that user.  The user didn’t remember allowing LinkedIn to look at the contents of his address book, and the email address was otherwise unused.  The question in the case involves how LinkedIn acquired the address in the first place and whether the user ever agreed to share that information.

These cases point to the need for a more thorough definition of consent in an internet-based context.  Customers have to know how a site will use their data before they can knowingly agree to anything.  In other parts of contract law, the contract usually has to describe what the offeree agrees to if they sign.  A contract, after all, only includes what is in the four corners of the document.  There is no good reason, in either policy or law, why this should not be the case for service contracts involving websites.  So far, the only hard and fast rule for consent appears to be “be clear about how you use the data and ask permission before you use it” (from the Gmail case back in September: http://www.mediapost.com/publications/article/210095/judge-rules-gmail-ads-might-violate-privacy.html).

That rule of thumb only serves as a starting point.  Optimally, the increase of cases will allow for the legal system to further clarify when an individual validly consents to having their information used by a website.  As with any aspect of contracting, it helps both parties to have clear rules in order to know their rights and obligations under the law.

Is Software a Good or a Service?

Welcome back.  I hope everyone had a great holiday season.  I certainly enjoyed my vacation, but now it’s time to return to writing.

How the legal system should regard software is something of a conundrum.  On its face, this appears to be a simple matter to determine.  Software often greatly resembles a good, which the Uniform Commercial Code (UCC) defines as “all things . . . which are movable at the time of identification to the contract for sale . . . .” UCC § 2-105.  A service is usually work paid for by another person, though it includes anything that is not a good.  Software seems to meet this minimum requirement for most people (particularly if it’s stored on physical media like a DVD), though that potentially gets more complicated when one includes digitally distributed software.  Still, one could argue that a game bought off Steam counts as an item that was movable at time of identification, even if “movable” refers to transmission over the internet.  Many commercial software vendors argue that their software is a service, and utilize licensing schemes to maintain control over their product.  Goods and services get treated differently by the legal system, and receive different protections.  This post won’t go into the intricacies of the UCC, or how to treat service contracts.  Instead, let’s focus on how trademark law deals with the need to determine whether software represents a good or a service.

As Eric Goldman points out (http://blog.ericgoldman.org/archives/2014/01/is-it-software-is-it-a-service-it-matters-for-trademark-registration-purposes.htm), trademark law requires a “use in commerce” for someone to register their trademark.  Goldman discusses a case between NetJets, Inc. and Intellijet (located here: http://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=1617&context=historical), where NetJets used Intellijet’s name for software NetJets sold (though they capitalized every letter in INTELLIJET).  Intellijet argued that NetJets did not have an enforceable right in the name because NetJets did not use the name in commerce as required by the Lanham Act.  The court ruled for Intellijet, finding that NetJets did not market their software as a separate product.  INTELLIJET mostly functioned as internal software, though customers could see the name when they accessed a NetJets portal to purchase other software.  Basically, NetJets tried to include a service under a mark for a good and got called out on it.

The major lesson of this case is that an applicant wishing to register corporate software should be more careful in how they file for trademark protection.  However, this ruling means that trademark law does not provide a definitive answer to the question.  The court seems to consider the software at trial to represent a service, but that software is inherently service oriented: NetJets’ web portal is primarily designed to help customers access portions of the website.  The case doesn’t deal with consumer software, only software that is ancillary to whatever offering the company makes, so the status of consumer software remains an open question.  Given the prominence (and potential restrictions) of End User Licensing Agreements (EULAs), that question remains important to answer.  The National Conference of Commissioners on Uniform State Laws attempted to clarify the issue with the Uniform Computer Information Transactions Act (UCITA).  UCITA attempted to resolve software’s ambiguity in the law’s good/service dichotomy by modifying UCC Article 2 (which deals with contract formation, among other contracting issues).  UCITA specifically stated that publishers had to include an EULA if they wanted courts to treat their software as a service (making the license valid).  If there was no EULA, courts would consider the software to be a good.  Not surprisingly, UCITA encountered a lot of opposition (not without reason, as explained here: http://www.jamesshuggins.com/h/tek1/ucita.htm) and only passed in two states (Virginia and Maryland, as seen here: http://www.computerworld.com/s/article/83870/Sponsor_s_surrender_won_t_end_UCITA_battle?taxonomyId=070).  As a result, whether software is a good or a service depends largely on the jurisdiction.