Redefining Efficiency In Civil Procedure

Brooke D. Coleman, The Efficiency Norm, 56 B.C. L. Rev. 1777 (2015), available at SSRN.

In his year-end report, Chief Justice Roberts stated that the 2015 civil procedure amendments were designed “to address the most serious impediments to just, speedy, and efficient resolution of civil disputes.” Roberts was clearly referring to Rule 1 of the Federal Rules of Civil Procedure, which states that the rules are to be interpreted to achieve a “just, speedy, and inexpensive determination.” In other words, Roberts equated efficiency with inexpensiveness. The Chief Justice’s comment illustrates the “efficiency norm” problem that Professor Coleman addresses in her noteworthy article. The courts, the rulemakers, and Congress have defined efficiency too narrowly, and this definition has resulted in fewer trials and an anti-plaintiff bias.

In her article, Coleman considers the important question of how the concept of efficiency should affect litigation. She first recognizes that the number of cases filed in federal court has increased significantly since the rules were adopted in 1938—some of this growth the result of the creation of new substantive rights. This phenomenon has led to criticism of the litigation system. Influenced by and participating in this criticism, three institutional actors—the rulemakers, the judiciary, and Congress—have promoted “the efficiency norm.” Under this mandate, they make changes in the name of efficiency and focus only on cost—more specifically, on only certain costs: the costs to corporate or governmental defendants.

Coleman aptly illustrates how institutional actors have employed this norm. For example, in their recent decision to change the discovery rule to add proportionality as a consideration, the rulemakers focused on cost to defendants and failed to consider the possible costs to plaintiffs of not receiving necessary discovery. Similarly, in Twombly, Iqbal, and Concepcion, the Supreme Court discussed only the costs to businesses, not the effect of the possible changes on plaintiffs, such as having more cases dismissed without the opportunity to receive important discovery. Finally, Congress also has viewed efficiency narrowly. For example, Congress intended the Prison Litigation Reform Act to decrease the costs of frivolous litigation to the federal courts, but it did not consider the possible cost to prisoners whose meritorious claims may be dismissed.

Coleman makes the important point that the procedural changes made in the name of efficiency or cost may not even lessen costs. For example, the new proportionality rule may increase discovery motion practice and thus costs.

Coleman also critiques how the efficiency norm is conveyed. Extensive efforts have been made to broadcast a view about the high costs of litigation to the public without also showing the other costs and benefits of litigation. While Coleman recognizes the difficulty of quantifying these other costs and benefits, she rightly argues that regardless of these problems, the other costs and benefits must be presented and considered to accurately examine the question of efficiency. Moreover, she notes that the costs to defendants are often “cherry-pick[ed]” or exaggerated—all resulting in an incomplete picture of litigation.

As previously mentioned, Coleman asserts that the efficiency norm has contributed to two presumptions in our modern litigation system. Although the original system valued trials and was receptive to plaintiffs, now non-trial adjudication is favored over trials and there is skepticism towards plaintiffs. For example, the new proportionality rule’s focus on less discovery, without regard to what plaintiffs actually need for trial, disfavors both trials and plaintiffs.

Coleman goes on to argue that modern adjudication is de-democratizing our civil justice system. Public adjudications, including those that employ the public as jurors, are rare. Moreover, it is difficult for a regular citizen to litigate a dispute in court. These changes create losses, including the loss of public benefits. People or companies may not abide by the law because the threat of consequences is not as great as it once was.

According to Coleman, these changes are connected to a larger issue in civil procedure—the shift from a liberal ethos to a restrictive ethos—a problem about which Professors Rick Marcus and Benjamin Spencer have written. In other work Coleman has explained that while benefiting corporations, government, and other entities, this restrictive ethos has caused certain plaintiffs—who are economically or culturally disadvantaged—to vanish.

Coleman argues that the efficiency norm should be redefined. She states that “efficiency—as applied to civil litigation—must take account of all of the potential costs and benefits.”

Whenever efficiency or costs are mentioned, Coleman’s words should be heeded. The rulemakers, the courts, and Congress should look at all of the potential costs and benefits.

If you are interested in Coleman’s arguments, you should also read The Perverse Effects of Efficiency in Criminal Process. There, Darryl Brown has written about how this concept of efficiency affects our criminal system, emphasizing that particular costs have been stressed and the appropriate costs and benefits have not been examined.

Cite as: Suja A. Thomas, Redefining Efficiency In Civil Procedure, JOTWELL (March 30, 2016) (reviewing Brooke D. Coleman, The Efficiency Norm, 56 B.C. L. Rev. 1777 (2015), available at SSRN).

Should We Publish All District Court Opinions?

Elizabeth McCuskey, Submerged Precedent, 16 Nev. L.J. ___ (forthcoming 2016), available at SSRN.

In Submerged Precedent, Professor Elizabeth McCuskey unearths new data on the rate of remand from federal to state courts in suits alleging 28 U.S.C. § 1331 jurisdiction under a Grable & Sons theory. As part of her vigorous data collection project, McCuskey determined that substantial numbers of the district court opinions she studied never found their way into commercial databases or PACER, substantially skewing our understanding of caselaw in this area. From this starting point, she launches into an intriguing normative discussion on the need to bring this body of “submerged precedent” to the surface. She concludes with a call for a strong presumption that all reasoned district court opinions be made publicly available. For those of us who study the federal courts, Submerged Precedent raises intriguing empirical and doctrinal questions to which we should turn our attention.

McCuskey’s study focuses upon a particular method of taking § 1331 jurisdiction in federal court. The vast majority of cases take § 1331 jurisdiction under the so-called Holmes test (i.e., vesting § 1331 jurisdiction because the plaintiff raises a federal cause of action). There exists, however, a narrow exception to the Holmes test whereby federal question jurisdiction may lie over state-law causes of action that necessarily require construction of an embedded federal issue. McCuskey focuses her work on these cases, seeking to discover the rate at which suits removed to federal court under that theory are remanded to state court.

Instead of taking the typical appellate-court-focused approach to this question, McCuskey looks solely to district court action. Her chosen jurisdictional issue is especially ripe for a district-court-focused study because, contrary to the general rule, these jurisdictional remands are not subject to appellate review. As a result, this area lacks much on-point appellate precedent.

In conducting her examination, McCuskey studies two districts, the Eastern District of Virginia and the Northern District of Illinois, looking for all remand opinions in Grable & Sons-style cases from 2002 to 2008 (hence studying activity both immediately before and after the 2005 Grable & Sons decision). Key to her study was reaching beyond both commercially available databases (e.g., Westlaw and Lexis) and the publicly available data on the federal PACER system, which generally mark the boundaries for empirical judicial-work-product studies. As McCuskey details, not every district court action makes it into the publicly available data on PACER, whether one accesses the system for free or through its fee-driven service. For her data set, she pulled all decisions directly from dockets, a difficult collection process to say the least. Indeed, I believe her work to be the only empirical study addressing federal-court jurisdictional issues that relies upon so robust a data set.

Having collected this data, McCuskey reveals that our typical reliance upon non-docket-sourced data skews our understanding of jurisdictional decisions. She found that if one limits the data set of remand decisions to Westlaw, publicly available data on PACER, and the like, it would appear that her targeted set of cases—state law causes of action with embedded federal issues—are remanded at a 62% rate. Her direct-docket-pull study, however, found that the remand rate was actually 76% for these cases. The difference between those numbers represents a body of cases that fall through the cracks in the move from the direct-docket-pulled rulings in her study to the more traditional data sources of Westlaw and publicly available data on PACER. This is what McCuskey labels submerged precedent and aims to raise to the surface.

Having discussed these empirical findings and the methods used, McCuskey turns next to a normative analysis. She discusses why district court opinions, while not binding precedent, are of great value to the legal system and why docket-only decisions run counter to important rule-of-law norms such as legitimacy, transparency, equal application, consistency, and efficiency. As part of this discussion, McCuskey argues for reform. She contends that there should be a strong presumption that all reasoned decisions—as opposed to non-reason-giving minute orders—should be made publicly available. Here she relies upon the E-Government Act of 2002 as a positive-law foundation to further her normative position that rule-of-law norms require access to all sources of law, including non-precedential district court decisions.

McCuskey’s piece demands our engagement on many levels. Scholarship on judicial activity, and on precedent in particular, overwhelmingly focuses upon appellate decision-making. Yet trial courts conduct the vast majority of judicial activity. This mismatch between scholarly attention and judicial activity is all the more apparent in analyses of jurisdiction. McCuskey bucks this trend. Submerged Precedent is the second in what I hope is a long-lived series of pieces addressing federal trial court precedent on jurisdictional issues. The fact that she takes an empirical bent, and one with novel collection practices no less, only adds to the value of the piece.

Moreover, McCuskey’s work is deeply thought provoking. I, for one, question whether her data set is limited by selection bias on a couple of scores. First, she examines two high-population, urban districts. I am not sure if this population issue matters in jurisdictional opinions, but it certainly might. Second, her cases (remand cases) are unique in that appellate review is not available. I, at least, am concerned that this fact may impact how decisions are written and the rates at which they are “submerged” vis-à-vis matters subject to appellate review. Finally, I am curious if the application of more robust empirical methodologies, when coupled with her robust collection, could yield more information from her data sets.

On the normative side of her piece, McCuskey tends to make claims about the value of “un-submerging” precedent generally based upon data from a limited slice of remand cases that are not subject to appellate review. These cases certainly make for the strongest case for full publication. But I question if the same cost-benefit analysis holds in matters where we have a robust set of appellate decisions—say, in suppression-of-evidence cases. Because appellate courts make publicly available nearly all of their rulings (both in “published precedential” and “non-precedential” forms), in areas such as suppression, there are already thousands upon thousands of opinions such that the addition of new district court rulings would seem to add little from a rule-of-law perspective.

Additionally, while she addresses the topic, one could question whether McCuskey appropriately values the importance of not making precedent. Much of what district courts do is exercise discretion. That is to say, what they are after often is not easily captured in the values of consistency and equal application on which McCuskey focuses. From this vantage, perhaps exercises of discretion should be submerged. Indeed, her data set of jurisdictional remands, which are more rule-bound decisions than exercises of discretion, does not reach this issue in a way that is equally germane in other areas of law.

Finally, I remain curious how judges would react to the full-publication regime McCuskey advocates. Judges might resort more often to orally delivered, less-reason-giving rulings so as to avoid publication, both to avoid the time investment required in publication and the potential that published decisions would hem in the judge in future cases. I, for one, fear that such an outcome is likely, which would be a disservice to parties at little added benefit to the system.

In the end, all these potential criticisms really show is that McCuskey is a provocative and engaged scholar. Her work fills critical gaps in the jurisdictional literature in a meaningful way. I am sure, therefore, you will learn much from her scholarship. I certainly have.

Cite as: Lumen N. Mulligan, Should We Publish All District Court Opinions?, JOTWELL (March 16, 2016) (reviewing Elizabeth McCuskey, Submerged Precedent, 16 Nev. L.J. ___ (forthcoming 2016), available at SSRN).

On Being Mostly Right

Samuel Bray, The Supreme Court and the New Equity, 68 Vand. L. Rev. 997 (2015).

Close only counts in horseshoes, hand-grenades, and the Supreme Court’s recent treatment of equitable remedies. So says Samuel Bray in The Supreme Court and the New Equity, where he defends fourteen Supreme Court decisions from 1999 to 2014 that are fraught with errors and frequently criticized, which Bray labels “the new equity cases.” The equity in these cases is “new” in two ways. First, it maintains a clear distinction between equitable and legal remedies by entrenching the “irreparable injury rule,” or the requirement that there be no adequate remedy at law before a judge considers equitable relief. Second, it seeks to control judicial discretion by adhering strictly to the history of equitable practice, and drawing from that history rules and multi-part tests to guide the application of equitable relief.

“It is not easy to imagine,” Bray writes, “anything further from the conventional scholarly wisdom than” the doctrinal developments of the new equity cases. (P. 1008.) For one, experts had long celebrated both the death of the irreparable injury rule and the unity, for all practical purposes, of equitable and legal remedies. Bray points to Douglas Laycock’s 1991 book “The Death of the Irreparable Injury Rule” as the aristeia of a movement to tear down the barrier between equitable and legal remedies that began over a century ago. Laycock “meticulously” illustrated that the requirement to show no adequate remedy at law has no discernible impact on a judge’s decision whether or not to grant equitable relief; as Bray puts it, “[w]hen judges want to give a permanent injunction, they never find legal remedies adequate.” (P. 1006.) Even the American Law Institute criticized the irreparable injury showing as “antiquated” and “spurious” in its Restatement (Third) of Restitution and Unjust Enrichment.

For two, the Court’s history of equitable practice is marred by misunderstandings and clear errors. Throughout the new equity cases, the Court has said things that are objectively and discernibly incorrect about equitable practice—for example, despite the Court’s insistence on the distinction, mislabeling certain legal remedies as equitable remedies and vice versa—and has “restated” tests that, though made up of familiar elements, had never been stated before.

That much has already been said by others; Bray’s contribution is his defense of these new equity cases. As Bray puts it, the Court has intentionally or unintentionally fabricated an idealized history of equity that, while not accurate, is useful for adjudicating cases. It is not a historian’s history, but a judge’s history that smooths out many centuries of equity practice to make it easier to digest. Bray likens it to a tailor who has repaired a tattered cloth with patches and seams, so that it may be cut to use; the resulting “new old” coat may not be handsome, but it is better suited to its purpose.

Bray drives this point home by highlighting the growing consensus among members of the Court across the new equity cases, a point not yet covered by the literature. The Court began bitterly divided in the 1999 case Grupo Mexicano de Desarrollo, SA v. Alliance Bond Fund, Inc., where the question was whether a federal court was authorized to issue an injunction freezing assets unrelated to the litigation but potentially needed to satisfy a money judgment. Such injunctions, called Mareva injunctions, had only become accepted in the courts of the United Kingdom during the last several decades. Justice Scalia wrote for a 5-4 majority, holding that the federal courts were not authorized to issue such injunctions because they were not an accepted part of equity practice when Congress passed the Judiciary Act of 1789. Resisting Scalia’s push to freeze equity at 1789, Justice Ginsburg’s dissent argued that equity must be eminently flexible “to protect all rights and do justice to all concerned.”

Bray argues that neither approach is workable; the scope of equitable remedies must be more flexible than Scalia’s approach and less amorphous than Ginsburg’s approach to provide guidance to lower courts. Over time the Court has coalesced around a middle path that protects the discretion inherent in equity while cabining its use to exceptional circumstances. For example, the Court was unanimous in eBay v. MercExchange (2006), which established a four-part test for permanent injunctions and which Bray identifies as the “most important decision in decades” on the issue. And the next time Justice Ginsburg dissented in favor of equity’s flexibility, in Winter v. Natural Resources Defense Council (2008), only one other justice joined. Under the new equity cases, the guiding rule is that equitable remedies are “exceptional.” Bray explains that the “norm is legal remedies” and “[a]ny departure demands justification; even if it is easily made, it still must be made.” (P. 1038.) The Court has provided the tools for making that justification in “new old” multipart tests and a repaired history, thereby giving lower courts better guidance than actual equitable practice could offer.

Bray concludes that this approach is broadly consistent with equity’s broad tradition, even if inconsistent with its specific practices over the centuries (which were often inconsistent or conflicting). For example, at one time equity “would never enjoin a trespass,” whereas now an injunction is the definitive remedy for trespass. (P. 1016.) That broad perspective offers the best approach. Plus, an artificial history is also easier to update, providing flexibility to better seek the aspirational principles that are “not just the words but the music” of equity. (P. 1012.) Thus, Bray defends the new equity cases as mostly right and good enough.

I do wish that Bray touched on the relationship between the type and content of an equitable remedy, an issue that is not obvious to those of us who are not remedies experts. For example, Justice Ginsburg’s approach in Grupo Mexicano de Desarrollo extolling equity’s flexibility is not, as Bray argues, a useful guiding principle when distinguishing between types of remedies—like when deciding whether the phrase “equitable remedies” in a particular statute includes injunctions but not writs of mandamus. But it offers useful guidance to a judge’s ability to control the content of, for example, a preliminary injunction—having no restrictions is not the same as having no guidance. The rub is that the type of injunction seems tied to its content—what is a Mareva injunction if not the familiar preliminary injunction tailored to do a specific thing? If so, Ginsburg’s broad approach may be a workable answer to the practical question of whether the judge can do what she did. Whatever the correct answer, explaining both sides of this coin would better communicate Bray’s argument to a general readership.

On the whole, Bray’s article is a wonderfully written reminder of how instrumental legal reasoning is. Though we labor under various euphemisms, precedent is only as right as it is useful. We forgive advocates of this weakness in recognition of the institutional role that they play in an adversarial system, but we forget that judges also play an institutional role—that of making a decision, and not always the right decision. In the new equity cases, Bray argues that the Court succeeded in performing its institutional function of providing guidance to lower courts, not in untangling the Gordian history of equitable remedies. Ours is a system designed to settle expectations, not exceed them. Consider then-Associate Justice Rehnquist’s frustration that the legal academy “holds [the Court] up to a far higher standard than any group of nine mortals can expect to attain”:

If our opinions seem on occasion to be internally inconsistent, to contain a logical fallacy, or to insufficiently distinguish a prior case, I commend you to the view attributed to Chief Justice Hughes upon his retirement from our Court in 1941. He said that he always tried to write his opinions logically and clearly, but if a Justice whose vote was necessary to make a majority insisted that particular language be put in, in it went, and let the law reviews figure out what it meant.

Tip of the hat then, to Bray for figuring out what it means.

Cite as: Wyatt Sassman, On Being Mostly Right, JOTWELL (March 3, 2016) (reviewing Samuel Bray, The Supreme Court and the New Equity, 68 Vand. L. Rev. 997 (2015)).

Bringing Court Reasoning to the Surface

Elizabeth Y. McCuskey, Submerged Precedent, 16 Nev. L.J. __ (forthcoming 2016), available at SSRN.

In the modern age, there is no shortage of information. The internet and the tools it has inspired lead many—myself included—to feel overwhelmed by the sheer volume of what is out there. As a consequence, I came to Elizabeth McCuskey’s Submerged Precedent with some degree of skepticism. McCuskey, after all, argues that even more information—in the form of “submerged” district court opinions—should be made readily available. After reading this carefully researched and artfully written article, however, I am a believer. And I think you will be too.

First, what is “submerged precedent”? Although district courts do not create vertically or horizontally precedential opinions in the strictest sense, McCuskey argues that district court opinions contribute to how decisional law develops. She adopts a broad view of precedent—reaching any court opinion that provides reasoned arguments—which results in a large body of persuasive law. As McCuskey argues, however, the law can only be persuasive to the extent it is available to the parties and, consequently, to courts. This is where submersion comes into play. The question is which district court opinions are available and where. District court judges designate opinions that they deem to be particularly important as “published.” Those opinions then appear on Westlaw (or other legal databases such as Lexis, but for ease, I will refer only to Westlaw). Unpublished district court opinions may also appear on Westlaw, but only if the authoring judge designates them as “written opinions.” What remains “submerged” are reasoned decisions that do not carry these designations. Instead, they can only be found on databases such as PACER, which has limited search functionality and charges a fee for everything other than “written opinions,” or Bloomberg, which, while more searchable, is quite expensive. These opinions constitute the submerged precedent about which McCuskey is concerned.

Second, do we really care about “submerged precedent”? McCuskey argues that we should be concerned. Her argument develops in two parts—one is data-driven and the other is theoretical.

The data argument relies on a dataset McCuskey collected that looks at federal question opinions under Grable & Sons Metal Prod. v. Darue Eng’g & Mfg. The results span a seven-year period of cases in two federal districts. McCuskey compared the rates of remand to state court in “published and unpublished opinions” found on Westlaw with the remand rates in opinions deemed submerged precedent. Of all of the remand decisions, about 56% were reasoned decisions. Of those reasoned decisions, 39% were on Westlaw and 17% were submerged. Looking only at the reasoned decisions and comparing Westlaw decisions to submerged ones, McCuskey found notable differences. For example, in ERISA cases, where close to 32% of the reasoned decisions were submerged, the overall remand rate (combining Westlaw and submerged) was 64%, but the Westlaw remand rate was 47% while the submerged remand rate was 100%. In almost every category of cases, the Westlaw remand rate was lower than both the overall and submerged remand rates. In other words, the results in cases that are readily available are skewed.

McCuskey acknowledges a number of limitations to her findings—the small dataset, the limitations created by the substantive law, and the fact that outcome differences really do not matter unless the opinions’ reasoning is also meaningfully different. This latter limitation prevents McCuskey from drawing a strong conclusion from her dataset; her review of the opinions’ reasoning leaves her at something of a draw. Calling for more research of this kind, but perhaps with a different legal question at a different procedural time, McCuskey concludes that her dataset is illuminating, but not conclusive as to whether we should care about submerged precedent.

This leads McCuskey to her theoretical arguments, grounded in concerns for fairness, efficiency, and legitimacy. For litigants to feel fairly treated, McCuskey argues, they must have access to all of the opinions so that those litigants, and the public itself, can see whether courts are consistent. In the interest of efficiency, it is important for judges to have access to the full spectrum of opinions, providing additional templates for handling similar issues as they arise. The availability of more opinions means that the system is transparent and thus legitimate. Moreover, litigants have a stronger sense of having had their day in court when the opinions are widely available. In addition to these systemic values, the opinions have intrinsic value. District courts, for example, are often the only courts to regularly handle weighty issues like discovery disputes. Because those kinds of issues are often shielded from appellate review, the availability of a larger segment of those opinions is meaningful. Finally, district court judges can wield a great deal of power in how the law develops. McCuskey cites District Judge Jack Weinstein of the Eastern District of New York as one who has had a strong impact on complex litigation. Access to more reasoned opinions from district court judges would give them an even greater impact on how the law evolves.

Having established that submerged precedent is important, McCuskey wrestles with how and to what degree to increase the availability of opinions. As a purist, she argues that all of it should be made available because it is the morally right thing to do and also because the E-Government Act of 2002 requires all “written opinions” to be available for free on PACER. Yet she acknowledges that there are drawbacks to this much access. For example, if judges knew that all of their opinions would be available, they might be less inclined to write reasoned opinions, depriving litigants of the satisfaction of seeing their cases thoroughly handled. In addition, the quantity of information might simply be overwhelming. This leads McCuskey to argue in favor of a “some submergence” solution, in which some opinions remain submerged but more opinions overall see the light of day. She then contends with how to create a system that brings the right opinions to the fore. She offers a number of solutions. These include a rule of professional responsibility that requires attorneys to conduct some level of court-docket research; a “publication panel” to decide what to designate as “written opinions,” removing the publication decision from the authoring judge; or a rule requiring judges to include reasoning in their opinions, much as Rule 11 mandates and Rule 56 suggests.

Whatever the method, now that McCuskey has brought submerged precedent to the surface, we cannot ignore its presence. As always, finding the optimal solution is a challenge. But her article takes us a long way toward reaching one.

Cite as: Brooke D. Coleman, Bringing Court Reasoning to the Surface, JOTWELL (February 4, 2016) (reviewing Elizabeth Y. McCuskey, Submerged Precedent, 16 Nev. L.J. __ (forthcoming 2016), available at SSRN).

Can We Talk Money?

One subject that almost never gets attention in major law-review articles is the attorney’s fee. Fees are the underbelly of the law, the bane of theory, the antithesis of high-minded and selfless lawyering, the grubby acknowledgement that lawyers need to eat — and that sometimes they eat very well, indeed. Of course, fees are also what make the legal world go ’round. Among their other effects, fees drive decisions about access to justice: if the lawyer cannot get paid, the lawyer is unlikely to pursue a claim. When a lawyer brings a claim, concerns about fees can affect the lawyer’s decisions about whether and when to settle, and which claims to file or abandon. In particular, the contingency fee is an especially critical component in ensuring both access and law enforcement in a legal system that operates without effective legal aid in civil cases but relies heavily on private enforcement of rights (i.e., the American legal system).

Frank discussions about “the critical role that profit, capital, and risk … play in setting the terms of justice” are, as Tyler Hill points out in his impressive student note, few and far between. The conversation is perhaps most advanced in the field of aggregate litigation. The picture that legal ethicists and law-and-economics scholars often paint is not a pretty one. The divergence between the interests of a group of plaintiffs and the lawyer who represents them can be great. The fear — borne out more by a few anecdotes of near-mythic proportion than by hard empirical evidence — is that lawyers will collude with defendants and sell out the interests of a class in return for a fat fee. Even without collusion, however, the lawyer is usually the largest stakeholder in class-action or other aggregate litigation; to believe that lawyers’ concerns over the collectability and size of their contingency fee have no impact on lawyers’ conduct during litigation is to expect that lawyers possess a level of virtue that even Diogenes would have found admirable.

The attempt to avoid this “agency cost” — this pursuit of the agent’s (the lawyer’s) self-interest over the interest of the principal (the represented group) — has shaped aggregation doctrine. It explains, for instance, the requirements that the claims of class representatives and members be common and typical and that there be adequate representation of class members’ claims at all times. It has affected the law surrounding courts’ awards of attorneys’ fees to lawyers who obtain recovery for the class. And it has affected the big-picture storyline about the value of aggregate litigation. The perceived horror of lawyers unhinged from their clients and running amok served, for example, as a foundational premise for the jurisdictional changes in the Class Action Fairness Act, as well as recent Supreme Court decisions reining in the breadth of Rule 23 and barring most class arbitration.

Of course, counteracting this storyline is another one: that class-action and other aggregate litigation performs two critical tasks. The first is to compensate victims, especially those who would be unable to afford to bring suit on an individual basis because the costs of doing so are so high that they would eat up most or all of an individual recovery. Pooling cases achieves economies of scale that make litigation worthwhile. The second is to ensure adequate deterrence. Without a realistic threat of litigation and with limited regulatory oversight, wrongdoers have an incentive to cheat large numbers of people out of small amounts of money. Aggregating claims creates the necessary threat and evens up the incentives of victims and wrongdoers to invest in the litigation.

Hill’s note starts from this latter story: that class actions perform important compensatory and regulatory functions and should therefore be encouraged. But present fee structures, he points out, limit the capacity of class actions to achieve their promise. Hill’s starting point, however, is not the usual agency-cost tale. Instead, he shows how the typical fee arrangement (a contingency fee) creates incentives for plaintiffs’ lawyers to select or deselect certain types of class actions. The contingency fee is paid out at the end of the litigation, often after years of struggle. A lawyer contemplating taking on such a case must, therefore, consider not only the size of the ultimate fee and the risk of non-recovery, but also the capital that the lawyer must invest to achieve this fee (i.e., the forsaken hourly fees that hypothetically could have been earned on other legal work) and the cost of that capital (the relevant interest rate).1 Only when the expected fee from class-action litigation exceeds the time-value of the capital that the lawyer invests — in other words, when the lawyer can expect to earn a profit — will the lawyer take on the class’s representation. But at the time that the lawyer must make this decision, many variables are uncertain — not the least of which is how large a fee the court will ultimately award to the lawyer if the class action is successful. As a result, Hill argues, lawyers naturally gravitate to clear winners, which have a more certain chance of fee recovery. This behavior leaves victims with viable but risky cases without legal representation and drives up the benchmark for fees in future cases — consequences that in turn limit the capacity of victims to obtain compensation (and of wrongdoers to be deterred).

Hill’s theoretically elegant solution is to permit lawyers to seek out lenders to invest in the litigation in return for all (or a portion) of the lawyer’s fee. The mechanism for raising this capital is an auction, in which the investor with the lowest bid wins. The winning bidder is responsible for paying the lawyer’s hourly fees and expenses, and then deducts from the proceeds of the class settlement or judgment the amount called for in the bid (including the cost of capital). As an example, Hill describes a case with an expected value of $30 million with recovery expected after two years of litigation. The winning bidder takes a half-interest in the fee, which is estimated to be $4 million. The investor wants a 12% return on the capital to account for the cost of money and the risk of non-recovery. Therefore, the investor would receive $2.5 million at the successful conclusion of the case two years later (the half-fee of $2 million, as increased by two years of compound 12% interest).
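The arithmetic behind Hill’s illustration can be made explicit (a sketch; annual compounding of the 12% return is my assumption, not a detail specified in the example as described here):

```latex
\[
\underbrace{\$2{,}000{,}000}_{\text{half of the estimated \$4M fee}}
\times (1 + 0.12)^2
= \$2{,}000{,}000 \times 1.2544
= \$2{,}508{,}800
\approx \$2.5 \text{ million}
\]
```

That is, the investor’s half-interest in the fee, grown at 12% over the two years until recovery, yields the roughly $2.5 million payout Hill describes.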

Using such a market solution, Hill argues, ensures that lawyers receive the market value of their services and limits the lawyer’s risk to a level that the lawyer finds comfortable. The judge’s task in setting the fee becomes simpler: approving the basic investment arrangement in advance and then checking its fairness (and making necessary adjustments) if a class award results. Most important, lawyers will have an incentive to take on viable-but-risky class litigation, thus advancing compensation and deterrence goals.

This proposal, which Hill spells out in detail, is a cousin of other class-auction proposals, the most famous of which is the proposal by Jonathan Macey and Geoffrey Miller to auction the class’s claims, distribute the proceeds to the class, and allow the winning bidder to pursue the wrongdoer. As Hill points out, these other auction ideas could be used in tandem with his, but his proposal — to auction off just the class counsel’s fee — is unique and stands on its own two feet. Hill defends the proposal against various objections, the most obvious of which is that the investor, as the lawyer’s quartermaster, will now control the class litigation — thus further entrenching the agency-cost problem. As the note points out, however, the agency-cost problem already exists, and substituting the return-hungry investor for the fee-hungry lawyer as the focal point of the problem does nothing to exacerbate it, while solving certain other difficulties. True enough, although turning class counsel into an hourly-fee lawyer creates a new type of agency cost; the self-interested desire of the hourly-fee lawyer to overwork a case is well-known, and it will be costly for the investor to monitor class counsel closely enough to prevent overbilling. The new layer of the investor would also further insulate the lawyer from the interests of the class. And Hill’s proposal encounters many of the same defects as the courts’ now-defunct experiment with auctioning the position of lead counsel in securities class actions, such as variation in bids that made it hard to compare the apples of one bid to the oranges of another.

Whatever its potential flaws, Hill’s note represents another in a series of recent proposals that have crafted creative solutions to overcome some of the seemingly intractable problems of class and aggregate representation.2 A very few cases have made tentative nods in the direction of these proposals, but the emphasis is on very few.3 Some of these solutions deserve a chance to prove themselves in the marketplace. That, however, requires a more adventurous spirit on the part of judges and lawyers than seems possible in this time of class-action retrenchment. The negative image of the class action — as a device to browbeat upstanding defendants into blackmail settlements that provide no benefit to class members and serve to enrich only the lawyers who bring the action — still holds sway.

Is it possible to change this image? On one point, Hill is surely right. Class actions are sometimes necessary to provide deterrence against broad-based wrongdoing and to deliver a modicum of compensation to those harmed. Crafting a rule that ensures fair compensation for class counsel is a central — perhaps the central — task necessary to deliver on the class action’s promise.4 Until we face this reality and design a fee structure that shapes and aligns the incentives of class counsel with those of the class, the negative stereotype of class actions will prevail.

We have the means to improve class actions and to reduce their negative side effects. And Hill’s note shows that we have the ideas. We need only the will.

  1. Hill includes recoverable expenses, in addition to fees, as part of the value of the capital. For simplicity of description, I omitted consideration of expenses in the text.
  2. These include proposals in the American Law Institute’s Principles of Aggregate Litigation, and in articles by Luke McCloud & David Rosenberg and by Geoffrey Miller. I have tossed in a few wacky proposals of my own (here and here), one of which Hill kindly addresses in his note.
  3. See Forsythe v. ESC Fund Mgmt. Co., C.A. No. 1091–VCL, 2013 WL 458373 (Del. Ch. Feb. 6, 2013) (entertaining but ultimately rejecting a proposal from objectors and their third-party financiers to pay the class members the agreed-on (but allegedly inadequate) settlement amount in return for the right to continue the litigation against the defendant). The classic example, albeit shot down by a unanimous Supreme Court in Wal-Mart Stores, Inc. v. Dukes, is Hilao v. Estate of Marcos, 103 F.3d 767 (9th Cir. 1996) (approving the use of trial by statistics).
  4. I have always been persuaded that the fee structure proposed many years ago by Kevin Clermont and his student John Currivan came the closest to achieving this goal. They proposed a contingency fee that relies on a combination of an hourly rate and a percentage of the recovery. This would be an ex post award. Hill’s ex ante attempt to set the market rate for attorney compensation also has great merit. Whether the two ideas could be combined is a matter worthy of consideration.
Cite as: Jay Tidmarsh, Can We Talk Money?, JOTWELL (January 19, 2016) (reviewing Tyler W. Hill, Note, Financing the Class: Strengthening the Class Action Through Third-Party Investment, 125 Yale L.J. 484 (2015)).

Anti-Plaintiff Bias in the New Federal Rules of Civil Procedure

Patricia W. Hatamyar Moore, The Anti-Plaintiff Pending Amendments to the Federal Rules of Civil Procedure and the Pro-Defendant Composition of the Federal Rulemaking Committees, 83 U. Cin. L. Rev. 1083 (2015), available at SSRN.

On December 1, 2015, several major amendments to the Federal Rules of Civil Procedure took effect. Some of these changes might, at first glance, seem dry and technical, such as shortening the time to serve process. Other changes, such as the addition of a so-called “proportionality” standard to the scope of discovery, have been the subject of heated debate in the months since the changes were proposed.

While it might be tempting to dismiss all but the most controversial amendments as nothing more than footnotes in a new casebook, each of these amendments is part and parcel of anti-plaintiff trends in procedural rulemaking. Patricia Moore’s article should be required reading for any professor preparing to teach the new rules, because it combines a clear and practical outline of each of the rule changes with an incisive critique of the substance of the changes and the process by which they were promulgated.

The first part of the article details each amendment, explaining how each rule has changed and the impetus for the revision. Her writing provides more than a glorified “redlining” of the old and new texts. Her analysis includes examples of how the old rules worked in practice, and how the amendments might change the litigation landscape. She concludes that, with only one exception (Rule 34), each amendment exposes clear anti-plaintiff bias and will likely generate anti-plaintiff results. She also cites to the record of committee discussions and testimony that point to some uncomfortable conflicts of interest among committee members and the members of the defendants’ bar urging these changes. Moore is methodical in considering each amendment in turn, but also groups them together in three larger categories that give a sense of the ideological motivations of the rulemakers.

Having documented the amendments, Moore turns to two broad critiques of the process. The first takes aim at the committee’s claim that the amendments are supported by empirical evidence. This was a powerful assertion, as the existence of empirical evidence suggested that the changes were driven by objective data rather than the subjective ideological preferences of committee members. Moore demonstrates that not all data are created equal. The data that peppered the committee deliberations and reports consisted primarily of opinion surveys. In other words, the “empirical” evidence marshaled by the committee was little more than an objective representation of essentially subjective viewpoints. Beyond critiquing the committee’s own data, Moore collects data and studies that do not support the committee’s positions — data the committee all but ignored.

The second critique of the rulemaking process focuses on the ideological make-up of the committee and the Duke conference that was the springboard for the current round of changes. She demonstrates that, while plaintiffs’ voices were not completely absent, their position was underrepresented on the committee and poorly represented at the conference and hearings on the proposed changes. Along with Suja Thomas’s Op-Ed criticizing the Duke conference for allowing corporate interests to more or less dictate the interpretation and implementation of these rules, Moore’s article provides a much-needed rejoinder to any academic or practitioner inclined to view the rules and their authors as boring, technical, and disconnected from ideology.

Moore’s article represents the best of practical academic scholarship. One can turn to it to learn something concrete about rules and doctrine, while it also provides a theoretical framework for the subject and a normative critique of the rules that it explains. I expect it will be in my catalogue of “go to” articles for a number of years, both for teaching and research purposes.

Cite as: Robin Effron, Anti-Plaintiff Bias in the New Federal Rules of Civil Procedure, JOTWELL (January 5, 2016) (reviewing Patricia W. Hatamyar Moore, The Anti-Plaintiff Pending Amendments to the Federal Rules of Civil Procedure and the Pro-Defendant Composition of the Federal Rulemaking Committees, 83 U. Cin. L. Rev. 1083 (2015), available at SSRN).

A Fresh Look at Qualified Immunity

Aaron Nielson & Christopher J. Walker, The New Qualified Immunity, 87 S. Cal. L. Rev. (forthcoming 2015), available at SSRN.

Qualified immunity—the doctrine that prescribes whether government officials alleged to have committed constitutional violations should be immune from suit—has traveled a winding path. It asks two questions: whether a constitutional violation was actually committed, and whether the constitutional right in question was clearly established at the time of the violation. If the answer to either or both questions is “no,” then the government official is entitled to qualified immunity and the suit against her is dismissed.

Over the past two decades, the question of whether and in what order courts should decide these two questions has preoccupied the Supreme Court. The Court indicated in Wilson v. Layne (1999) that it generally was better for courts to resolve the constitutional merits question first, and then held in Saucier v. Katz (2001) that courts were required to do so. Its reasoning, in both instances, was that courts must articulate constitutional law in order to guide the conduct of government officials in the future. Just eight years later in Pearson v. Callahan, however, the Court shifted course, holding that deciding the constitutional merits question was discretionary, not mandatory.

In The New Qualified Immunity, Aaron Nielson and Chris Walker explore what has actually happened since Pearson by surveying both published and unpublished decisions in the federal appellate and district courts. Their work is a painstaking effort to examine how Pearson is playing out on the ground, and the result is a wealth of important data that provide critical insight into the development of constitutional rights. The article should stand as a seminal contribution to the post-Pearson literature—indeed, to the qualified immunity literature in general. It’s the place where all those interested in evaluating qualified immunity should begin in the future.

Nielson and Walker begin with an admirably detailed survey of the development of qualified immunity doctrine. They also survey the empirical literature on qualified immunity, including my own (now somewhat dated) pre-Pearson contribution.

They then examine how Pearson has affected judicial behavior. Unsurprisingly, courts reach the constitutional merits question less frequently after Pearson — as Nielson and Walker are correct to note, it would be very surprising if they did not. The rate at which courts decline to decide the constitutional question has returned to roughly the pre-Saucier level of approximately one case in four, compared to less than 6% during the Saucier period. Yet among the cases where courts do reach the constitutional question, Nielson and Walker present an intriguing and, for some of us, troubling finding. They explain: “Courts…appear to find constitutional violations yet grant qualified immunity less frequently now…than they did before Pearson.” (P. 5.)

In other words, courts are now choosing to skip the merits more frequently, but when they do decide the merits, they are less likely to find a constitutional violation. The first finding is unsurprising; the second is surprising and perhaps troubling. Admittedly, it is difficult to attribute the latter behavior to Pearson with certainty, which the authors appropriately acknowledge. There might be other factors at play. For example, perhaps judges have become less likely overall to recognize an expansive view of constitutional rights; thus, fewer cases articulate an expansive view of constitutional rights not as a result of Pearson itself, but as a result of a broader trend among federal judges. I would be interested to hear the authors’ thoughts on alternative explanations for the judicial behavior they have observed. Perhaps a project for future work (by Nielson and Walker or anyone else) might eliminate some of these alternative explanations or determine the degree to which they contribute to the overall trend.

Nielson and Walker provide another interesting contribution to the empirical qualified immunity literature by examining disparities in the way that different circuits apply Pearson. For example, the Fifth Circuit chooses to reach constitutional questions 57.6% of the time, while the Ninth Circuit does so only 37% of the time. Another difference lies in the way that the circuits decide cases when they do choose to reach constitutional merits: the Ninth Circuit finds constitutional violations 16.4% of the time, while the Fifth Circuit does so only 1.3% of the time and the Sixth Circuit only 0.8% of the time. As Nielson and Walker observe, “these circuit-by-circuit disparities may reveal a geographic distortion in the development of constitutional law,” such that “one could reasonably fear that constitutional law may develop quite differently in the various circuits.” (p. 36.) While the numbers are small, the finding is sufficiently interesting—and perhaps sufficiently troubling—to warrant further examination by researchers. (This would be an interesting and feasible project for a student note.)

Many excellent law review articles falter in their prescriptions, but one of the strengths of Nielson and Walker’s work lies in their proposal for what we should do. One problem after Pearson is that the Supreme Court has failed to provide guidance for when courts should decide the constitutional merits. Nielson and Walker offer a neat solution, borrowed from administrative law. They propose: “the Court should require lower courts—both trial and appellate courts—to give reasons for exercising (or not) their Pearson discretion to reach constitutional questions.” (p. 46.) This proposal has a number of merits: it has already been well developed in administrative law; a number of scholars have previously argued that we should incorporate reason-giving into an array of civil procedure contexts; the act of giving reasons for a decision has, in itself, been shown to improve a decision; and it offers guidance to other courts about when a decision may or may not be appropriate. Over time, if the Supreme Court becomes concerned about why lower courts are deciding (or not deciding) constitutional questions, it can elaborate on what are and are not appropriate reasons.

The New Qualified Immunity has considerable significance for the bench and bar. It provides a great example of how legal scholarship simultaneously may be of great use to legal scholars, judges, and practitioners—there is no conflict among the various audiences for such a piece. The article is a wake-up call to judges to examine their own behavior and think about why they are choosing to decide or skip constitutional questions. Particularly in light of inter-circuit disparities, whether to decide the constitutional merits is not a foregone conclusion and judges would do well to consider how their own behavior measures up against national norms. For attorneys, it is useful to know whether one practices in a circuit where judges tend to decide or deflect the merits question, and potentially valuable to be able to call that information to judges’ attention, whether as a call to reach the merits or as a caution to judges against stepping too far out of line with their colleagues.

In short, The New Qualified Immunity is a gift, beautifully packaged, for those of us who write about constitutional litigation. It provides an adept summary of what has come before it. It adds a valuable empirical contribution with enough data to play with for months. And it offers a plausible prescription for improving judicial decisionmaking and, as a direct result, the law itself. If and when the Supreme Court refines Pearson, I look forward to more fine analysis from Nielson and Walker.

Cite as: Nancy Leong, A Fresh Look at Qualified Immunity, JOTWELL (December 3, 2015) (reviewing Aaron Nielson & Christopher J. Walker, The New Qualified Immunity, 87 S. Cal. L. Rev. (forthcoming 2015), available at SSRN).

Personal Jurisdiction Based on Intangible Harm

Alan M. Trammell & Derek E. Bambauer, Personal Jurisdiction and the “Interwebs,” 100 Cornell L. Rev. 1129 (2015).

Conduct channeled through cyberspace can cause harm in physical space. That leakage across a conceptually amorphous border has befuddled courts attempting to adapt personal jurisdiction doctrine to the Internet. At least two distinct problems have combined to produce an inconsistent and unstable jurisprudence. First, the Internet is a buffer between the defendant and the forum. This technological intermediary diffuses the defendant’s geographic reach, complicating analysis of the defendant’s contacts and purpose. Second, activity on the Internet often leads to intangible harm, such as a sullied reputation or devalued trademark. These intangible injuries can manifest in places that are difficult to predict ex ante and to identify ex post.

Accordingly, the Internet creates spatial indeterminacy in a legal context that reifies geographic boundaries. Many courts have reacted by trying to tame complexity with an ostensibly elegant tripartite framework for analyzing jurisdiction. The “Zippo test”—named after an influential yet often-criticized district court decision—posits that jurisdiction based on Internet contacts depends on pigeonholing websites into categories. A “passive” website that merely provides content is a weak basis for jurisdiction, while jurisdiction usually exists over websites that are commercial platforms for repeated transmission of files. Between these extremes are “interactive” sites that require a context-sensitive inquiry into the nature of the interactions.

Alan Trammell and Derek Bambauer’s recent article Personal Jurisdiction and the “Interwebs” eviscerates the Zippo test and similarly stilted efforts to apply personal jurisdiction doctrine to the Internet. Trammell and Bambauer focus on two pathologies that have undermined judicial reactions to suits arising from Internet activity: the tendency of novel technology to “bedazzle[] and bewitch[]” observers, and an emphasis on spatializing virtual conduct rather than addressing the broader problem raised by activity that causes intangible injuries. The result is a jurisdictional inquiry that appears “beautifully simple,” yet is both “superficial” and “indeterminate.”

The article addresses the first pathology by contending that Zippo allows complex technology to obscure the underlying purpose of constraints on personal jurisdiction. The Internet seems unique because it streamlines the transfer of information and facilitates new forms of interaction. Zippo’s tripartite framework reacts to this apparent novelty by fixating on the transmission of files and interaction between Internet users, thus appearing to adapt old doctrine to a new context. But as Trammell and Bambauer explain, the test is superfluous, misleading, and arbitrary. In cases involving extensive commercial activity over the Internet, Zippo is superfluous because prior doctrine addressing “purposeful availment” could adapt to commerce through novel technological means. In cases involving noncommercial activity, Zippo is misleading because it implies that Internet activity is often insufficient to warrant jurisdiction despite the fact that pre-Internet caselaw upheld jurisdiction in many noncommercial disputes. And in both commercial and noncommercial cases, an extensive inquiry into a website’s “interactivity” produces an arbitrary result unmoored from values that animate constitutional limits on state authority.

The Zippo test survives not because it is sensible, but because it provides a “false hope” of rigor for judges seeking to navigate a confusing technological landscape. Indeed, the authors make the interesting observation that the first federal court of appeals to reject Zippo—the Ninth Circuit, “the court with jurisdiction over Silicon Valley”—was likely the circuit with the least anxiety about confronting technological innovations.

The article addresses the second pathology by contending that courts mistakenly focus on aspects of the Internet that are unique rather than traits the Internet shares with other technologies. Courts analyzing jurisdiction in Internet cases devote inordinate effort to considering where conduct occurs. Trammell and Bambauer argue that this is a fruitless exercise because the Internet diffuses activity across geographic borders. Conduct clearly occurs at the location where a person creates content or files disseminated through the Internet, but identifying other locations as salient to jurisdiction seems arbitrary. A natural response to their argument is that one particular location is not arbitrary: the place where an injury occurs. But identifying the place of injury is difficult when the harm is intangible. When the injury is intangible, Internet cases are similar to non-Internet cases. For example, regardless of the technology used to defame a person or infringe a trademark, identifying the locus of a person’s reputation or intellectual property requires a theory of how intangible interests map onto physical space. Accordingly, the authors argue that Internet cases are difficult not because the Internet uniquely obscures the location of conduct, but because the Internet is the latest technology to raise the vexing question of where intangible injuries occur.

Having shifted the focus from the location of Internet activity to the location of intangible injuries, Trammell and Bambauer propose a new test. The test relies on what they identify as three “first principles” of personal jurisdiction doctrine: the exercise of state power should not be arbitrary, jurisdiction should be predictable, and the forum should be fair for the defendant. The authors also contend that jurisdictional rules should be “efficient.” From these principles, the authors derive a rule: “Internet-based contacts should rarely, if ever, suffice for personal jurisdiction.” For example, jurisdiction would not exist in the plaintiff’s home state based merely on the local availability of a website infringing a trademark. In contrast, if a seller uses the Internet to facilitate sales of tangible objects to a buyer in the forum, jurisdiction would exist because the physical delivery of goods to the forum would be a relevant contact even if the web-based sales platform is not.

The article makes an important contribution to the literature by pinpointing why the Internet raises difficult personal jurisdiction problems. Courts and commentators have struggled with Internet cases in part because the Internet is often a red herring. When a case involves physical injuries in the forum, the fact that the Internet facilitated the conduct leading to those injuries may be irrelevant because doctrine pre-dating the Internet is available to assess the nexus between conduct and physical harm. In contrast, when a case involves intangible harm, the difficult question is: where did the injury occur? If the harm cannot plausibly be localized, then the fact that the case involves Internet contacts highlights the defendant’s tenuous contact with the forum. In this scenario the use of the Internet does not create a new problem, but rather places an old problem into starker relief. Academic and judicial attention should therefore focus on the older problem by considering how personal jurisdiction doctrine should apply in cases involving intangible harm. That inquiry can, in turn, provide insights that make the newer problem about Internet contacts less confusing.

The authors’ proposals are carefully reasoned, but there is room for debate because the article’s rejection of jurisdiction based on Internet contacts rests on three contestable conclusions. First, the article assumes that localizing intangible harm is difficult. Yet one can imagine arguments that particular types of intangible harms are experienced most acutely in the place where a victim resides or is domiciled, as are many tangible harms. If so, then a distinction between tangible and intangible injuries should not be the basis for a blanket rule deemphasizing Internet-based contacts. Second, if an intangible harm can be localized, then jurisdiction would often be appropriate under precedent considering whether the defendant “aimed” at and caused “effects” in the forum. The authors briefly recommend overruling the effects test, which means that their critique of Internet-based jurisdiction partially rests on the viability of a broader critique of modern personal jurisdiction jurisprudence.

Finally, some theories of personal jurisdiction (including mine) do not emphasize predictability and efficiency as heavily as this article does, instead placing greater weight on the forum state’s interest in facilitating local adjudication. For example, the article suggests that if a hacker intentionally copies private data from a server in the forum, jurisdiction would not be appropriate because hackers are often “indifferent” to or may not know the server’s location. However, an alternative theory would posit that if a person intentionally hacks into servers without knowing or caring where they are located, he assumes the risk of being sued in the state where injury occurs. (For a non-Internet version of the assumption of risk scenario, imagine that the owner of a small pharmaceutical company sneaks into a competitor’s plant and intentionally adds poison to a bottle of cough syrup with the intent of killing a consumer, but without knowing or caring where the bottle will be sold. Should the poisoner’s geographic indifference immunize him from jurisdiction in the state where the victim purchases and consumes the poison?)

Trammell and Bambauer have developed a thoughtful critique of how current personal jurisdiction doctrine addresses the Internet. Further scholarship will benefit from their distinction between the location of Internet activity and the location of its intangible consequences.

Cite as: Allan Erbsen, Personal Jurisdiction Based on Intangible Harm, JOTWELL (November 16, 2015) (reviewing Alan M. Trammell & Derek E. Bambauer, Personal Jurisdiction and the “Interwebs,” 100 Cornell L. Rev. 1129 (2015)).

Making Sense of Plurality Decisions

Ryan C. Williams, Questioning Marks: Plurality Decisions and Precedential Constraint (forthcoming).

In Questioning Marks, Ryan Williams tackles a piece of Supreme Court doctrine that many dismiss with the back of their hand: how to make precedential sense of the Court’s plurality opinions. Oh sure, we all begin with the statement in Marks v. United States that lower courts should ascribe precedential weight to the “holding” of the case, understood as “that position taken by those Members who concurred in the judgments on the narrowest grounds.” But that formulation obscures any number of difficulties. How does a lower court identify the “narrowest grounds” when the Justices who produced the judgment supported it with separate rationales that offer no clear guidance for future cases?

Williams first shows that lower courts have taken a range of different approaches to the problem of identifying the narrowest grounds. Some look for an implicit consensus among the five (or more) concurring Justices; others give pride of place to the notion that the Justice casting the fifth vote must have played a decisive role in the outcome, and so treat the opinion accompanying that swing vote as controlling. Still others adopt an issue-by-issue approach, looking for the alignment of Justices who expressed agreement with a particular proposition that may be relevant in future litigation. Somewhat controversially, this issue-by-issue approach may also consider the views of dissenting Justices, a group seemingly omitted from the Marks reference to the members concurring in the judgment.

The way lower courts approach these matters may reflect their conception of the hierarchical judiciary and of their own obligations within it. For courts inclined to predict outcomes at the Supreme Court, a tendency to emphasize the fifth vote seems natural: that was the vote needed to nail down the judgment. Others with a bent toward prediction happily consider dissenting views, knowing as they do that the dissenters will likely weigh in on any future question along the lines they have articulated in earlier opinions.

But both approaches can produce real anomalies. Williams tells one dispiriting tale of the lower court reaction to Shady Grove Orthopedic Assocs. v. Allstate Insurance Co. There, as our gentle readers will recall, the Court divided on whether to apply Federal Rule of Civil Procedure 23 (and to displace the New York state prohibition on the aggregation of certain claims) or to defer to state law in a diversity case. A four-Justice plurality, led by Justice Scalia, held that Rule 23 applied and was valid under the Rules Enabling Act test articulated in Sibbach v. Wilson & Co. A four-Justice dissent would have viewed Rule 23 as inapplicable, deferring to state law for a complex set of reasons reminiscent of those offered in Gasperini v. Center for Humanities. Justice Stevens cast the fifth and deciding vote, agreeing with Justice Scalia in part but arguing that Sibbach had been misread as upholding all “arguably procedural” rules. Only Justice Stevens gave voice to his limited conception of Sibbach. We do not know how widely shared his views were; we only know that he was alone in expressing them.

Yet the lower courts have seemingly given effect to Justice Stevens’ opinion on the theory that his was the fifth and deciding vote. This seems particularly wrongheaded, at least as to Justice Stevens’ views about Sibbach. While he may be right, he certainly did not speak for five Justices on that subject. So it is a bit dismaying to learn that his views have taken hold. Even more troubling, according to Williams, lower court decisions do not explore the issues, opting instead for a rather wooden invocation of the Stevens view as controlling by virtue of being the fifth vote.

Williams would solve the Marks problem by calling for a “shared agreement” approach, in which lower courts give precedential effect only to those matters on which a five-Justice majority reached a shared agreement. That approach might, for example, justify the lower courts in extending the Court’s fractured decision that citizens of the District of Columbia are properly regarded as citizens of a state for diversity purposes. While no rationale gained a majority, five Justices did agree on a result that might well apply to citizens of other territories (such as Puerto Rico), as the lower courts later held. But it certainly would not give effect to Justice Stevens’ lone view in Shady Grove.

I found much to like in the paper: a strong command of the cases, a rich theoretical framework in which to evaluate the issues at hand, and a calm and authoritative authorial voice that lets the reader know she is in good hands. I was especially pleased that Williams chose to tackle the problem because it seems most unlikely that the Supreme Court will provide further guidance. The Justices seem far more likely to address a particular lower court disagreement than to nail down a methodological approach to past plurality opinions that might ramify far beyond the particular case, unsettling some bodies of law and producing outcomes that current Justices can neither predict nor endorse. A tip of the hat to Williams for providing a solution that commends itself to courts and theoreticians alike.

Cite as: James E. Pfander, Making Sense of Plurality Decisions, JOTWELL (November 2, 2015) (reviewing Ryan C. Williams, Questioning Marks: Plurality Decisions and Precedential Constraint (forthcoming)).

Class Action Mismatch: Securities Class Action Jurisprudence and High-Frequency Trading Manipulation

Tara E. Levens, Too Fast, Too Frequent? High-Frequency Trading and Securities Class Actions, 82 U. Chi. L. Rev. 1511 (2015).

For faculty members with retirement savings in TIAA-CREF or brokerage accounts, market events of summer 2015 might prompt the conclusion that August is the cruelest month of all. Along with millions of other small investors, academics throughout the United States could only watch helplessly as volatile markets took shareholders on a daily roller-coaster ride resulting in devalued accounts.

In the wake of the 2008 market crash, small investors have become increasingly educated about the structural and institutional drivers of extreme market volatility: automatic, computerized trading techniques over which the small, individual stakeholder has little knowledge or control. Most prominent among these market innovations has been the advent of computerized, high-frequency trading (HFT), driven by mathematical algorithms.

In her thoughtful and innovative comment, Too Fast, Too Frequent? High-Frequency Trading and Securities Class Actions, Tara E. Levens explores the interesting question whether the prevalence of HFT techniques resulting in massive financial losses to small-stake investors will open the door to new securities class actions. Her general conclusion is that current legal theories undergirding various types of securities law violations are mismatched with the harms induced by HFT. Consequently, Levens attempts to formulate a jurisprudence for new securities class actions based on the unique injuries resulting from HFT manipulation. In essence, Levens’ task is a riff on the theme of fitting new wine into old bottles.

Levens first describes the types of investor harms addressed under current securities laws, most notably liability for fraudulent misrepresentation under § 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 promulgated thereunder. She suggests that the harms induced by HFT are a poor fit for conventional securities fraud claims. Instead, she pivots to theories of open-market manipulation, which she believes better capture the factual basis for seeking relief.

She notes that plaintiffs may bring claims of open-market manipulation under § 10(b), although “such claims have received ‘curiously little attention’ from plaintiffs, prosecutors, and the courts.” (Pp. 1514–15.) She further suggests that plaintiffs might bring claims of open-market manipulation under § 9 of the Act, but such actions require a showing of specific intent. Because of the difficulty in pursuing relief under § 9, Levens indicates that plaintiffs and prosecutors rarely rely on this provision when bringing manipulation proceedings.

To provide context for her recommendations, Levens analyzes developments in securities class litigation, focusing on the Supreme Court’s elaboration of the fraud-on-the-market presumption that relieves plaintiffs of the necessity to show individual reliance in fraud cases. She suggests that the Court’s 2014 Halliburton decision changed the landscape of securities-fraud class litigation by enhancing the role of expert witness “impact studies” used to demonstrate the effect of an alleged fraud or misrepresentation on a stock’s price, which may determine whether the fraud-on-the-market presumption applies. However, she refrains from concluding whether the increased use of impact studies will benefit either plaintiffs or defendants, or result in more or fewer class certification approvals.

Against this doctrinal backdrop, Levens discusses in great technical detail what constitutes high-frequency trading, a subset of algorithmic trading. She explains two types of HFT: market-making activities and more aggressive strategies such as statistical arbitrage. Levens’ article provides an intelligible, accessible account of HFT for less-knowledgeable readers. She concludes by surveying the heated debate over the effects of high-frequency trading on market efficiency.

Levens highlights just how novel a problem HFT presents for the legal landscape. She notes that the SEC has yet to promulgate formal rules or regulations relating to HFT. According to Levens, the SEC increased its enforcement efforts after the Flash Crash of May 2010, but studies are inconclusive as to whether HFT or other factors triggered that market collapse. The SEC brought its first market manipulation case against an HFT firm only in October 2014. That action was pursued under Rule 10b-5, and the alleged perpetrator agreed to pay a fine and to cease and desist from further violations of the securities laws.

Levens believes that the spread of HFT and the consequent market collapses have set the stage for a resurgence of the open-market manipulation theory. She suggests that plaintiffs who wish to bring claims against HFT firms might succeed by combining various theories of open-market manipulation with the fraud-on-the-market presumption; this hybrid strategy allows plaintiffs to avoid the more stringent intent requirement of § 9 while also availing themselves of the liberal fraud-on-the-market presumption to sidestep potentially difficult reliance issues. Levens notes that the fraud-on-the-market presumption generally has not been available to plaintiffs alleging market manipulation claims, but she contends that in some situations courts have held otherwise.

Finally, Levens addresses whether high-frequency traders ought to have a private right of action to redress their own injuries, something no commentator has addressed. While noting that traders do not represent the most sympathetic group of claimants, she indicates that traders also may suffer losses from HFT. Analyzing this problem, Levens concludes that HFT traders most likely will have a very difficult time satisfying the requirements for certifying a class action under Fed. R. Civ. P. 23, showing loss causation, or proving reliance.

As Levens correctly points out, HFT issues are likely to continue to surface in litigation, presenting litigants and courts with an array of novel legal problems. She concludes that “regardless of whether high-frequency traders come to court as plaintiffs or defendants, the advent of HFT marks a changed circumstance that the securities-litigation bar will have to wrestle with in the near future.” (P. 1557.)

Levens, the incoming Editor-in-Chief of the University of Chicago Law Review, has produced an impressively sophisticated piece. She has identified a set of emerging legal issues and grappled with existing doctrine as applied to new problems. Even if her hybrid approach proves unsound, she is to be commended for undertaking such an ambitious, challenging topic and, in the best tradition of young scholarship, thinking outside the box.

Cite as: Linda Mullenix, Class Action Mismatch: Securities Class Action Jurisprudence and High-Frequency Trading Manipulation, JOTWELL (October 19, 2015) (reviewing Tara E. Levens, Too Fast, Too Frequent? High-Frequency Trading and Securities Class Actions, 82 U. Chi. L. Rev. 1511 (2015)).