Can We Talk Money?

One subject that almost never gets attention in major law-review articles is the attorney’s fee. Fees are the underbelly of the law, the bane of theory, the antithesis of high-minded and selfless lawyering, the grubby acknowledgement that lawyers need to eat — and that sometimes they eat very well, indeed. Of course, fees are also what make the legal world go ’round. Among their other effects, fees drive decisions about access to justice: if the lawyer cannot get paid, the lawyer is unlikely to pursue a claim. When a lawyer brings a claim, concerns about fees can affect the lawyer’s decisions about whether and when to settle, and which claims to file or abandon. In particular, the contingency fee is an especially critical component in ensuring both access and law enforcement in a legal system that operates without effective legal aid in civil cases but relies heavily on private enforcement of rights (i.e., the American legal system).

Frank discussions about “the critical role that profit, capital, and risk … play in setting the terms of justice” are, as Tyler Hill points out in his impressive student note, few and far between. The conversation is perhaps most advanced in the field of aggregate litigation. The picture that legal ethicists and law-and-economics scholars often paint is not a pretty one. The divergence between the interests of a group of plaintiffs and the lawyer who represents them can be great. The fear — borne out more by a few anecdotes of near-mythic proportion than by hard empirical evidence — is that lawyers will collude with defendants and sell out the interests of a class in return for a fat fee. Even without collusion, however, the lawyer is usually the largest stakeholder in class-action or other aggregate litigation; to believe that lawyers’ concerns over the collectability and size of their contingency fee have no impact on lawyers’ conduct during litigation is to expect that lawyers possess a level of virtue that even Diogenes would have found admirable.

The attempt to avoid this “agency cost” — this pursuit of the agent’s (the lawyer’s) self-interest over the interest of the principal (the represented group) — has shaped aggregation doctrine. It explains, for instance, the requirements that the claims of class representatives and members be common and typical and that class members’ claims be adequately represented at all times. It has affected the law surrounding courts’ awards of attorneys’ fees to lawyers who obtain recovery for the class. And it has affected the big-picture storyline about the value of aggregate litigation. The perceived horror of lawyers unhinged from their clients and running amok served, for example, as a foundational premise for the jurisdictional changes in the Class Action Fairness Act, as well as for recent Supreme Court decisions reining in the breadth of Rule 23 and barring most class arbitration.

Of course, counteracting this storyline is another one: that class-action and other aggregate litigation performs two critical tasks. The first is to compensate victims, especially those who would be unable to afford to bring suit on an individual basis because the costs of doing so are so high that they would eat up most or all of an individual recovery. Pooling cases achieves economies of scale that make litigation worthwhile. The second is to ensure adequate deterrence. Without a realistic threat of litigation and with limited regulatory oversight, wrongdoers have an incentive to cheat large numbers of people out of small amounts of money. Aggregating claims creates the necessary threat and evens up the incentives of victims and wrongdoers to invest in the litigation.

Hill’s note starts from this latter story: that class actions perform important compensatory and regulatory functions and should therefore be encouraged. But present fee structures, he points out, limit the capacity of class actions to achieve their promise. Hill’s beginning point, however, is not the usual agency-cost tale. Instead, he shows how the typical fee arrangement (a contingency fee) creates incentives for plaintiffs’ lawyers to select or deselect certain types of class actions. The contingency fee is paid out at the end of the litigation, often after years of struggle. A lawyer contemplating taking on such a case must, therefore, consider not only the size of the ultimate fee and the risk of non-recovery, but also the capital that the lawyer must invest to achieve this fee (i.e., the forsaken hourly fees that hypothetically could have been earned on other legal work) and the cost of that capital (the relevant interest rate).1 Only when the expected fee from class-action litigation exceeds the time-value of the capital that the lawyer invests — in other words, when the lawyer can expect to earn a profit — will the lawyer take on the class’s representation. But at the time that the lawyer must make this decision, many variables are uncertain — not the least of which is how large a fee the court will ultimately award to the lawyer if the class action is successful. As a result, Hill argues, lawyers naturally gravitate to clear winners, which have a more certain chance of fee recovery. This behavior leaves victims with viable but risky cases without legal representation and drives up the benchmark for fees in future cases —consequences that in turn limit the capacity of victims to obtain compensation (and of wrongdoers to be deterred).

Hill’s theoretically elegant solution is to permit lawyers to seek out lenders to invest in the litigation in return for all (or a portion) of the lawyer’s fee. The mechanism for raising this capital is an auction, in which the investor with the lowest bid wins. The winning bidder is responsible for paying the lawyer’s hourly fees and expenses, and then deducts from the proceeds of the class settlement or judgment the amount called for in the bid (including the cost of capital). As an example, Hill describes a case with an expected value of $30 million with recovery expected after two years of litigation. The winning bidder takes a half-interest in the fee, which is estimated to be $4 million. The investor wants a 12% return on the capital to account for the cost of money and the risk of non-recovery. Therefore, the investor would receive $2.5 million at the successful conclusion of the case two years later (the half-fee of $2 million, as increased by two years of compound 12% interest).
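A quick back-of-the-envelope check of that figure (my own arithmetic, not part of Hill’s note): the half-fee of $2 million, compounded at 12% annually for two years, comes out to roughly $2.5 million.

\[
\$2{,}000{,}000 \times (1.12)^2 = \$2{,}000{,}000 \times 1.2544 = \$2{,}508{,}800 \approx \$2.5\ \text{million}.
\]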

Using such a market solution, Hill argues, ensures that lawyers receive the market value of their services and limits the lawyer’s risk to a level that the lawyer finds comfortable. The judge’s task in setting the fee becomes simpler: approving the basic investment arrangement in advance and then checking its fairness (and making necessary adjustments) if a class award results. Most important, lawyers will have an incentive to take on viable-but-risky class litigation, thus advancing compensation and deterrence goals.

This proposal, which Hill spells out in detail, is a cousin of other class-auction proposals, the most famous of which is the proposal by Jonathan Macey and Geoffrey Miller to auction the class’s claims, distribute the proceeds to the class, and allow the winning bidder to pursue the wrongdoer. As Hill points out, these other auction ideas could be used in tandem with his, but his proposal — to auction off just the class counsel’s fee — is unique and stands on its own two feet. Hill defends the proposal against various objections, the most obvious of which is that the investor, as the lawyer’s quartermaster, will now control the class litigation — thus further entrenching the agency-cost problem. As the note points out, however, the agency-cost problem already exists, and substituting the return-hungry investor for the fee-hungry lawyer as the focal point of the problem does nothing to exacerbate it, while solving certain other difficulties. True enough, although turning class counsel into an hourly-fee lawyer creates a new type of agency cost; the self-interested desire of the hourly-fee lawyer to overwork a case is well-known, and it will be costly for the investor to monitor class counsel closely enough to prevent overbilling. The new layer of the investor would also further insulate the lawyer from the interests of the class. And Hill’s proposal encounters many of the same defects that plagued the courts’ now-defunct experiment with auctioning the position of lead counsel in securities class actions, such as variation in bids that made it hard to compare the apples of one bid to the oranges of another.

Whatever its potential flaws, Hill’s note represents another in a series of recent proposals that have crafted creative solutions to overcome some of the seemingly intractable problems of class and aggregate representation.2 A very few cases have made tentative nods in the direction of these proposals, but the emphasis is on very few.3 Some of these solutions deserve a chance to prove themselves in the marketplace. That, however, requires a more adventurous spirit on the part of judges and lawyers than seems possible in this time of class-action retrenchment. The negative image of the class action — as a device to browbeat upstanding defendants into blackmail settlements that provide no benefit to class members and serve to enrich only the lawyers who bring the action — still holds sway.

Is it possible to change this image? On one point, Hill is surely right. Class actions are sometimes necessary to provide deterrence against broad-based wrongdoing and to deliver a modicum of compensation to those harmed. Crafting a rule that ensures fair compensation for class counsel is a central — perhaps the central — task necessary to deliver on the class action’s promise.4 Until we face this reality and design a fee structure that shapes and aligns the incentives of class counsel with those of the class, the negative stereotype of class actions will prevail.

We have the means to improve class actions and to reduce their negative side effects. And Hill’s note shows that we have the ideas. We need only the will.



  1. Hill includes recoverable expenses, in addition to fees, as part of the value of the capital. For simplicity of description, I omitted consideration of expenses in the text. []
  2. These include proposals in the American Law Institute’s Principles of Aggregate Litigation, and in articles by Luke McCloud & David Rosenberg and by Geoffrey Miller. I have tossed in a few wacky proposals of my own (here and here), one of which Hill kindly addresses in his note. []
  3. See Forsythe v. ESC Fund Mgmt. Co., C.A. No. 1091–VCL, 2013 WL 458373 (Del. Ch. Feb. 6, 2013) (entertaining but ultimately rejecting a proposal from objectors and their third-party financiers to pay the class members the agreed-on (but allegedly inadequate) settlement amount in return for the right to continue the litigation against the defendant).  The classic example, albeit shot down by a unanimous Supreme Court in Wal-Mart Stores, Inc. v. Dukes, is Hilao v. Estate of Marcos, 103 F.3d 767 (9th Cir. 1996) (approving the use of trial by statistics). []
  4. I have always been persuaded that the fee structure proposed many years ago by Kevin Clermont and his student John Currivan came the closest to achieving this goal. They proposed a contingency fee that relies on a combination of an hourly rate and a percentage of the recovery. This would be an ex post award. Hill’s ex ante attempt to set the market rate for attorney compensation also has great merit. Whether the two ideas could be combined is a matter worthy of consideration. []
Cite as: Jay Tidmarsh, Can We Talk Money?, JOTWELL (January 19, 2016) (reviewing Tyler W. Hill, Note, Financing the Class: Strengthening the Class Action Through Third-Party Investment, 125 Yale L.J. 484 (2015)), http://courtslaw.jotwell.com/can-we-talk-money/.
 
 

Anti-Plaintiff Bias in the New Federal Rules of Civil Procedure

Patricia W. Hatamyar Moore, The Anti-Plaintiff Pending Amendments to the Federal Rules of Civil Procedure and the Pro-Defendant Composition of the Federal Rulemaking Committees, 83 U. Cin. L. Rev. 1083 (2015), available at SSRN.

On December 1, 2015, several major amendments to the Federal Rules of Civil Procedure took effect. Some of these changes might, at first glance, seem dry and technical, such as shortening the time to serve process. Other changes, such as the addition of a so-called “proportionality” standard to the scope of discovery, have been the subject of heated debate in the months since the changes were proposed.

While it might be tempting to dismiss all but the most controversial amendments as nothing more than footnotes in a new casebook, each of these amendments is part and parcel of anti-plaintiff trends in procedural rulemaking. Patricia Moore’s article should be required reading for any professor preparing to teach the new rules, because it combines a clear and practical outline of each of the rule changes with an incisive critique of the substance of the changes and the process by which they were promulgated.

The first part of the article details each amendment, explaining how each rule has changed and the impetus for the revision. Her writing provides more than a glorified “redlining” of the old and new texts. Her analysis includes examples of how the old rules worked in practice, and how the amendments might change the litigation landscape. She concludes that, with only one exception (Rule 34), each amendment exposes clear anti-plaintiff bias and will likely generate anti-plaintiff results. She also cites to the record of committee discussions and testimony that point to some uncomfortable conflicts of interest among committee members and the members of the defendants’ bar urging these changes. Moore is methodical in considering each amendment in turn, but also groups them together in three larger categories that give a sense of the ideological motivations of the rulemakers.

Having documented the amendments, Moore turns to two broad critiques of the process. The first takes aim at the committee’s claim that the amendments are supported by empirical evidence. This was a powerful assertion, as the existence of empirical evidence suggested that the changes were driven by objective data rather than the subjective ideological preferences of committee members. Moore demonstrates that not all data are created equal. The data that peppered the committee deliberations and reports consisted primarily of opinion surveys. In other words, the “empirical” evidence mounted by the committee was little more than an objective representation of essentially subjective viewpoints. Beyond critiquing the committee’s own data, Moore collects data and studies that do not support the committee’s positions, evidence that the committee all but ignored.

The second critique of the rulemaking process focuses on the ideological make-up of the committee and the Duke conference that was the springboard for the current round of changes. She demonstrates that, while plaintiffs’ voices were not completely absent, their position was underrepresented on the committee and poorly represented at the conference and hearings on the proposed changes. Along with Suja Thomas’s Op-Ed criticizing the Duke conference for allowing corporate interests to more or less dictate the interpretation and implementation of these rules, Moore’s article provides a much-needed rejoinder to any academic or practitioner inclined to view the rules and their authors as boring, technical, and disconnected from ideology.

Moore’s article represents the best of practical academic scholarship. It is an article that one can turn to in order to actually learn something about rules and doctrine, while it also provides a theoretical framework for the subject and a normative critique of the rules that it explains. I expect it will be in my catalogue of “go-to” articles for a number of years, both for teaching and research purposes.

Cite as: Robin Effron, Anti-Plaintiff Bias in the New Federal Rules of Civil Procedure, JOTWELL (January 5, 2016) (reviewing Patricia W. Hatamyar Moore, The Anti-Plaintiff Pending Amendments to the Federal Rules of Civil Procedure and the Pro-Defendant Composition of the Federal Rulemaking Committees, 83 U. Cin. L. Rev. 1083 (2015), available at SSRN), http://courtslaw.jotwell.com/anti-plaintiff-bias-in-the-new-federal-rules-of-civil-procedure/.
 
 

A Fresh Look at Qualified Immunity

Aaron Nielson & Christopher J. Walker, The New Qualified Immunity, 87 S. Cal. L. Rev. (forthcoming 2015), available at SSRN.

Qualified immunity—the doctrine that prescribes whether government officials alleged to have committed constitutional violations should be immune from suit—has traveled a winding path. It asks two questions: whether a constitutional violation was actually committed, and whether the constitutional right in question was clearly established at the time of the violation. If the answer to either or both questions is “no,” then the government official is entitled to qualified immunity and the suit against her is dismissed.

Over the past two decades, the question of whether and in what order courts should decide these two questions has preoccupied the Supreme Court. The Court indicated in Wilson v. Layne (1999) that it generally was better for courts to resolve the constitutional merits question first, and then held in Saucier v. Katz (2001) that courts were required to do so. Its reasoning, in both instances, was that courts must articulate constitutional law in order to guide the conduct of government officials in the future. Just eight years later in Pearson v. Callahan, however, the Court shifted course, holding that deciding the constitutional merits question was discretionary, not mandatory.

In The New Qualified Immunity, Aaron Nielson and Chris Walker explore what has actually happened since Pearson by surveying both published and unpublished decisions in the federal appellate and district courts. Their work is a painstaking effort to examine how Pearson is playing out on the ground, and the result is a wealth of important data that provide critical insight into the development of constitutional rights. The article should stand as a seminal contribution to the post-Pearson literature—indeed, to the qualified immunity literature in general. It’s the place where all those interested in evaluating qualified immunity should begin in the future.

Nielson and Walker begin with an admirably detailed survey of the development of qualified immunity doctrine. They also survey the empirical literature on qualified immunity, including my own (now somewhat dated) pre-Pearson contribution.

They then examine how Pearson has affected judicial behavior. Unsurprisingly, courts reach the constitutional merits question less frequently after Pearson—as Nielson and Walker are correct to note, it would be very surprising if they did not. And the rate at which courts decline to decide the constitutional question has returned to roughly the pre-Saucier rate of about one case in four, compared with less than 6% during the Saucier period. Yet among the cases where courts do reach the constitutional question, Nielson and Walker present an intriguing and, for some of us, troubling finding. They explain: “Courts…appear to find constitutional violations yet grant qualified immunity less frequently now…than they did before Pearson.” (p. 5.)

In other words, courts are choosing to skip the merits more frequently now, but when they do decide the merits, they are less likely to find a constitutional violation. The first finding is unsurprising; the second is surprising and perhaps troubling. Admittedly, it is difficult to attribute the latter behavior to Pearson with certainty, which the authors appropriately acknowledge. There might be other factors at play. For example, perhaps judges are less likely overall to recognize an expansive view of constitutional rights; thus, fewer cases articulate an expansive view of constitutional rights not as a result of Pearson itself, but as a result of a broader trend among federal judges. I would be interested to hear the authors’ thoughts on alternative explanations for the judicial behavior they have observed. Perhaps a project for future work (by Nielson and Walker or anyone else) might eliminate some of these alternative explanations or determine the degree to which they contribute to the overall trend.

Nielson and Walker provide another interesting contribution to the empirical qualified immunity literature by examining disparities in the way that different circuits apply Pearson. For example, the Fifth Circuit chooses to reach constitutional questions 57.6% of the time, while the Ninth Circuit does so only 37% of the time. Another difference lies in the way that the circuits decide cases when they do choose to reach constitutional merits: the Ninth Circuit finds constitutional violations 16.4% of the time, while the Fifth Circuit does so only 1.3% of the time and the Sixth Circuit only 0.8% of the time. As Nielson and Walker observe, “these circuit-by-circuit disparities may reveal a geographic distortion in the development of constitutional law,” such that “one could reasonably fear that constitutional law may develop quite differently in the various circuits.” (p. 36.) While the numbers are small, the finding is sufficiently interesting—and perhaps sufficiently troubling—to warrant further examination by researchers. (This would be an interesting and feasible project for a student note.)

Many excellent law review articles falter in their prescriptions, but one of the strengths of Nielson and Walker’s work lies in their proposal for what we should do. One problem after Pearson is that the Supreme Court has failed to provide guidance for when courts should decide the constitutional merits. Nielson and Walker offer a neat solution, borrowed from administrative law. They propose: “the Court should require lower courts—both trial and appellate courts—to give reasons for exercising (or not) their Pearson discretion to reach constitutional questions.” (p. 46.) This proposal has a number of merits: it has already been well developed in administrative law; a number of scholars have previously argued that we should incorporate reason-giving into an array of civil procedure contexts; the act of giving reasons for a decision has, in itself, been shown to improve a decision; and it offers guidance to other courts about when a decision may or may not be appropriate. Over time, if the Supreme Court becomes concerned about why lower courts are deciding (or not deciding) constitutional questions, it can elaborate on what are and are not appropriate reasons.

The New Qualified Immunity has considerable significance for the bench and bar. It is a prime example of how legal scholarship may simultaneously be of great use to legal scholars, judges, and practitioners—there is no conflict among the various audiences for such a piece. The article is a wake-up call to judges to examine their own behavior and think about why they are choosing to decide or skip constitutional questions. Particularly in light of inter-circuit disparities, whether to decide the constitutional merits is not a foregone conclusion, and judges would do well to consider how their own behavior measures up against national norms. For attorneys, it is useful to know whether one practices in a circuit where judges tend to decide or deflect the merits question, and potentially valuable to be able to call that information to judges’ attention, whether as a call to reach the merits or as a caution to judges against stepping too far out of line with their colleagues.

In short, The New Qualified Immunity is a gift, beautifully packaged, for those of us who write about constitutional litigation. It provides an adept summary of what has come before it. It adds a valuable empirical contribution with enough data to play with for months. And it offers a plausible prescription for improving judicial decisionmaking and, in turn, the law itself. If and when the Supreme Court refines Pearson, I look forward to more fine analysis from Nielson and Walker.

Cite as: Nancy Leong, A Fresh Look at Qualified Immunity, JOTWELL (December 3, 2015) (reviewing Aaron Nielson & Christopher J. Walker, The New Qualified Immunity, 87 S. Cal. L. Rev. (forthcoming 2015), available at SSRN), http://courtslaw.jotwell.com/a-fresh-look-at-qualified-immunity/.
 
 

Personal Jurisdiction Based on Intangible Harm

Alan M. Trammell & Derek E. Bambauer, Personal Jurisdiction and the “Interwebs,” 100 Cornell L. Rev. 1129 (2015).

Conduct channeled through cyberspace can cause harm in physical space. That leakage across a conceptually amorphous border has befuddled courts attempting to adapt personal jurisdiction doctrine to the Internet. At least two distinct problems have combined to produce an inconsistent and unstable jurisprudence. First, the Internet is a buffer between the defendant and the forum. This technological intermediary diffuses the defendant’s geographic reach, complicating analysis of the defendant’s contacts and purpose. Second, activity on the Internet often leads to intangible harm, such as a sullied reputation or devalued trademark. These intangible injuries can manifest in places that are difficult to predict ex ante and to identify ex post.

Accordingly, the Internet creates spatial indeterminacy in a legal context that reifies geographic boundaries. Many courts have reacted by trying to tame complexity with an ostensibly elegant tripartite framework for analyzing jurisdiction. The “Zippo test”—named after an influential yet often-criticized district court decision—posits that jurisdiction based on Internet contacts depends on pigeonholing websites into categories. A “passive” website that merely provides content is a weak basis for jurisdiction, while jurisdiction usually exists over websites that are commercial platforms for repeated transmission of files. Between these extremes are “interactive” sites that require a context-sensitive inquiry into the nature of the interactions.

Alan Trammell and Derek Bambauer’s recent article Personal Jurisdiction and the “Interwebs” eviscerates the Zippo test and similarly stilted efforts to apply personal jurisdiction doctrine to the Internet. Trammell and Bambauer focus on two pathologies that have undermined judicial reactions to suits arising from Internet activity: the tendency of novel technology to “bedazzle[] and bewitch[]” observers, and an emphasis on spatializing virtual conduct rather than addressing the broader problem raised by activity that causes intangible injuries. The result is a jurisdictional inquiry that appears “beautifully simple,” yet is both “superficial” and “indeterminate.”

The article addresses the first pathology by contending that Zippo allows complex technology to obscure the underlying purpose of constraints on personal jurisdiction. The Internet seems unique because it streamlines the transfer of information and facilitates new forms of interaction. Zippo’s tripartite framework reacts to this apparent novelty by fixating on the transmission of files and interaction between Internet users, thus appearing to adapt old doctrine to a new context. But as Trammell and Bambauer explain, the test is superfluous, misleading, and arbitrary. In cases involving extensive commercial activity over the Internet, Zippo is superfluous because prior doctrine addressing “purposeful availment” could adapt to commerce through novel technological means. In cases involving noncommercial activity, Zippo is misleading because it implies that Internet activity is often insufficient to warrant jurisdiction despite the fact that pre-Internet caselaw upheld jurisdiction in many noncommercial disputes. And in both commercial and noncommercial cases, an extensive inquiry into a website’s “interactivity” produces an arbitrary result unmoored from values that animate constitutional limits on state authority.

The Zippo test survives not because it is sensible, but because it provides a “false hope” of rigor for judges seeking to navigate a confusing technological landscape. Indeed, the authors make the interesting observation that the first federal court of appeals to reject Zippo—the Ninth Circuit, “the court with jurisdiction over Silicon Valley”—was likely the circuit with the least anxiety about confronting technological innovations.

The article addresses the second pathology by contending that courts mistakenly focus on aspects of the Internet that are unique rather than on traits the Internet shares with other technologies. Courts analyzing jurisdiction in Internet cases devote inordinate effort to considering where conduct occurs. Trammell and Bambauer argue that this is a fruitless exercise because the Internet diffuses activity across geographic borders. Conduct clearly occurs at the location where a person creates content or files disseminated through the Internet, but identifying other locations as salient to jurisdiction seems arbitrary. A natural response to their argument is that one particular location is not arbitrary: the place where an injury occurs. But identifying the place of injury is difficult when the harm is intangible. When the injury is intangible, Internet cases are similar to non-Internet cases. For example, regardless of the technology used to defame a person or infringe a trademark, identifying the locus of a person’s reputation or intellectual property requires a theory of how intangible interests map onto physical space. Accordingly, the authors argue that Internet cases are difficult not because the Internet uniquely obscures the location of conduct, but because the Internet is the latest technology to raise the vexing question of where intangible injuries occur.

Having shifted the focus from the location of Internet activity to the location of intangible injuries, Trammell and Bambauer propose a new test. The test relies on what they identify as three “first principles” of personal jurisdiction doctrine: the exercise of state power should not be arbitrary, jurisdiction should be predictable, and the forum should be fair for the defendant. The authors also contend that jurisdictional rules should be “efficient.” From these principles, the authors derive a rule: “Internet-based contacts should rarely, if ever, suffice for personal jurisdiction.” For example, jurisdiction would not exist in the plaintiff’s home state based merely on the local availability of a website infringing a trademark. In contrast, if a seller uses the Internet to facilitate sales of tangible objects to a buyer in the forum, jurisdiction would exist because the physical delivery of goods to the forum would be a relevant contact even if the web-based sales platform is not.

The article makes an important contribution to the literature by pinpointing why the Internet raises difficult personal jurisdiction problems. Courts and commentators have struggled with Internet cases in part because the Internet is often a red herring. When a case involves physical injuries in the forum, the fact that the Internet facilitated the conduct leading to those injuries may be irrelevant because doctrine pre-dating the Internet is available to assess the nexus between conduct and physical harm. In contrast, when a case involves intangible harm, the difficult question is: where did the injury occur? If the harm cannot plausibly be localized, then the fact that the case involves Internet contacts highlights the defendant’s tenuous contact with the forum. In this scenario the use of the Internet does not create a new problem, but rather places an old problem into starker relief. Academic and judicial attention should therefore focus on the older problem by considering how personal jurisdiction doctrine should apply in cases involving intangible harm. That inquiry can, in turn, provide insights that make the newer problem about Internet contacts less confusing.

The authors’ proposals are carefully reasoned, but there is room for debate because the article’s rejection of jurisdiction based on Internet contacts rests on three contestable conclusions. First, the article assumes that localizing intangible harm is difficult. Yet one can imagine arguments that particular types of intangible harms are experienced most acutely in the place where a victim resides or is domiciled, as are many tangible harms. If so, then a distinction between tangible and intangible injuries should not be the basis for a blanket rule deemphasizing Internet-based contacts. Second, if an intangible harm can be localized, then jurisdiction would often be appropriate under precedent considering whether the defendant “aimed” at and caused “effects” in the forum. The authors briefly recommend overruling the effects test, which means that their critique of Internet-based jurisdiction partially rests on the viability of a broader critique of modern personal jurisdiction jurisprudence.

Finally, some theories of personal jurisdiction (including mine) do not emphasize predictability and efficiency as heavily as this article does, instead placing greater weight on the forum state’s interest in facilitating local adjudication. For example, the article suggests that if a hacker intentionally copies private data from a server in the forum, jurisdiction would not be appropriate because hackers are often “indifferent” to or may not know the server’s location. However, an alternative theory would posit that if a person intentionally hacks into servers without knowing or caring where they are located, he assumes the risk of being sued in the state where injury occurs. (For a non-Internet version of the assumption of risk scenario, imagine that the owner of a small pharmaceutical company sneaks into a competitor’s plant and intentionally adds poison to a bottle of cough syrup with the intent of killing a consumer, but without knowing or caring where the bottle will be sold. Should the poisoner’s geographic indifference immunize him from jurisdiction in the state where the victim purchases and consumes the poison?)

Trammell and Bambauer have developed a thoughtful critique of how current personal jurisdiction doctrine addresses the Internet. Further scholarship will benefit from their distinction between the location of Internet activity and the location of its intangible consequences.

Cite as: Allan Erbsen, Personal Jurisdiction Based on Intangible Harm, JOTWELL (November 16, 2015) (reviewing Alan M. Trammell & Derek E. Bambauer, Personal Jurisdiction and the “Interwebs,” 100 Cornell L. Rev. 1129 (2015)), http://courtslaw.jotwell.com/personal-jurisdiction-based-on-intangible-harm/.
 
 

Making Sense of Plurality Decisions

Ryan C. Williams, Questioning Marks: Plurality Decisions and Precedential Constraint (forthcoming).

In Questioning Marks, Ryan Williams tackles a piece of Supreme Court doctrine that many dismiss with the back of their hand: how to make precedential sense of the Court’s plurality opinions. Oh sure, we all begin with the statement in Marks v. United States that lower courts should ascribe precedential weight to the “holding” of the case, understood as “that position taken by those Members who concurred in the judgments on the narrowest grounds.” But that formulation obscures any number of difficulties. How is a lower court to identify the narrowest grounds when the Justices who concurred in the judgment offered separate rationales that fail to give clear guidance for future cases?

Williams first shows that lower courts have taken a range of different approaches to the problem of identifying the narrowest grounds. Some look for an implicit consensus among the five (or more) concurring Justices; others give pride of place to the notion that the Justice casting the fifth vote must have played a decisive role in the outcome and so treat the opinion accompanying that swing vote as controlling. Still others adopt an issue-by-issue approach, looking for the alignment of Justices who expressed agreement with a particular proposition that may be relevant in future litigation. Somewhat controversially, this issue-by-issue approach may also consider the views of dissenting Justices, a group seemingly omitted from the Marks reference to the members concurring in the judgment.

The way lower courts approach these matters may reflect their conception of the nature of a hierarchical judiciary and of their obligations as lower courts. For courts inclined to predict outcomes at the Supreme Court, a tendency to emphasize the fifth vote seems natural: that was the vote needed to nail down the judgment. Others with a bent towards prediction happily consider dissenting views, knowing as they do that the dissenters will likely weigh in on any future question along the lines they have articulated in earlier opinions.

But both approaches can produce real anomalies. Williams tells one dispiriting tale of the lower court reaction to Shady Grove Orthopedic Assocs. v. Allstate Insurance Co. There, as our gentle readers will recall, the Court divided on whether to apply Federal Rule of Civil Procedure 23 (and to displace the New York state prohibition on the aggregation of certain claims) or to defer to state law in a diversity case. A four-Justice plurality, led by Justice Scalia, held that Rule 23 applied and was valid under the Rules Enabling Act test articulated in Sibbach v. Wilson & Co. A four-Justice dissent would have viewed Rule 23 as inapplicable, deferring to state law for a complex set of reasons reminiscent of those offered in Gasperini v. Center for Humanities. Justice Stevens cast the fifth and deciding vote, agreeing with Justice Scalia in part but arguing that Sibbach had been misread to uphold all “arguably procedural” rules. Only Justice Stevens gave voice to his limited conception of Sibbach. We do not know how widely shared his views were; we only know that he was alone in expressing them.

Yet the lower courts have seemingly given effect to Justice Stevens’ opinion on the theory that his was the fifth and deciding vote. This seems particularly wrongheaded, at least as to Justice Stevens’ views about Sibbach. While he may be right, he certainly did not speak for five Justices on that subject. So it is a bit dismaying to learn that his views have taken hold. Even more troubling, according to Williams, lower court decisions do not explore the issues, opting instead for a rather wooden invocation of the Stevens view as controlling by virtue of being the fifth vote.

Williams would solve the Marks problem by calling for a “shared agreement” approach, in which lower courts give precedential effect only to those matters on which a five-Justice majority reached a shared agreement. That approach might, for example, justify the lower courts in extending the Court’s fractured decision that citizens of the District of Columbia are properly regarded as citizens of a state for diversity purposes. While no rationale gained a majority, five Justices did agree on a result that might well apply to citizens of other territories (such as Puerto Rico), as the lower courts later held. But it certainly would not give effect to Justice Stevens’ lone view in Shady Grove.

I found much to like in the paper: a strong command of the cases, a rich theoretical framework in which to evaluate the issues at hand, and a calm and authoritative authorial voice that lets the reader know she is in good hands. I was especially pleased that Williams chose to tackle the problem because it seems most unlikely that the Supreme Court will provide further guidance. The Justices seem far more likely to address a particular lower court disagreement than to nail down a methodological approach to past plurality opinions that might ramify far beyond the particular case, unsettling some bodies of law and producing outcomes that current Justices can neither predict nor endorse. A tip of the hat to Williams for providing a solution that commends itself to courts and theoreticians alike.

Cite as: James E. Pfander, Making Sense of Plurality Decisions, JOTWELL (November 2, 2015) (reviewing Ryan C. Williams, Questioning Marks: Plurality Decisions and Precedential Constraint (forthcoming)), http://courtslaw.jotwell.com/making-sense-of-plurality-decisions/.
 
 

Class Action Mismatch: Securities Class Action Jurisprudence and High-Frequency Trading Manipulation

Tara E. Levens, Too Fast, Too Frequent? High-Frequency Trading and Securities Class Actions, 82 U. Chi. L. Rev. 1511 (2015).

For faculty members with retirement savings in TIAA-CREF or brokerage accounts, market events of summer 2015 might prompt the conclusion that August is the cruelest month of all. Along with millions of other small investors, academics throughout the United States could only watch helplessly as volatile markets took shareholders on a daily roller-coaster ride resulting in devalued accounts.

In the wake of the 2008 market crash, small investors have become increasingly educated about the structural and institutional drivers of extreme market volatility: automatic, computerized trading techniques over which the small, individual stakeholder has little knowledge or control. Most prominent among these market innovations has been the advent of computerized, high-frequency trading (HFT), driven by mathematical algorithms.

In her thoughtful and innovative comment, Too Fast, Too Frequent? High-Frequency Trading and Securities Class Actions, Tara E. Levens explores the interesting question whether the prevalence of HFT techniques resulting in massive financial losses to small-stake investors will open the door to new securities class actions. Her general conclusion is that current legal theories undergirding various types of securities law violations are mismatched with the harms induced by HFT. Consequently, Levens attempts to formulate a jurisprudence for new securities class actions based on the unique injuries resulting from HFT manipulation. In essence, Levens’ task is a riff on the theme of fitting new wine into old bottles.

Levens first describes the types of investor harms addressed under current securities laws, most notably liability for fraudulent misrepresentation under § 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5. She suggests that the harms induced by HFT are a poor fit for conventional securities fraud claims. Instead, she pivots to theories of open-market manipulation, which she believes better capture the factual basis for seeking relief.

She notes that plaintiffs may bring claims of open-market manipulation under § 10(b), although “such claims have received ‘curiously little attention’ from plaintiffs, prosecutors, and the courts.” (Pp. 1514–15.) She further suggests that plaintiffs might bring claims of open-market manipulation under § 9 of the Act, but such actions require a showing of specific intent. Because of the difficulty in pursuing relief under § 9, Levens indicates that plaintiffs and prosecutors rarely rely on this provision when bringing manipulation proceedings.

To provide context for her recommendations, Levens analyzes developments in securities class litigation, focusing on the Supreme Court’s elaboration of the fraud-on-the-market presumption that relieves plaintiffs of the necessity to show individual reliance in fraud cases. She suggests that the Court’s 2014 Halliburton decision changed the landscape of securities-fraud class litigation by enhancing the role of expert witness “impact studies” used to demonstrate the effect of an alleged fraud or misrepresentation on a stock’s price, which may determine whether the fraud-on-the-market presumption applies. However, she refrains from concluding whether the increased use of impact studies will benefit either plaintiffs or defendants, or result in more or fewer class certification approvals.

Against this doctrinal backdrop, Levens discusses in great technical detail what constitutes high-frequency trading, a subset of algorithmic trading. She explains two types of HFT: market-making activities and more aggressive strategies such as statistical arbitrage. Levens’ article provides an intelligible, accessible account of HFT for less-knowledgeable readers. She concludes by surveying the heated debate over the effects of high-frequency trading on market efficiency.

Levens highlights exactly how novel the problem of HFT is on the legal landscape. She notes that the SEC has yet to promulgate formal rules or regulations relating to HFT. According to Levens, the SEC increased its enforcement efforts after the Flash Crash of May 2010, but studies are inconclusive whether HFT or other factors triggered that market collapse. The SEC brought its first market manipulation case against an HFT firm only in October 2014. That action was pursued under Rule 10b-5 and the alleged perpetrator agreed to pay a fine and to cease and desist from further violations of the securities laws.

Levens believes that the spread of HFT and consequent market collapses have set the stage for a resurgence of the open-market manipulation theory. She suggests that plaintiffs who wish to bring claims against HFT firms might succeed by combining various theories of open-market manipulation with the fraud-on-the-market presumption; this hybrid strategy allows plaintiffs to avoid the more stringent intent requirements of § 9, while also availing themselves of the liberal fraud-on-the-market presumption to avoid potentially difficult reliance issues. Levens notes that the fraud-on-the-market presumption generally has not been available to plaintiffs alleging market manipulation claims, but she contends that in some situations courts have held otherwise.

Finally, Levens addresses whether high-frequency traders ought to have a private right of action to redress their own injuries, something no commentator has addressed. While noting that traders do not represent the most sympathetic group of claimants, she indicates that traders also may suffer losses from HFT. Analyzing this problem, Levens concludes that HFT traders most likely will have a very difficult time satisfying the requirements for certifying a class action under Fed. R. Civ. P. 23, showing loss causation, or proving reliance.

As Levens correctly points out, HFT issues are likely to continue to surface in litigation, presenting litigants and courts with an array of novel legal problems. She concludes that “regardless of whether high-frequency traders come to court as plaintiffs or defendants, the advent of HFT marks a changed circumstance that the securities-litigation bar will have to wrestle with in the near future.” (P. 1557.)

Levens, the incoming Editor-in-Chief of the University of Chicago Law Review, has produced an impressively sophisticated piece. She has identified a set of emerging legal issues and grappled with existing doctrine as applied to new problems. Even if her hybrid approach proves unsound, she is to be commended for undertaking such an ambitious, challenging topic and, in the best tradition of young scholarship, thinking outside the box.

Cite as: Linda Mullenix, Class Action Mismatch: Securities Class Action Jurisprudence and High-Frequency Trading Manipulation, JOTWELL (October 19, 2015) (reviewing Tara E. Levens, Too Fast, Too Frequent? High-Frequency Trading and Securities Class Actions, 82 U. Chi. L. Rev. 1511 (2015)), http://courtslaw.jotwell.com/class-action-mismatch-securities-class-action-jurisprudence-and-high-frequency-trading-manipulation/.
 
 

Controversial Supreme Court Appointments – A Blockbuster in the Foreign Films Category?

Often we like scholarship lots because it reflects new or interesting perspectives on familiar subjects. Sometimes, though, the story itself is so thought-provoking that a good telling is all that is needed to make the article worth commending to Courts Law readers.

Such is the case with Hugo Cyr’s article, which chronicles the highly charged engagement between the Supreme Court of Canada and the Canadian Government (the Executive, comprised of members of the ruling political party) over the fundamental requirements for their respective legitimacy. Everyone seems to agree that the incidents recounted were “unfortunate” in that they provoked strong expressions of differences in what has historically been regarded as a relationship to be managed tactfully. Yet the events exposed many intriguing issues about how best to conduct this critical relationship to promote the continuity and flexibility needed to serve well the interests of the public.

It is hoped that this brief summary will, like the fast-paced trailers that preview a movie’s highlights, whet the appetites of JOTWELL readers to press “watch” in the link above and follow the entire story. But, first, a word or two of background – much like the lines that appear by way of explanation in the first scene of the movie:

Like the Supreme Court of the United States, the Supreme Court of Canada serves a range of important constitutional roles, including as arbiter of federal relations. Indeed, so trusted has it become in that role that its unusual jurisdiction to render advisory opinions on the constitutionality of proposed legislation has been invoked on a number of occasions in recent years to provide perspective on potentially controversial political initiatives.

The implications of the Court’s many sensitive roles for judicial appointments – both for who is selected and for the appointment process – will be obvious to U.S. readers. The academic discussion and political commentary on the judicial appointment process in Canada have developed more slowly than in the U.S.; they have become more prevalent with the advent and interpretation of the Charter of Rights and Freedoms, although there is not yet any formal approval process.

Still, there has long been acknowledgement of the need for judges to be representative of Canada’s juridical diversity. The Supreme Court Act makes provision for three of its nine judges to be appointed from Québec. Customarily (i.e., not by law), the remaining six judges include three from Ontario, two from the provinces to the west and one from the provinces to the east. Since the Court is a court of general appellate jurisdiction, the inclusion of three members from Québec ensures that the Court, which may sit in panels of five, seven or nine, could arrange its sittings so that a majority of the panel deciding an appeal from Québec or having important implications for Québec-federal relations would be from that province.

As a practical matter, the Court operates in a much more collegial way than this would suggest. Many of the judges are functionally bilingual; they typically include young lawyers trained in Québec among their law clerks; and from time to time it even happens that Québec judges take the lead in writing majority or unanimous judgments on issues of the common law.

With the scene set, the drama begins with the need to appoint a successor for one of the retiring Québec judges. The incumbent Government, known for emphasizing expedience over politesse, conducted a process that did little to ease concerns about Québec’s representation on the Court: it selected a member of the Federal Court of Canada (a lower federal court) who was from Québec.

Unfortunately, it is not clear that the Supreme Court Act permits this. The provision for appointing judges from Québec sits within the larger context of the provision establishing the basic requirements for eligibility for appointment. To be eligible, a person must be either a current or former member of a superior court of a province, or a lawyer who has been a member of the profession for at least ten years. The language of the provision for appointing judges from Québec appears to add a further requirement: that the appointee must be a current member of the Québec judiciary or the profession. Federal Court judges are neither. (Judges from Québec have been appointed without this qualification in the past, but not as one of the three Québec judges.)

Aware of the problem, the Government obtained advance opinions from well-respected past members of the Supreme Court and announced Marc Nadon’s appointment. In the course of the following week, he was interviewed by an ad hoc committee of parliamentarians to answer general questions about himself and his career, and he was sworn in. His appointment was challenged immediately in the Federal Court, and the Supreme Court announced that he would not participate in matters for the time being. Later that month, the Québec National Assembly adopted a unanimous motion rejecting the appointment as deplorable unilateralism depriving Québec of its guaranteed “représentation” on the Court.

These events are just the fast-paced opening scene of a fascinating story. The plot thickens as the Government attempts to sideline this opposition by introducing legislative amendments to correct the problem after the fact and by seeking an advisory opinion from the Supreme Court to bless its actions. There is considerable character development as the majority of the Supreme Court interprets its legislation as invalidating the Government’s appointment and goes on to pronounce the proposed legislation also constitutionally invalid. The drama continues as the Court offers the view that its composition and “essential features” are, or have become, entrenched and would require formal constitutional amendment to be modified. (In Canada, a constitutional amendment would be virtually impossible.)

The story’s ending necessarily remains open. But Professor Cyr astutely identifies the uneasy sense in which the Government’s bold action provoked a strong response from the Supreme Court that may have jeopardized the flexibility required to maintain good working relations between the Judiciary and the Executive. Such “cooperative federalism” has served Canada well in the past and may be much in demand to meet the challenges ahead in a changing world. Though the progress of this relationship is inevitably shaped by the specifics of Canadian law and politics, stories like this can be of larger comparative interest to those whose passion is Courts law.

Cite as: Janet Walker, Controversial Supreme Court Appointments – A Blockbuster in the Foreign Films Category?, JOTWELL (October 5, 2015) (reviewing Hugo Cyr, The Bungling of Justice Nadon’s Appointment to the Supreme Court of Canada, 67 Sup. Ct. L. Rev. 73 (2014)), http://courtslaw.jotwell.com/controversial-supreme-court-appointments-a-blockbuster-in-the-foreign-films-category/.
 
 

The Keepers of the Federal Courts Canon

Richard Fallon, John Manning, Daniel Meltzer, and David Shapiro, The Federal Courts and the Federal System (7th ed., 2015).

There are casebooks, and then there’s Hart and Wechsler’s The Federal Courts and the Federal System, the brand-new seventh edition of which arrived this summer. It may seem odd to focus so much attention on the latest edition of a casebook that has been around since before the Brooklyn Dodgers won their only World Series. But this newest iteration by Richard Fallon, John Manning, Daniel Meltzer, and David Shapiro is, for reasons I elaborate upon below, worthy of its own adoration—and should hopefully entice scholars who have long sought other teaching materials to return to the gold standard.

I

As Akhil Amar has explained, the first edition of “Hart and Wechsler,” published in 1953, “succeeded in defining the pedagogic canon of what has come to be one of the most important fields of public law in late twentieth-century America,” i.e., Federal Courts. And whereas most other legal disciplines preceded the casebooks that purported to define them, Hart and Wechsler all but created not just a curriculum for Federal Courts classes, but also a far deeper sense of why such a course was worth teaching—and taking.

In the process, Hart and Wechsler did not just define the Federal Courts canon; it also served as a bible for the then-nascent legal process school and its focus on “how substantive norms governing primary conduct shape, and are in turn shaped by, organizational structure and procedural rules.” Hart and Wechsler thus provoked generations of students and teachers alike to struggle with one of the most important questions in twentieth-century public law: why federal courts? With Brown (and the Warren Court) right around the corner, its timing could not have been better.

But there was a dark side. Modeled in part on Henry Hart’s landmark 1953 Harvard Law Review article that was itself dialectic, the book was wonderful for everything except teaching. It was maddeningly rhetorical, hyper-dense, and included far too much significant material in the footnotes and the notes after cases. It also gave incredibly short shrift to any number of vital doctrines and theories that were in tension with the views of Professors Hart and Wechsler themselves and to landmark Supreme Court decisions (Martin v. Hunter’s Lessee, most famously) that were difficult to reconcile with the book’s broader, state-court-oriented thesis. In the same space, then, Hart and Wechsler both defined the field and made it terribly difficult to teach. In a contemporaneous review of the first edition, Edward Barrett suggested that, “From his first using of the book this reviewer learned how little he really knew about the subject. After the tenth time through the book he expects still to be learning, still to be wondering what the answers are to many of the questions posed by the authors.” Barrett meant it as a compliment; generations of law students reacted somewhat less charitably.

The second and third editions, published (mostly coincidentally) at the close of the Warren and Burger Courts, respectively, brought with them remarkable substantive improvements even as the ground shifted under the Federal Courts terrain. As Amar wrote in his review of the third edition, “It is not easy to be both gracious and incisive, but the editors here pull off this combination with remarkable skill.” But that graciousness and incisiveness came at the expense of teachability. The treatise-like third edition checked in at nearly 1900 dense pages—packed with notes and footnotes to cover virtually every permutation that could arise from the issues covered in the primary cases. Hart and Wechsler had become unparalleled as a desk reference—as Chief Justice Roberts highlighted at his 2003 confirmation hearing to the D.C. Circuit—and unteachable to all but the most sado-masochistic law students. It was thus no surprise that the universe of Federal Courts casebooks began to expand at about the same time—from only a handful to over a dozen.

Perhaps because they were published more regularly (in 1996, 2003, and 2009), and perhaps because of the passing of the torch from Paul Bator and Paul Mishkin to Fallon, Meltzer, and Shapiro, the ensuing three editions paid successively more attention to teaching and teachability. Even as the canon grew to encompass novel legal questions raised by current events such as AEDPA and the government’s response to 9/11, each new edition began both (1) to shrink and (2) to replace rhetorical questions with declaratory summations of doctrinal rules. It was progress, but it was slow.

II

Against that backdrop, the seventh edition is a remarkable achievement for what it both does and does not do. As the editors explain in the Preface,

we have worked hard to make this edition as user-friendly and teachable as possible. In a number of places, we have prefaced leading cases with brief introductory notes, to explain to students how cases and materials that they are about to read fit into an emerging historical or doctrinal picture. . . . In addition, users of prior editions will notice that although our Notes continue to probe the most challenging problems that lawyers, judges, and lawmakers confront, we have reduced the number of sentences that end in question marks. Where we think we have guidance to offer, we have more frequently stated our views explicitly. Many questions remain, but few are rhetorical or repetitive.

And lest there be any doubt, the book bears out the editors’ promise. A bevy of new introductory notes helps students (to say nothing of their teachers) connect the dots from one section to the next. The editors also have trimmed still more fat (the book is now down to 1466 pages), even while adding lengthy treatments of new principal cases, such as Stern v. Marshall, and updated notes for old chestnuts, from Erie to Lincoln Mills to Sabbatino.

But what makes the seventh edition’s greatly improved teachability so remarkable is that it has not come at the expense of continuity. Although the book asks fewer rhetorical questions than its predecessor editions, the same fundamental provocation—why federal courts—remains, alongside the same principled effort to challenge the assumptions of readers of any and all political, ideological, and/or philosophical persuasions. In an age in which judicial decisions are increasingly perceived as reflecting partisan, result-oriented reasoning, Hart and Wechsler offers a principled alternative—an enduring effort to suggest that there truly are neutral principles governing much, if not most, of the work of federal judges. One need not accept that view of the federal courts in practice to understand its normative attractiveness.

To be sure, there is still plenty of work to be done. And Meltzer’s untimely passing on the eve of the seventh edition’s publication only adds to the challenge facing Fallon, Manning, and Shapiro. But perhaps the biggest challenge to the keepers of the Federal Courts canon is the underlying project: As Congress and the Supreme Court continue to constrain the scope of civil remedies available to state and federal prisoners and all others seeking to challenge alleged government misconduct, the question becomes whether Federal Courts as a project might eventually descend into nihilism—and Hart and Wechsler be reduced to a work of history. All seven editions provide their own literal and figurative counterweight to that trend. One can only hope that the dramatic improvements to the latest iteration mean that more students and teachers—and, through them, more judges and policymakers—heed their lessons.

Cite as: Steve Vladeck, The Keepers of the Federal Courts Canon, JOTWELL (September 22, 2015) (reviewing Richard Fallon, John Manning, Daniel Meltzer, and David Shapiro, The Federal Courts and the Federal System (7th ed., 2015)), http://courtslaw.jotwell.com/the-keepers-of-the-federal-courts-canon/.
 
 

A Pragmatic Approach to Interpreting the Federal Rules

Elizabeth G. Porter, Pragmatism Rules, 101 Cornell L. Rev. (forthcoming, 2015), available at SSRN.

With seventeen decisions interpreting the Federal Rules of Civil Procedure in the last decade, the Roberts Court has decided twice as many Rules cases as its predecessor, the Rehnquist Court, did in the same amount of time. This record-breaking streak has given scholars a unique opportunity to examine the contours and direction of the modern civil litigation system. Elizabeth Porter has taken this opportunity to discern the interpretive methodologies used by the Roberts Court when deciding Rules cases. In doing so, she makes a unique contribution not only to the literature on civil process, but also to the study of interpretation, shifting its focus away from statutes and onto the Rules.

At a time when much is in flux in the procedural world, in Pragmatism Rules, Porter discerns two primary competing interpretative methodologies in the Roberts Court’s Rules opinions. On the one hand, the Roberts Court interprets the Rules using the familiar tools of statutory interpretation. This go-to mode, although imperfect, works to provide rational, clear, and predictable outcomes. To the extent that Rules are like statutes, the Court can rely on the familiar markers of text, structure, and purpose when deciding Rules cases. The Court justifies its reliance on this mode by reminding the lower courts and parties that rule changes must come from the rulemaking process, not judicial adjudication.

On the other hand, at times the Roberts Court has taken a more hands-on approach, actively managing the litigation as if it were a trial judge. This “managerial” mode breaks from the rule-statute analogy, allowing the Court to rely on precedent, specific application of the law to the facts, and public policy considerations. In this mode, Advisory Committee Notes give way to equity and the Court leans on common-law judicial power and pragmatism. Porter observes how managerial judging has “trickled up” the food chain, resulting in Wal-Mart’s heightened Rule 23(a)(2) commonality standard, Twombly and Iqbal’s more rigorous plausibility pleading, and Scott v. Harris’s usurpation of lower court and jury power in summary judgment determinations involving video evidence.

Scholars have criticized both interpretive modes. Traditional statutory interpretation has been criticized for being overly textualist, to the detriment of a Rule’s underlying purpose. Managerial interpretation has been criticized as overreaching and potentially abusive of judicial discretion. Porter carefully threads the needle by rejecting and embracing both. She concludes that both interpretive paradigms not only are here to stay, but are the result of tensions that exist in the very fabric of the Rules themselves and the rulemaking process. Although she identifies the importance of pragmatism to the Court—as reflected in the article’s title—she turns out to be a pragmatist, too.

Porter explores three tensions (what she calls “fault lines”) that characterize the Rules and rulemaking process. She takes a deep dive on characterizations often noticed, but not fully explored.

The first tension is structural. Porter puts her finger on a fundamental institutional tension in the Court’s relationship to civil process. The Court is both legislative rule maker and judicial adjudicator. Subject only to rarely-used congressional override, the Court is the architect of the Rules, enjoying veto power over the Standing and Advisory Committees below. Thanks to the Rules Enabling Act, the Court has more skin in the game than it would otherwise. But this makes the Court’s role all the more confusing, as it must now interpret its own creation.

The Court has recently been criticized for altering the Rules by judicial fiat in cases such as Wal-Mart, Twombly and Iqbal, and Scott. Such decisions, Porter contends, exhibit a lack of judicial restraint by the Roberts Court. Porter reminds us that the Court should not legislate from the bench. But does it already do so? Arguably. Yet because the Court is only one part of a seven-step process that includes significant public participation and transparency, its formal role in rulemaking may not be as robust as imagined.

Scholars have described the Court’s adjudicative power as akin to that of an administrative agency. The Court has both a unique relationship with the Rules and broad and inherent power to interpret texts, no matter what genre. And where the Advisory Committee comes up short (as in failing to amend the Rule 8 pleading standard), the Court is available to step into the breach. Porter ties these two competing perceptions of the Court—“rung on the technocratic ladder” and “adjudicator-in-chief” (P. 32.)—to the Court’s two modes of Rule interpretation, noting the Court’s nimble exploitation of each:

When it wants to declaim interpretive power, the Court interprets the Rules narrowly using traditional statutory interpretation tools, and urging dissatisfied parties to seek recourse through rulemaking. But when it is frustrated with the rulemaking process or otherwise wants to recalibrate litigation norms, the Court toggles seamlessly into the other paradigm—the paradigm of broad, almost unbounded, common law power. (P. 32.)

The second tension is linguistic. The language of the Rules themselves creates a schism in interpretation. The text is deliberately crafted to maximize the Court’s discretion and flexibility to achieve procedural due process. But the text may be so ambiguous—if not downright poetic—that the Court has too much play, thereby undermining a uniform interpretive theory.

Porter anchors her examination of the linguistic tension in Rule 1. She explains that the “master Rule” was drafted as “a statement of interpretive methodology” (P. 33.), designed to steer decisions away from procedural formalism and toward resolution on the merits. She bemoans the fact that the Rule seems to have lost its original moorings and is instead used to sell cost-savings and systemic efficiency, all of which promote managerial Rule interpretation. Undervalued by scholars and courts, Rule 1 has become all the more vulnerable to this pitch. Of course, the Rule’s text itself (calling for “speedy and inexpensive” determinations) gives license to the managerial interpretive mode and, unsurprisingly, sends mixed messages.

Porter observes that the language of the Rules—with its roots in equity—often waxes poetic, inviting Rule interpretation that is highly discretionary, factually based, and purpose-driven. This freedom, invited by such poetic text, has been exercised not only by the district courts—which are tethered to a factual record and live litigants—but by the Supreme Court as well. Porter concludes that the Roberts Court has exploited this freedom, enabling it to disregard the abuse-of-discretion standard of review and to abandon judicial restraint.

The third tension is epistemological. Porter explores how two unresolvable dichotomies mirror, if not create, the Court’s statutory and managerial interpretive modes. One dichotomy is between substance and procedure. When this divide is clean, it supports a statutory interpretation of the Rules. When messy, it invites a managerial interpretation. The other dichotomy is between Rule trans-substantivity and case- and fact-specific Rule application. When the Court uses statutory interpretation, trans-substantivity is clearly valued; but when the Court favors managerial interpretation, fidelity to trans-substantivity wanes.

Porter does a great job teeing up these three tensions or interpretive fault lines and exploring how they might explain, if imperfectly, the conflicting paradigms the Court uses when interpreting the Rules. But Porter doesn’t stop there. In addition to identifying the Court’s contradictory interpretative paradigms and fleshing out the underlying tensions in the Rules and rulemaking process that undergird and concretize such paradigms, she attempts to reconcile these uncomfortable contradictions with a proposal modeled on administrative law.

Because the Rules resemble agency regulations more than statutes, Porter draws from administrative law when crafting a way for the Court to properly employ both statutory and managerial interpretive modes. She proposes a framework in which the Court relies on statutory interpretation and de novo review for cases presenting pure questions of law, and on managerial interpretation for cases involving the application of Rules to the facts. In the latter category, if faced with a merits question, the Court should remand so that the lower court can apply the Court’s new Rule interpretation. This Chevron-type deference scheme strives to preserve the Court’s flexibility while checking overreach and abuse of power. Porter makes the important and unique observation that the real problem may be that the Roberts Court fails to give proper deference to the lower courts, rather than to the rulemakers.

While recognizing the legitimacy and value of both the statutory and managerial interpretive modes, Porter concludes that this new theoretical framework is necessary to rein in the Roberts Court’s usurpation of the district courts’ managerial discretion. She contends that the Court has not only created new procedural standards through Rule interpretation, but aggressively inserted itself into merits determinations belonging squarely to the courts below. Thus, the problem is not one of Rule interpretation, but of proper deference. Porter contends that a Chevron-inspired framework would promote transparency that discourages merits-based overreach and would return the Roberts Court to the minimalist procedural decisions of the Rehnquist Court.

Porter concludes by challenging us to remember the uniqueness of the Rules—as neither statutes nor agency regulations—and the concomitant value of creating an interpretive theory that recognizes both statutory and managerial Rule interpretation. Her proposal starts us down this important and groundbreaking path.

Cite as: Suzette M. Malveaux, A Pragmatic Approach to Interpreting the Federal Rules, JOTWELL (August 11, 2015) (reviewing Elizabeth G. Porter, Pragmatism Rules, 101 Cornell L. Rev. (forthcoming, 2015), available at SSRN), http://courtslaw.jotwell.com/a-pragmatic-approach-to-interpreting-the-federal-rules/.
 
 

Rationing Constitutional Justice

Aziz Huq, Judicial Independence and the Rationing of Constitutional Remedies, 65 Duke L. J. __ (forthcoming 2015), available at SSRN.

It is easy to forget sometimes that our hallowed federal courts are a collection of organizations and therefore subject to the mundane limitations that organizations face. The judges who compose those organizations must determine how to wade through hundreds of thousands of cases each year—a task that has become more challenging in the past few decades, as the ratio of cases to judges has increased. Judicial administration scholarship has long sought to understand how increases in caseload affect court procedure and practice. More recently, scholars have tried to assess how caseload can impact substantive law.

Against this background, Aziz Huq makes a significant contribution with his forthcoming article, Judicial Independence and the Rationing of Constitutional Remedies.

Huq begins with a stark observation: Article III adjudication is now a scarce good. He notes that in addition to a rising caseload, federal courts must contend with the fact that settled constitutional rules are broken on a daily basis. In particular, Huq argues, the constitutional criminal procedure rules developed by the Warren Court are consistently flouted. This constitutional problem quickly has become an organizational one, as the courts lack the ability to provide relief in all cases challenging violations of these rules, given their current resource constraints. Some rationing of constitutional remedies is an “inevitable” result. The question that follows is how courts have taken up that task.

Huq first argues that the Supreme Court has established a “gatekeeping” rule of fault for individualized constitutional remedies in a range of areas. That is, constitutional litigants must show not only that the Constitution was violated, but that a clear and unambiguously applicable constitutional rule was self-evidently violated. By adopting such a rule in different contexts, the Court has necessarily raised the threshold for success in constitutional litigation, meaning that fewer parties will be able to come to federal court and win relief. Huq does a wonderful job tracing how this litigation-limiting rule applies in a range of contexts, from constitutional torts to exclusion of evidence in criminal prosecutions to habeas corpus. The result is a comprehensive account of how the fault rule has thoroughly permeated, and restricted, constitutional remediation.

Huq next explores the reasons behind the doctrinal expansion of the fault rule. He notes that scholars focusing on the rise of the rule in the past have told a standard causal story in which the ideological interests of the Justices and various historical circumstances play the primary roles. Without disputing the importance of these factors, Huq provides a fuller account by adding a new, hitherto underappreciated factor—what he calls “judicial independence.” That is, the Court has developed doctrines at least partially to further its own institutional interests, notably a desire to decrease the workload of the federal judiciary while simultaneously increasing its prestige. The rise of the fault rule thus should be seen as directly tied to the rise in pressures on the federal courts in the late 1970s and 1980s.

The article makes several important contributions. In addition to providing a comprehensive account of the reach of the fault rule, Huq convincingly suggests a causal link between these doctrinal shifts and judicial self-interest. To be sure, the article cannot definitively prove causation, and Huq is clear on this point—he states that he can only provide circumstantial evidence to support the causal claim. That said, the evidence is strong and it carries a number of implications, which Huq briefly sketches at the close of the article.

Chief among the implications is that Huq’s account may shift our understanding of separation of powers. As he writes, one central component of separation-of-powers theory is that the autonomy of the judiciary is critical for vindicating individual constitutional rights. But if one accepts that the rise of the fault-based rule for limiting the availability of constitutional relief is due, at least in part, to the judiciary exercising its own autonomy to reduce workload pressures, then surely the traditional account should be questioned. Furthermore, read more aggressively, the evidence in Huq’s account suggests that the “successful institutionalization of judicial independence” can even undermine the “project of realizing constitutional rights.”

How judges do and should ration their own attention are questions of central importance. The answers to these questions define who gets what rights recognized and who gets what remedies. There is still a great deal of work to be done in this rich and important area at the intersection of constitutional law, judicial administration, civil and criminal procedure, and remedies. Huq’s article makes substantial contributions in this area and helps to set up other important work to come.

Cite as: Marin Levy, Rationing Constitutional Justice, JOTWELL (July 8, 2015) (reviewing Aziz Huq, Judicial Independence and the Rationing of Constitutional Remedies, 65 Duke L. J. __ (forthcoming 2015), available at SSRN), http://courtslaw.jotwell.com/rationing-constitutional-justice/.