The National Security Courts We Already Have

Robert Timothy Reagan, Fed. Jud. Ctr., National Security Case Studies: Special Case-Management Challenges (2013).

One of the longer-lasting consequences of the “Summer of Snowden” may well be the increased attention paid to the Foreign Intelligence Surveillance Court (FISC)—the special, secrecy-laden tribunal created by Congress in 1978 to oversee the U.S. government’s foreign intelligence activities. Among other things, greater public knowledge of the FISC’s role in both approving and circumscribing the government’s use of its secret surveillance authorities has rekindled the decade-old debate over the need for Congress to create special “national security courts.”

The animating justification for such tribunals is that, like the FISC, they would be in a better position than the ordinary Article III district courts to reconcile the central tension in national security adjudication: balancing the secrecy pervading most national security and counterterrorism policies with the need to provide victims of governmental overreaching a forum in which to vindicate their statutory and constitutional rights. Indeed, although they have varied (at times, dramatically) in their details, proposals for specialized national security courts often hold out the FISC as the model upon which such tribunals can—and should—be based. To similar effect, many of the proposed reforms spurred by Snowden’s revelations have focused on increasing the volume and scope of litigation handled by the FISC, rather than shunting more of these issues into the federal district courts.

A quietly remarkable publication by the Federal Judicial Center’s Robert Timothy Reagan provides a powerful counterweight (both figuratively and literally) to such efforts. Reagan’s monograph is a case-by-case compilation of how different federal judges in regular Article III courts—87 in all—have resolved some of the unique and complex issues that arise in both criminal prosecutions and civil suits implicating national security. If a new case raises an issue concerning the admissibility of classified evidence, the guide provides 31 distinct examples summarizing how the issue arose previously and how it was resolved. So, too, for topics ranging from witness security to religious accommodations; from service of process on international terrorists to remote participation of witnesses; and from attorney-appointment questions to the usability vel non of evidence obtained under the Foreign Intelligence Surveillance Act (FISA). In short, for civil and criminal litigation alike, Special Case-Management Challenges is a comprehensive reference—a how-to guide for federal judges facing similar challenges in current and future cases.

In that regard, National Security Case Studies: Special Case-Management Challenges is hardly typical fodder for a JOTWELL review. The new (fifth) edition, published in June, checks in at a super-dense 483 pages. It is exceedingly light on analysis, exceedingly heavy on footnotes (4,294, if you’re scoring at home), and hardly a page-turner for even the most devout students and scholars of the federal courts, given its organization as a case-by-case guide to how different federal courts have handled the national security issues to come before them. Thus, after presenting detailed summaries of the factual background in which each of these issues arose, the book then recaps the individual ruling—and, where applicable, how it fits into broader doctrinal patterns. Indeed, as Reagan writes in the brief introduction, “The purpose of this Federal Judicial Center resource is to assemble methods federal judges have employed to meet these challenges so that judges facing the challenges can learn from their colleagues’ experiences.” As Reagan explains, Special Case-Management Challenges is largely a descriptive guide—in contrast to a separate FJC publication also authored by Reagan, titled National Security Case Management: An Annotated Guide, which more specifically highlights the specific lessons that might be learned from the ever-increasing volume of such jurisprudence.

Yet the unstinting focus of Special Case-Management Challenges on comprehensively “assembl[ing] methods” used in prior cases—“based on a review of case files and news media accounts and on interviews with the judges”—is exactly what makes it so compelling. And although its target audience is the federal bench, its utility and appeal actually sweep far more broadly. What Reagan has compiled is not just a comprehensive set of data points, but a body of evidence tending to validate the ability of the ordinary civilian courts effectively to grapple with some of the thorniest challenges to arise in national security litigation.

Special Case-Management Challenges provides a powerful rejoinder to the (usually unsubstantiated) claim that the ordinary federal courts lack the capacity and/or institutional wherewithal to handle criminal cases involving high-profile terrorism suspects, civil suits challenging secret governmental counterterrorism programs, or anything in between. A veritable bevy of commentators—including sitting federal judges, policymakers, and academics—have offered anecdotal arguments to this effect in recent years. Reagan’s work fatally undermines that position, for it demonstrates how, in case after case, federal judges did what federal judges do—make accommodations where they were both necessary and appropriate (such as in the Abu Ali case, where Saudi intelligence officers were allowed to testify at a suppression hearing via a live, satellite link), and push back in cases in which they were not. Even a cursory scan of the work of the federal judiciary in this area suggests that, while national security litigation presents unique case-management challenges, those challenges are not uniquely beyond the competence of federal courts to resolve. In short, after reading through this monograph, it can no longer be said—at least not seriously—that Article III courts are unable to deal with such issues; the debate must shift to whether any of the proposed alternatives would do a better job.

Of course, reasonable people can—and always will—disagree over whether the federal courts are striking the right balance between the government’s interests and individual liberties in civil and criminal cases raising challenges unique to national security litigation. But insofar as these challenges are likely to remain with us for generations, the most important upshot of Reagan’s treatise is not just its account of how our federal judges have sought to resolve this fundamental tension, but that they have done so—and on an increasingly routine basis. Whatever else may be said about proposals for new national security courts, their biggest shortcoming is their failure to grapple with the national security courts that, as Reagan’s work shows, we already have.


Taking Public Adjudication Seriously: Recognizing the Importance of Timing in Party Rulemaking

Daphna Kapeliuk & Alon Klement, Changing the Litigation Game: An Ex-Ante Perspective on Contractualized Procedures, 91 Tex. L. Rev. 1475 (2013).

Anyone caught up in litigation—whether lawyer or litigant—would situate the recent interest in party rulemaking within the larger debate over the merits of maximizing party choice in dispute resolution. They would focus on setting appropriate limits on the practice of party rulemaking in order to balance the benefits of increased efficiency for the litigants and the public with the risk of abuse and the potential for bringing the administration of justice into disrepute. This perspective on party rulemaking often leads to a further analysis of the value of game theory in illuminating and assessing the range of outcomes that can emerge depending on how party choice is confined.

In their article, Daphna Kapeliuk and Alon Klement (members of the Radzyner School of Law, Interdisciplinary Center, Herzliya) engage with the leading U.S. commentators in this area, most notably Robert Bone in his Party Rulemaking: Making Procedural Rules Through Party Choice. They take the analysis beyond the interests of litigants and others in the system to show how party rulemaking can have important public implications and can, in effect, ‘change the litigation game’ itself.

Kapeliuk and Klement do this by highlighting a key feature of the analysis that is generally lost in concentrating on the strategic implications of particular choices and their potential to affect the outcome (both the specific result and the efficacy of the process). This key feature is the timing of party rulemaking—whether it occurs before the dispute has arisen (ex ante) or after (ex post). As they explain, changing the rules once a dispute has arisen may not really be much more than a function of party prosecution within the adversary system itself. This remains the case whether the party making the rules chooses among ordinarily available options or selects a procedure beyond the possibilities that are ordinarily available, thereby changing the rule. In any event, parties are less likely to cooperate to change the rules after a dispute has arisen, because in the adversary system, such a change will rarely be seen as in their mutual interests.

Changing the rules in advance of the emergence of a dispute is different. It may not be obvious to courts how ex ante, rather than ex post, party rulemaking affects the outcome of the dispute and the process by which it is reached. This does not mean that it should be ignored, however. As the authors explain:

When evaluating the impact of a pre-dispute procedural commitment on institutional values such as judicial integrity and legitimacy, courts should be aware that the outcome they observe is only one of many possible contingencies that could have materialized. The modified rule has transformed the parties’ relationship from the time they had agreed on it. It has affected their behavior in performing their contractual obligations, the probability that a dispute would arise, and their litigation behavior. All these effects have public implications that go beyond the parties and that have to be considered when enforcement of the parties’ agreement is at stake. Focusing on one contingency that has materialized misses the full range of public implications.

Perhaps more significantly, the changes to the rules that parties agree upon ex ante can be evaluated more readily in contractual terms for their implications for the traditional procedural values that underlie our civil justice system. Are these choices so unbalanced as to raise concerns about the relative bargaining powers of the parties? Are they so ill-conceived as to offend fundamental tenets of our legal system? What do they say about the way our courts are functioning and whether they are meeting the needs of the public?

It is true that Kapeliuk and Klement are not the first to observe the differential benefits of party rulemaking for the parties and for the public that can arise in certain disputes. Party rulemaking can, for example, deprive the public of useful precedents or interfere with the courts’ ability to support fundamental social policies. By emphasizing the need to consider the timing of party rulemaking, Kapeliuk and Klement underscore how party rulemaking can have a transformative effect on civil litigation, and the way it supports or undermines the incentives to perform contracts and shapes the parties’ responses to potential disputes.

In this sense, Kapeliuk and Klement show not only how ex ante contracting for disputes can change the litigation game, but also how, in view of its transformative potential on the role of courts, it ought to be taken seriously—as more than a game. Well worth reading—well played!


Opinions, Briefs, And Computers—Oh My!

Research on the federal courts often follows this basic pattern: 1) identify issue (often made salient because of a recent Supreme Court case); 2) analyze federal court opinions for cases relevant to that issue; and 3) write article. This process, which we might label “issue analysis,” has served, and will continue to serve, legal scholarship well. Issue analysis is very effective for evaluating and analyzing court handling of specific doctrines, statutes, and regulations. Less frequently, federal courts scholarship seeks to identify larger, often comprehensive, theories of how judges and courts behave, which we might label “behavior analysis.” Such endeavors can apply to the hundreds of thousands of cases filed each year in the federal courts. As a result, researchers face significant problems not normally associated with issue analysis, including cherry-picking suitable examples, confirmation bias, and inadequate treatment of contrary evidence. Empirical methods are a logical way to deal with those concerns, but publicly available datasets are few and human coding of legal documents can be extremely labor intensive and costly. Thankfully, these problems can be significantly curtailed by using computer-aided content analysis to evaluate large pools of cases.

Chad Oldfather, Joseph Bockhorst, and Brian Dimmer have published a wonderful article, Triangulating Judicial Responsiveness: Automated Content Analysis, Judicial Opinions, and the Methodology of Legal Scholarship, which illustrates exactly how such research should proceed. The research question they address is a very basic one: how do party briefs affect judicial opinions? One might think such a core question of litigation would have been addressed by numerous studies. However, as the authors rightly explain, the methods for addressing such a question are prone to the classic concerns with using behavior analysis. As a result, there simply has been no tested general theory of how briefs affect judicial opinion writing.

I highlight this article not necessarily for its actual findings, but instead for its careful and thoughtful methodology. Indeed, the authors recognize that the study’s findings are not their key contribution—the methods of study are where readers can learn the most. The article is more a proof of concept than an attempt to definitively answer the very broad research question addressed. Because of the sheer volume of courts-related articles published every year, it is quite easy to overlook a significant scholarly work with modest conclusions but innovative, sophisticated, and well-executed methods. This piece should not be overlooked.

The authors study a sample of opinions and party briefs from 2004 in the United States Court of Appeals for the First Circuit to determine how the briefs influence the ultimate written opinions (what the authors refer to as “judicial responsiveness”). Initially, the study uses conventional human analysis of briefs and opinions to assess judicial responsiveness, with the researcher coding each case as “strongly responsive,” “weakly responsive,” or “nonresponsive.” Although extensive instructions and examples are given to aid human coders in differentiating the three variable options, the shortcomings of such a technique should be obvious.

They do not stop with reductionist human coding in their efforts to analyze the studied opinions and briefs, however. Where the study gets far more interesting is in the use of computer-aided content analysis to determine the similarity between party briefs and written opinions, analyzing the language and words in a document or set of documents. The judicial responsiveness measure is assessed along two dimensions using automated content analysis: word use similarity and citation utilization. The usage of similar words is determined using the cosine similarity method, a technique often built into plagiarism-detection programs, which allows the authors to compare the word similarity between each party brief and the written opinion. For citation analysis, the computer determines the percentage of brief citations that are used in the judicial opinion. This type of analysis simply cannot be completed by humans. Word similarity in particular is essentially a machine-only enterprise. Citation tracking might be accomplished with a significant labor force, but such resources are rarely available to legal academics.
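For readers unfamiliar with the technique, the core of a cosine-similarity comparison can be sketched in a few lines of Python. This is a minimal illustration of the general method, not the authors’ actual pipeline; the tiny sample “brief” and “opinion” and the raw word-count representation are illustrative assumptions (real systems typically weight terms, e.g., by tf-idf, and strip stopwords).

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between the word-count vectors of two documents.

    Returns a value in [0, 1]: 1.0 means identical word distributions,
    0.0 means no words in common.
    """
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    # Dot product over the words the two documents share.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    # Euclidean lengths of each count vector.
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical one-line excerpts standing in for a brief and an opinion.
brief = "the district court erred in denying the motion to suppress"
opinion = "the district court did not err in denying the motion"
print(round(cosine_similarity(brief, opinion), 2))  # → 0.75
```

On full-length briefs and opinions, the same computation yields a single score per brief-opinion pair, which can then be compared against the human responsiveness coding.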

With three different coding techniques to deploy—one human, two computer—the authors find some very interesting data to compare. Significantly, both computer-coding techniques are strongly correlated with the human coding, despite the relatively small sample size. A reader might be skeptical of potentially subjective human coding, the difficult-to-comprehend cosine similarity method, or the limited value of citation appearance. However, that all three measures are correlated is in itself a remarkable finding. Regardless of the measure used, the results of judicial responsiveness are statistically similar. The correlation supports the statistical validity of each measure independently and strengthens the argument for using them collectively.
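The kind of cross-measure agreement described above can be checked with an ordinary correlation coefficient. The sketch below, with hypothetical scores for five cases (not data from the study), shows how one might compute the Pearson correlation between the human coding and each automated measure.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical responsiveness scores for five cases under each measure.
human    = [3, 1, 2, 3, 1]                      # 3 = strong, 2 = weak, 1 = nonresponsive
word_sim = [0.82, 0.35, 0.55, 0.90, 0.30]       # cosine-similarity scores
cite_use = [0.60, 0.10, 0.40, 0.70, 0.15]       # fraction of brief citations reused

print(round(pearson(human, word_sim), 2))
print(round(pearson(human, cite_use), 2))
```

When all pairwise correlations come out high, as in the article, each measure lends credibility to the others, which is the triangulation the title promises.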

This is not to say that the three measures produce identical results. Indeed, the differences identified between the three techniques provide insight into potential refinements of content analysis techniques and directions for future work. The lower rate of judicial responsiveness based upon citation use highlights the limited value of such a measure and ultimately supports a hybrid scoring system. Further, the authors consider the potential value of more sophisticated learning computer algorithms that can increase the validity of the computer measures beyond the basic techniques they presently use.

Scholars might be confused, intimidated, or wary of computer-aided content analysis. However, this article illustrates exactly why academics should make greater efforts to engage with and to understand such tools. By using content analysis methods, a researcher can more reliably and validly answer research questions potentially covering large numbers of legal documents. The article is hopefully the first in a long line of studies that will use automated techniques to study the relationships among the varied pieces of paper that we, as legal academics, make the objects of our professional inquiry.

The careful and thoughtful use of these new tools in Triangulating Judicial Responsiveness provides a model for such future research.


The Truth About Empathy

Thomas B. Colby, In Defense of Judicial Empathy, 96 Minn. L. Rev. 1944 (2012).

President Obama was widely criticized when he stated that he viewed a “quality of empathy, of understanding and identifying with people’s hopes and struggles” as an essential attribute in a judge, one that he would look for in choosing Supreme Court justices and other federal judges. Conservative commentators attacked this as endorsing naked judicial activism, a call for more liberal judges running amok and deciding cases to suit their political preferences in favor of the “little guy” rather than based on “law.” Neither of the President’s Supreme Court nominees would openly endorse the empathy standard in their confirmation hearings, although Justice Kagan subtly defended the underlying idea, if not the terminology, at her confirmation hearing. And Republican members of Congress used the President’s words (or at least their (mis)interpretations of those words) to oppose his Supreme Court nominees.

With In Defense of Judicial Empathy, Thomas Colby undertakes the first comprehensive scholarly treatment and defense of the President’s arguments and of empathy as an essential and unavoidable component of good judicial decisionmaking. And he ties the centrality of empathy to broader debates over the judicial role.

Colby begins by identifying and correcting the arguable cause of much of the controversy over the President’s standard—the confusion between empathy and sympathy. While empathy is a relatively new word of contested meaning, Colby adopts the dictionary definition: the “action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another of either the past or present without having the feelings, thoughts, and experience fully communicated in an objectively explicit manner.” Empathy is the cognitive skill of being able to see a situation from someone else’s perspective and to understand how and why someone sees, feels, and acts as they do. That is fundamentally different than sympathy, through which a person is affected by and acts in support of the feelings of another. As Colby puts it, sympathy is feeling for someone; empathy is feeling with someone.

Importantly, the ability to empathize—to experience, feel, and understand what another person experiences, feels, and understands—says nothing about what a judge should do with the information gained from her empathy. Empathy does not tell a judge to decide in a particular way. Contra many conservative political and scholarly critics, Colby insists that empathy merely guides judges in hearing and thinking about the case, not in deciding for or against any party. A judge can exercise empathy even if she ultimately decides not to adopt the position of the party into whose shoes she steps. Although Colby does not use the term, we can understand empathy as procedural—as governing the “manner and means” by which judges hear and think about cases. Empathy enables judges to identify, understand, and consider all the variables that should be analyzed, including the positions, feelings, and experiences of the parties; it says nothing about the outcome of that analysis.

Colby does recognize a “nontrivial concern” that a judge’s empathy may turn into sympathy, prompting her to rule based on feeling sorry for one side of a dispute (particularly the poor, disadvantaged, or underdog side) rather than on the law. But this is not solely a liberal danger, as arguably demonstrated by Justice Alito’s dissents in Snyder v. Phelps (invalidating civil judgment against public protests outside funeral) and United States v. Stevens (invalidating federal statute prohibiting video depictions of animal cruelty). Alito alone argued that expression lost its constitutional protection when it was aimed at the grieving parents of a deceased soldier “at a time of acute emotional vulnerability” or when the expression depicted animals experiencing “excruciating pain.” In his view, the harm and pain suffered by mourners and kittens, for whom he clearly felt an affinity, trumped standard free-speech principles. In any event, that empathy might bleed into sympathy cannot mean that a judge should not possess or exercise empathy, only that she should (and likely will) strive to be aware of how one can lead to the other and to ensure that her empathy does not lead to subconscious sympathy-based decisionmaking.

Having properly defined empathy and disaggregated it from sympathy, Colby then argues that, rather than being undermined by empathy, a judicial system in fact cannot properly function without it. Empathy is essential in an adversary system, where each side is given an opportunity to present its best legal and factual arguments, and the judge is charged with selecting the better of those arguments. For that opportunity to be meaningful, the judge must be truly capable of hearing, listening to, and understanding the legal and factual arguments that each party presents. That empathy is many-sided. A judge should hear and understand the feelings, experiences, and needs of all parties, not just one. And it is politically neutral—a liberal judge cannot only hear and understand the “little guy” and ignore the positions of corporate actors, the government, or crime victims. Empathy, Colby insists, is the “capacity to understand the perspective and feel the emotions of others—all others.” (emphasis in original).

Finally, Colby ties empathy to the debate over the “judge-as-umpire” analogy that Chief Justice Roberts famously offered at his confirmation hearing. Colby rejects the analogy as “bankrupt,” because it erroneously assumes that law is as determinate and capable of producing objectively correct answers as whether a pitch is a ball or strike.

But it isn’t, as Colby demonstrates through a range of constitutional and subconstitutional doctrines defined by multi-factor balancing tests. These require a judge to weigh competing interests and concerns, which she only can do if she genuinely understands those interests and concerns through an exercise of empathy. Judges routinely predict future behavior, balance competing individual interests, and apply “reasonable person” tests that “require judges to assume the perspective of various actors to determine whether their behavior was objectively reasonable.” A court can properly decide whether a school principal acted reasonably in strip-searching a female student only if the justices truly can put themselves in the shoes of both the principal and the student and understand the concerns that motivate and affect each. A court can only determine whether a law has a rational basis if the judge understands the positions of all involved—the legislators trying to solve problems with imperfect lines, the people sought to be protected, and the people subject to the law’s regulatory reach. A court can only decide whether an objective observer would view a religious display as endorsing religion if the judge can put herself in that observer’s shoes.

Unlike baseball, law requires judging. And judging requires the ability to truly understand and process the arguments, positions, and feelings presented by the parties. Judging, in other words, is an exercise in empathy, and judges simply cannot perform their functions without it. As Colby persuasively shows, our legal, political, and academic discourse about courts and judicial decisionmaking will be better served if everyone understands this.


How Should Judges Spend Their Time?

Marin K. Levy, Judicial Attention as a Scarce Resource: A Preliminary Defense of How Judges Allocate Time Across Cases in the Federal Courts of Appeals, 81 Geo. Wash. L. Rev. 401 (2013), available at SSRN.

The federal appellate caseload has grown from 73 cases per active judgeship in 1950 to 330 cases today. Scholars have criticized the heavy caseload and the techniques that appellate judges have developed to manage it, such as using staff attorneys and issuing unpublished opinions. Such techniques, they have argued, create a “bifurcated” system of justice with a “separate and unequal” track for certain types of cases, such as immigration cases, cases with pro se parties, social security cases, and certain types of criminal cases. They advocate systemic reforms that would alleviate this disparate treatment.

In her recent article, Judicial Attention as a Scarce Resource, Marin Levy undertakes a different task from that of scholars calling for systemic reform. While she does not necessarily dispute the need for such reform, she takes as her premise that, in the short term, the judicial system will remain relatively unchanged. Her project, then, is to examine how well judges are working within current constraints and to consider how their work might be improved.

To that end, Levy’s thoughtful and pragmatic article applies resource allocation theory to judicial caseloads. She characterizes judicial attention as a scarce resource and suggests that we should evaluate whether that resource is allocated in a desirable way by examining the judicial “outputs” of error correction and law development. While acknowledging other possible judicial outputs, such as cost containment and institutional legitimacy, Levy sets these to one side because they are less widely accepted by judges and scholars.

Examining the appellate courts’ case management practices, Levy concludes that the courts, by and large, do a pretty good job of maximizing the two outputs of error correction and law development given the reality of scarce attentional resources. Regarding the error correction output, she concludes that the categories of cases typically routed to a nonargument track are either less likely to contain errors (for example, because they have already been reviewed by a body with expertise, such as the Board of Immigration Appeals) or are more likely to be frivolous in the first place (for example, many pro se cases). And with respect to the law development output, she explains that the categories of cases least likely to require law development are those in which the law is already relatively clear and which tend not to present novel issues. Perhaps unsurprisingly, the categories of cases that require less attention to maximize each output tend to overlap.

There is a great deal to admire in Levy’s work. The lens of resource allocation, coupled with the notion of judicial attention as a scarce resource, contributes significantly to the understanding of the functioning of the judiciary. Levy’s careful cataloging of the factors that make cases worthy of enhanced or diminished review offers a plausible defense of the way that federal appellate judges currently spend their time. She is appropriately modest about this conclusion, acknowledging that it rests on certain assumptions and suggesting ways that those assumptions might be tested. And the article’s conclusion offers a balanced and pragmatic discussion of how judges might improve their review of cases that, realistically, will receive less attention under current conditions.

Levy’s resource allocation model provides a useful foundation for a more nuanced examination of the work of appellate courts. Does it matter, for example, if the appellate outputs are not neatly segregable from one another? A court that goes out of its way to correct a perceived error may actually develop bad law—we are all familiar with the truism that hard cases make bad law—and a court that develops bad law may in turn be perceived as an illegitimate institution. Further refinement of Levy’s model might take into account some of the ways in which these outputs overlap, at some times reinforcing one another’s effects and at others negating those effects.

We might also wish to take more explicit measures to ensure that focusing on resource allocation does not mask other systemic problems. Consider the much-criticized backlog of immigration cases, many of which, scholars have argued, deal with laws that are unwise, disparately enforced, or both. If the federal courts allocate fewer attentional resources to immigration cases—perhaps appropriately, under Levy’s model—they may fail to make the public aware of both the type and magnitude of “bad” immigration law. Indeed, the problem extends further: the buffer of staff attorneys and the insulation from public scrutiny that unpublished opinions provide may make it easier for judges themselves to ignore the concerns that disfavored categories of cases tend to raise. Ideally, the resource allocation model would expose rather than hide these concerns. Perhaps the model could be expanded to recognize that an additional valuable output of appellate courts is to inform the public about such systemic problems.

These preliminary thoughts as to how Levy’s model might be refined are really a testament to the overall value of the model itself. The gift of her article is that it gives us a way to think about how judicial output should look now, in our current, imperfect system. In so doing, she lays the groundwork for fruitful exploration of these important and timely issues, while at the same time leaving open the door for us to think about what we might wish to keep from the current system as, perhaps, we evolve toward a better one.


Back to the Future

Robert L. Jones, Lessons from a Lost Constitution, 27 J.L. & Pol. 459 (2012), available at SSRN.

Ian Ayres and Joe Bankman begin one of their articles with a Dilbert cartoon (reproduced below). They use the cartoon to show that firm insiders may use nonpublic information to trade not only their own company stock, but the stock of competitors, rivals, and suppliers. Ayres and Bankman ultimately conclude that insider trading of such stock substitutes is inefficient and should be prohibited, but they acknowledge the argument that insider trading may “produce more accurate stock prices.” Presumably one could learn a lot about a company by paying attention to how its insiders treat substitutes for the company’s stock.


DILBERT ©1996 Scott Adams. Used by permission of UNIVERSAL UCLICK. All rights reserved.

Robert L. Jones has written an excellent article that examines one insider’s views of a substitute for judicial review under the Constitution. The insider is James Madison, arguably the “father” of the Constitution (P. 5); the substitute was a proposed Council of Revision, which Madison endorsed as part of the Virginia Plan. The proposal was ultimately rejected at the Constitutional Convention, but Jones argues that one can learn a great deal about our current practice of judicial review by examining the reasons Madison preferred the Council to the type of judicial review we have today.

As proposed, the Council of Revision would have granted a qualified veto over all legislation passed by Congress to the President and “a convenient number of the National Judiciary,” who would all come from the Supreme Court. (P. 28.) The veto was qualified because a supermajority of Congress could override it. A Council of Revision was by no means unprecedented: New York had one at the time of the Convention, and the proposed Council was most likely modeled on the British Privy Council. (P. 28 n.105.) Nor was the proposed Council limited to reviewing the constitutionality of enacted legislation; as Jones makes clear, Madison contemplated that it would veto legislation on both policy and constitutional grounds.

Why did Madison prefer a Council of Revision to judicial review? Here Jones notes the distinction that Madison made between “democratic legitimacy on the one hand and rationality and deliberation on the other.” (P. 20.) Like his contemporaries, Madison was concerned with the costs of rule by popular sentiment. In his view, the main defect of the Articles of Confederation was that it allowed the states to engage in conduct that, while popular, produced self-defeating results. Thus, he included and endorsed features in the Constitution that checked popular sentiment, such as establishing a representative government where the representatives would, ideally, “lead and shape, rather than simply slavishly follow, popular sentiment.” (Id.) Moreover, and as made famous by Federalist No. 10, Madison believed that the vast extent of the United States would make it hard for any one faction to come to power and subordinate the interests of others.

Nevertheless, Madison was concerned with the “vortex” of power that Congress could become, and he was skeptical that a veto by the president alone would ever be exercised. He thus concluded that granting a qualified veto to two branches – the executive and some portion of the judiciary – would make it easier for both together to wield a veto that would be seen as legitimate by the people.

When the Constitutional Convention ultimately rejected the Council, Madison threw his support behind a Bill of Rights to supplement judicial review. This seems odd because Madison had previously opposed a Bill of Rights out of fear that the enumerated rights would be unduly narrowed through judicial interpretation.

Why did he change his mind? There are a number of cynical reasons proposed by historians – to circumvent more radical changes proposed by the antifederalists or to win a Congressional seat. But Jones argues that Madison saw the Bill of Rights as a way to lend popular support to judicial review. Madison surmised that a Bill of Rights would be internalized by the people, who would then view the judiciary as “the guardian of those rights.” (P. 99.) Although Madison did not believe that judicial review coupled with a Bill of Rights would gain the same popular support as a Council of Revision, he was enough of a pragmatist to realize that it was the best he could do.

This is a wonderful article and a joy to read. Its best feature, in my view, is how Jones uses this history of Madison’s failed attempt to enact a Council of Revision. One could imagine a legal scholar using this history to support an originalist argument about the nature of judicial review. But Jones avoids this trap, perhaps recognizing that the view of one founding father (no matter how important) is probably too slender a reed on which to rest inferences about what all the founding fathers intended.

Instead, Jones considers the normative lessons of Madison’s failed attempt. Jones suggests that we could learn a great deal from Madison’s pragmatic concerns about democracy. The biggest lesson is that we should not equate democracy with majoritarian rule. Madison’s proposed Council of Revision, which would have been able to veto legislation on policy grounds, demonstrates that Madison did not view the judiciary as providing an antidemocratic check. Instead, he viewed the judiciary as performing a crucial democratic function by introducing deliberation and rationality to lawmaking, separated from the passions that drive normal politics. The judiciary was the superego to the legislature’s id. In fact, Madison envisioned that the id still could trump the superego because the Council’s veto could be overturned by a supermajority in Congress.

Moreover, and as Jones discusses, Madison’s proposal suggests that the countermajoritarian difficulty should mean something else entirely. Madison did not believe that the judiciary lacked a democratic justification to make decisions that countered the majority because, again, he did not equate democracy with majoritarian rule. Instead, he was concerned with the all-too-human side of judging – that a judge will be too weak-willed to stand up to public sentiment, no matter how wrongheaded that sentiment may be. For Madison, the difficulty was setting up a governmental structure in which the majority would not riot when judges do their job. This difficulty is not unlike the difficulty of getting yourself to stick to a diet when confronted with a donut. Self-governance, both at an individual level and at a societal level, requires one to think of clever ways to get oneself to do the right thing.

Certainly some of Madison’s views have not survived the test of time. Congress is the least popular branch, not the most popular. But Madison was probably right about the problem of getting the American people to eat their vegetables, so to speak, and it would be wise for us to take these concerns more seriously. If anything, Jones’s article reminds us of the importance of listening to our elders. They know a thing or two.


Celebrating Federal Civil Rulemaking

Lonny Hoffman, Rulemaking in the Age of Twombly and Iqbal, U.C. Davis L. Rev. (forthcoming 2013), available at SSRN.

The Federal Rules of Civil Procedure are 75 years old this year. Imagine a fete thrown in their honor: mini rule books as party favors, balloons emblazoned with Rule numbers 1-86, and a cake decorated with the words “Just, Speedy, and Inexpensive.”  If there ever were such a party, Lonny Hoffman’s article, Rulemaking in the Age of Twombly and Iqbal, should be the opening toast.  No, his article does not begin with a pithy joke, although that might be fun.  What it does is address the federal civil rulemaking process, an important — but often less discussed — aspect of the civil rules.

Hoffman’s article uses Rule 8’s pleading standard and the Supreme Court’s decisions in Bell Atlantic Corp. v. Twombly and Ashcroft v. Iqbal as an entry point for his discussion of the federal civil rulemaking process.  First, he provides a thorough historical account of Rule 8.  He relies on primary source material and weaves a rich recounting of the original rulemakers’ Rule 8 deliberations.  The original civil rulemaking committee made a choice in Rule 8 by using the word “claim” in the text as opposed to “fact.”  It chose this language for maximum flexibility and minimum technical wrangling.  This much we already knew. But Hoffman’s account reminds us of Rule 8’s origin before summarizing how the civil rulemaking committee treated Rule 8 over time.  What his account tells us is that the rulemakers had multiple occasions to reconsider the policy choices made in the original Rule 8.  He documents how rulemakers confirmed Rule 8 again and again from the 1970s until just before Twombly was decided in 2007.  While the reasoning of each committee varied a bit — some citing the practical difficulty of amending the rule, some questioning the empirical basis for changing the rule, and some arguing that heightened pleading would be antithetical to the rule’s purpose — it is safe to say that, overall, the rulemakers actively decided to keep Rule 8 as it was.

This all changed when Twombly and Iqbal entered the picture.  This is where Hoffman makes a key contribution.  In contrast to the rulemakers’ deliberative rejection of proposed changes to Rule 8 in the pre-Twombly/Iqbal world, Hoffman’s account shows a rulemaking body whose recent behavior is quite different.  He describes three overlapping cycles of response.  First, the rulemakers proceeded with caution because there was little information about how these cases would affect practice.  Second, along with this initial caution, the rulemakers articulated their belief that Twombly and Iqbal would not have much of a practical impact.  Empirical research conducted by the committee and the Federal Judicial Center further buttressed this status-quo reaction.  Finally, the rulemakers repeatedly stated that even if a rule change were warranted, any such change would be futile.  After all, the Supreme Court decided Twombly and Iqbal.  It seemed unlikely that the Court would approve any rule change to the contrary.

Hoffman takes each of these responses in turn.  He agrees that the committee needed to wait and study before making any changes to the rules.  However, once those studies were conducted, Hoffman challenges how the rulemakers understood the information being presented to them.  Hoffman provides an overview of the major FJC study relied on by the committee.  Building on his earlier critique of the study, Hoffman essentially argues that the results of this study were misunderstood — perhaps even over-understood — by the rulemakers.  Hoffman is not flippant in this assessment, nor is he arguing that the rulemakers were not thoughtful and careful.  Hoffman’s point is that the information presented to the committee had its limitations, and it is not clear from the deliberations that the rulemakers appreciated these limits.  For example, the study stated that there was no “statistically significant” increase in the likelihood that a motion to dismiss would be granted after Iqbal.  Hoffman argues that this statement, without the proper context, could be misunderstood by rulemakers as proving that Twombly and Iqbal were not responsible for the notable increase in grant rates.  Hoffman argues that the rulemakers, lacking a background in statistics and the proper context and training for the study’s findings, were not equipped to appreciate the results fully.

These statistical blind spots affect policy decisions.  Hoffman argues that if the rulemakers worked directly with the FJC (and other researchers) to frame the research, it would force the rulemakers to focus carefully on the normative implications of both the study and its results.  For instance, if the rulemakers knew that the Twombly/Iqbal study had the potential for producing a false positive error — showing that Twombly and Iqbal were not responsible for the higher dismissal rates when in fact they might have been — the rulemakers might have made a different decision in light of this information.  Given the committee’s pre-Twombly/Iqbal concern that heightened pleading would impair access to justice, rulemakers might have understood the study as showing the cases were possibly responsible for the increase in dismissal rates.  Hoffman’s point is that the research and policy trade-offs should not be isolated from one another.
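
For readers who want to see the statistical-power point in action, a small simulation helps.  This is my own illustration with hypothetical numbers — not data from the FJC study or from Hoffman’s article — showing how a real increase in dismissal-grant rates can fail to register as “statistically significant” when the number of observed motions is modest:

```python
import random
import math

def two_prop_z(successes1, n1, successes2, n2):
    """Two-proportion z statistic (pooled standard error)."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se if se > 0 else 0.0

def detection_rate(base, post, n, trials=2000, z_crit=1.96, seed=7):
    """Fraction of simulated before/after studies that call a real
    increase in grant rates 'statistically significant'."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        before = sum(rng.random() < base for _ in range(n))  # pre-Iqbal grants
        after = sum(rng.random() < post for _ in range(n))   # post-Iqbal grants
        if two_prop_z(before, n, after, n) > z_crit:
            hits += 1
    return hits / trials

# A genuine six-point rise (50% -> 56%), observed with 200 motions per period.
power = detection_rate(base=0.50, post=0.56, n=200)
```

In this setup only a minority of the simulated studies detect the genuine increase; the rest would report “no statistically significant change” — exactly the kind of result that, stripped of context, invites the misreading Hoffman describes.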

All of this leads to Hoffman’s final argument.  If the rulemakers approached the empirical findings properly and if they continued to hold their past judgment that Rule 8 was fine as it was, then they might be compelled to amend Rule 8 to overrule Twombly/Iqbal.  Hoffman contends that the rulemakers’ concern about futility — while a fair one — should not stop them from trying to change the rule.  There is value in the process, and if the Supreme Court stops a change to Rule 8, then that signals something to the public and to Congress.  The chance that the Court will not like a change, Hoffman argues, should not prevent rulemakers from doing what they think is appropriate.  And to the extent that the Court’s involvement in the process creates a barrier to reform, Hoffman argues that perhaps the time has come to take the Supreme Court completely out of the process and let the Judicial Conference be the sole gatekeeper between the rules committee and Congress.  Either way, Hoffman asserts that the committee should act.  Even if it does not get an amendment passed, the committee would still serve its laudable purpose by trying.

In all, Hoffman’s account of federal civil rulemaking is reminiscent of a best friend toasting at his buddy’s anniversary bash.  There are equal parts celebration, reflection, and optimism.  He celebrates the rulemaking process for standing the test of time.  Yet he delicately, but frankly, articulates the bumps in the road.  Hoffman gives advice about how the process might improve and closes his toast with an inspiring call to action.  After reading his article, we should all lift a glass, wish the civil rulemaking process well, and take a celebratory sip of champagne.


Adequacy and the Attorney General

• Margaret H. Lemos, Aggregate Litigation Goes Public: Representative Suits by State Attorneys General, 126 Harv. L. Rev. 486 (2012).
• Deborah R. Hensler, Goldilocks and the Class Action, 126 Harv. L. Rev. F. 56 (2012).

Maggie Lemos’s valuable article tackles one of the hot issues in aggregate litigation: a government (typically acting through its attorney general) using parens patriae suits to vindicate the rights of its citizens.  As I described in my last Jotwell post, access to justice in a mass society is the central civil-justice issue of our day.  Individual litigation of mass-injury claims is a luxury that neither litigants nor the court system can typically afford.  Class actions are shriveling as a realistic alternative in many instances.  Non-class aggregate litigation is infected with its own problems, as the ALI’s recent Principles of the Law of Aggregate Litigation shows.  And contracts of adhesion increasingly shunt victims into individual arbitration processes that provide little realistic opportunity for relief — and no opportunity for judicial resolution.

Into this harsh landscape enters the parens patriae action, which has emerged as the newest academic darling with the potential to provide victims of mass injury a measure of justice.  In these actions, the attorney general sues on behalf of those citizens allegedly injured by the defendants’ conduct.  Such a suit ensures a measure of deterrence.  If the recovery occurs and the attorney general establishes a fund against which injured citizens can claim, the suit also results in a modicum of compensation.  Because the suits are controlled by a public official, they also (in theory) come closer to achieving the optimal level of regulatory response, while avoiding the large fees, blackmail settlements, and other agency costs that so often give class-action and other aggregate litigation a bad name.

Sounds great, right?  Not so fast.  Turning the critiques of other forms of aggregate litigation around on parens patriae litigation, Lemos shows that the picture is not as rosy as it seems.  With a strong command of the class-action and aggregate-litigation literature, she explores the various agency costs traditionally associated with private mass litigation, and then demonstrates that these problems (conflicts of interest, lack of client monitoring and control, asymmetric stakes and resources, and inadequate settlements) also infect cases brought by attorneys general.  Attorneys general have their own political interests in prosecuting the claims; their offices are often underfunded; the citizens have little realistic control over litigation decisions; and inadequate settlements can therefore be expected.

Given the risk of imperfect representation, Lemos connects parens patriae litigation brought by attorneys general to class actions brought by class representatives.  In particular, she argues that a parens patriae suit should have preclusive effect only when the attorney general is an adequate representative of her citizens.  With Hansberry v. Lee and its progeny, the Supreme Court established adequate representation as the constitutional floor required to accord preclusive effect in class actions.  By analogy, Lemos argues that giving preclusive effect to a parens patriae judgment or settlement is unconstitutional unless the attorney general meets the same due-process minimum.  This fairly unassailable logic leads to one of two conclusions: either a class-action-style guarantee of adequate representation must be imported into the law of parens patriae litigation, or citizens must be free to pursue private litigation (whether individual, class-wide, or aggregate) without being bound by the result achieved by the attorney general on their behalf.

Lemos prefers the latter solution because it better reconciles the government’s interest in suing to vindicate regulatory and political objectives with victims’ distinct interests in pressing their own claims.  She recognizes that this solution is not ideal, in part because it exposes defendants to “double dip” liability should they first pay some money to the attorney general and then pay more to the victims.  But, as she notes correctly, double dipping is unlikely to be a significant problem in many of the small-stakes cases in which parens patriae actions are filed. And in any event, the court can avoid the issue by finding (in appropriate cases) that the attorney general is an adequate representative, and then allowing citizens who are disappointed with a proposed parens patriae settlement to opt out of the case.

For anyone wishing to engage the issues fully, Deborah Hensler’s short online response also merits close reading. Hensler points out that “[u]sing private litigation to achieve public policy goals raises a fundamental question about the proper balance between public and private law in democratic societies.”  She stresses the importance of empirical data and case studies, with which Lemos’s more theoretical piece does not engage, in evaluating both proposals to change parens patriae practice and claims about the adequacy of an attorney general’s representation.  And Hensler suggests that any critique of the present state of parens patriae actions should account for the bleak reality that no method of delivering justice to large numbers of relatively powerless victims — whether a class action, a traditional parens patriae action, or a parens patriae action reformed along the lines that Lemos suggests — is, in the immortal words of Goldilocks, “just right.”  We must still do the best we can to cobble together some combination of concededly imperfect mechanisms to keep the metaphorical bears in check.

Lemos is correct to critique state and lower federal court decisions suggesting that parens patriae actions can bar separate claims by citizens without regard to the quality of the attorney general’s representation.  That case law must be crazy.  Anyone who has read the line of cases from Hansberry through Martin v. Wilks, to Taylor v. Sturgell, knows that no court today could so hold and get away with it.  Parens patriae actions do not have a binding effect on citizens whose attorney general does not adequately represent them.  End of story.  Because these parens patriae actions lack binding effect, attorneys general are not agents of their citizens.  Therefore, Lemos’s concern about the agency costs of parens patriae actions in which representation is inadequate strikes me as misplaced.

The real concern is double dipping.  Because a parens patriae suit in which the representation is inadequate does not bind citizens in subsequent litigation, a defendant might in theory end up paying both the government and the victims for the same harm.  That problem is not unique to parens patriae litigation; it also arises in other situations.  For instance, a class member who fails to receive adequate representation in one forum may bring suit in a foreign forum.  To the extent that double dipping is an observed phenomenon (and here Hensler’s call for empirical evidence is especially salient), it is far more controllable in the domestic than in the transnational context: parens patriae and private suits can be consolidated, or the amounts paid to a claimant in the parens patriae suit can be deducted from that claimant’s award in the private litigation.  That said, crafting simple and workable solutions to prevent double dipping is a challenge that merits attention as we cobble together a mélange of imperfect responses to mass injury.

A deeper question is the meaning of “adequacy of representation” in the parens patriae context.  If individual litigation is a fond luxury, especially in small-claim consumer cases that have been the traditional grist for the parens patriae mill, we need to accept the reality that the delivery of justice to victims of mass injury inevitably requires some class-action, aggregate, or representative process(es).  (The only alternative, as Hensler aptly puts it, is to “leave the marketplace to the bears.”)  In assembling individual claims into a larger group, however, conflicts among individuals in the group are inevitable.  Hansberry famously held that a class representative could not adequately represent class members whose interests were in conflict.  That “conflict of interest” trope has dominated our discussion of inadequate representation ever since.  It’s time to change our thinking.  We cannot simultaneously maintain both a “conflict of interest” view of inadequate representation and a belief that a class-wide or representative process can ever bind absent plaintiffs.  Something has to give.  If we care about the delivery of justice to victims whose economic reality is the impossibility of individual suit, we must come to a different understanding of adequate representation.

In parens patriae suits, therefore, the important question is not whether we can tolerate conflicts between the attorney general and the represented citizens, but how great the disparity in interest must be before the parens patriae suit loses its preclusive effect.  The answer to that question is complicated, dependent in part on the other realistic options that victims have for enforcing their rights.  Admitting that there are significant agency costs when attorneys general represent citizens is the starting point of the analysis, not the conclusion.  Due process is often sensitive to context, and sometimes even a quarter of a loaf is better than none.


Fixing Personal Jurisdiction

Stephen E. Sachs, How Congress Should Fix Personal Jurisdiction, Duke Univ. Working Paper (2013).

Who among us has not relished the extraordinary gift the Supreme Court gave to civil procedure teachers in the form of J. McIntyre Machinery, Ltd. v. Nicastro, allowing professors to punctuate the already absurd personal jurisdiction case line with the story of the unlucky Mr. Nicastro (he who lost four fingers to a metal shearing machine in New Jersey), with nary a place to sue? (And, no doubt reserving that one remaining finger for . . . personal jurisdiction jurisprudence.) Moreover, to ensure us a near-perfect teaching vehicle, the Court — as Professor Stephen E. Sachs notes in the wonderfully entertaining and thought-provoking How Congress Should Fix Personal Jurisdiction — “bogged down in an incoherent three-way split.”

Rather than futilely attempt to make sense of McIntyre, or rationalize the mess away, Professor Sachs wholeheartedly forges into the personal jurisdiction thicket (which he labels a “dismal swamp”) with his own solution. Actually, an entire array of solutions. Sachs takes up McIntyre’s invitation to Congress to provide a federal forum for cases like Nicastro’s, and he sets forth a detailed federal statutory scheme for authorizing a federal forum based on existing venue rules. In particular, he is keen on securing federal forums to enable plaintiffs such as Nicastro to sue multinational corporations, such as McIntyre, that might otherwise evade responsibility for injuries to U.S. citizens because of existing state personal jurisdiction doctrine. Sachs notes that his proposal to create federal personal jurisdiction based on a venue model is not new, but suggests that other such attempts have been flawed in key respects (which he aims to rectify).

Sachs begins by arguing that those who would reform personal jurisdiction with an expedient doctrine have been looking in the wrong place (the Due Process Clause). Rather, the most plausible rules must be the product of legislative choice. As a threshold matter, Sachs boldly suggests that the solution to the personal jurisdiction mess begins with re-conceptualizing the problem as a question not of where a defendant is subject to suit, but of who may hear the suit: who will determine the parties’ rights and liabilities and set the rules that govern the dispute.

Sachs’s paper endorses a system of nationwide federal personal jurisdiction that effectively erases state lines. Pursuant to his proposed statutory scheme, a district court could exert personal jurisdiction over a defendant so long as there were adequate contacts between the defendant and the United States as a whole. The location of the courthouse (which he claims is irrelevant for constitutional purposes) can be determined through familiar venue considerations of fairness to the parties and the witnesses. Applying these concepts, he offers examples of how his scheme would work and suggests how Nicastro could have pursued McIntyre under his rules.

After setting forth the justifications for his proposed new personal jurisdiction rules ― exploring why a federal forum makes sense ― Sachs acknowledges that creating new personal jurisdiction rules modeled on the venue statutes involves a more complicated problem. (“Of course,” he notes, “the answer isn’t that simple.”) He recognizes that federal litigation, “[i]f not a seamless web,” is “at least a giant tangle, in which pulling on one thread unravels other parts of the system.” And so, as soon as Professor Sachs has pulled at his initial venue thread, his entire federal jurisdictional skein begins to unravel (delightfully so). He tackles, among other problems and considerations, the implications of his proposal for Due Process (would it be constitutional?); the Erie doctrine (consistency between federal and state courts); and common sense (a refreshing and novel approach from an academic). Having canvassed large-scale issues and possible alternative solutions, Sachs returns to his own venue-based proposal, suggesting that getting federal courts out of the doctrinal mess of state court personal jurisdiction requires “thoroughgoing and careful revisions to the U.S. Code.”

It is at this point that Sachs’s paper becomes a veritable procedural tour de force. Beginning with his proposed nationwide personal jurisdiction concepts, Sachs suggests the need for changes to or creation of ― among many issues ― venue transfer rules, removal jurisdiction, the Van Dusen and Ferens rules on applicable law, appellate rules governing transfer denials, sanctions rules for unreasonable forum selection, and default rules for non-appearing defendants. For each of these inter-related procedural issues, Sachs proposes a statutory solution. This whirlwind tour through Title 28 of the U.S. Code and related doctrines is riveting in an “Oh-my-gosh” sort of way. And, toward the end, Sachs makes an obligatory nod to the Rules Enabling Act and potential rulemaking issues raised by his proposals.

Sachs’s article is entirely engaging because of the scope and sheer audacity of its recommendations. The paper is thought-provoking and provides a good vehicle for debate after studying McIntyre and the personal jurisdiction case line. Sachs’s writing style is delightful. He knows he has bitten off a very large mouthful, but he is humble and self-deprecating in the effort. No pomposity here. Sachs’s article is a work-in-progress in the finest tradition of rule reform, and it is entertaining to witness a young scholar become enmeshed in a knotty mess of inter-connected problems, once he has pulled a doctrinal thread and his holiday sweater unravels.


Seeking Accuracy in Aggregate Litigation

Edward K. Cheng, When 10 Trials Are Better than 1000: An Evidentiary Perspective on Trial Sampling, 160 U. Pa. L. Rev. 955 (2012).

Courts and markets perceive mass tort victims from distinct perspectives that complicate aggregate litigation.  Before mass torts cause injuries, prospective victims often are fungible variables in an actuarial model.  Actors can foresee the possibility of negligence and identify groups whom they might harm without knowing which specific members will incur losses.  For example, airlines know that planes may crash and pharmaceutical manufacturers know that drugs may cause adverse effects.  Yet even if the risks are known, injuries can occur at unpredictable times to unpredictable subsets of a risk-bearing population.  Even actors who intentionally violate the law by making fraudulent claims or adopting discriminatory policies often target demographics rather than individuals.  The anticipated victims are faceless statistics in a crowd.

But after tortious conduct causes injuries that generate litigation, victims generally have known identities.  Current rules governing civil adjudication enable defendants to both ignore and exploit these individual identities when proposing procedures for resolving plaintiffs’ claims.  A defendant that desires a global settlement (or global dismissal) can continue to view victims as an undifferentiated mass by making offers or arguments that are applicable to the entire group.  If these efforts fail, defendants often challenge further aggregate approaches to dispute resolution by contending that each alleged victim is a unique individual with a unique claim requiring its own day in court.  When judges accept these arguments, victims of wholesale injury become the potentially unwitting recipients of retail justice.  This claim-by-claim adjudication consumes scarce judicial resources, burdens litigants, and can produce inconsistent judgments in similar cases.

Several scholars have proposed to overcome traditional adjudication’s inefficiencies by allowing courts to treat post-conduct claimants the same way defendants treated pre-conduct potential victims: as a group—or collection of subgroups—rather than as distinct individuals.  Claim-by-claim assessments of liability, causation, and damages would yield to broadly applicable judgments based on statistical sampling.  For example, a court confronting 1000 similar claims for damages might select 10 for trial, average the results, and then extrapolate that average to the remaining 990 plaintiffs.  Taken to an extreme, this approach could permit an aggregated mass of plaintiffs to extract a lump-sum payment from the defendant that the court would then allocate among individual claimants.  The exaction would vindicate tort law’s goal of deterrence, while the distribution would support the law’s goals of compensation and equal treatment of similar claims.
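
The sample-and-extrapolate arithmetic can be sketched in a few lines.  This is a toy illustration with invented damages figures, not data from any actual case:

```python
import random
import statistics

# Hypothetical damages for 1000 similar claims (illustrative numbers only).
rng = random.Random(42)
claims = [rng.gauss(100_000, 30_000) for _ in range(1000)]

# Try a random sample of 10 claims and average the verdicts...
sample = rng.sample(claims, 10)
per_claim_award = statistics.mean(sample)

# ...then extrapolate that average across all 1000 claims
# (the 10 tried plus the 990 untried).
extrapolated_total = per_claim_award * len(claims)

# Compare with the "gold standard" of trying every claim individually.
true_total = sum(claims)
error_pct = abs(extrapolated_total - true_total) / true_total * 100
```

The gap between the extrapolated total and the claim-by-claim total is the accuracy question at the heart of the sampling debate: critics fear the gap will be large, while proponents note that it shrinks as the sample grows.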

Critics of sampling challenge its potential benefits by focusing on the rights of unconsenting litigants, the functional capabilities of courts, and formal constraints on adjudication.  One practical critique has been especially salient.  Even if aggregate treatment of discrete claims is theoretically defensible, particular procedures might generate inaccurate judgments that over- or under-deter and over- or under-compensate.  The aggregate value of all valid claims in a group is the sum of the values of the individual valid claims.  Opponents of sampling fear that relying on generalizations from statistics instead of adjudicating each claim individually may produce an inaccurate sum.

Edward Cheng’s short, thought-provoking article on trial sampling addresses concerns about accuracy by rethinking the relationship between sample size and litigation outcomes.  The article begins by acknowledging a theoretical problem that confronts any effort to use accuracy as a criterion for evaluating litigation procedures.  Accuracy is a goal that most procedural architects embrace in the abstract, but that is difficult to define.  The concept of accuracy in tort adjudication is especially slippery because critical findings are subjective or indeterminate: liability might depend on an assessment of reasonableness, causation might hinge on an inquiry into probabilities, and damages might require quantification of non-monetary harms.  If these findings are not objective, it is difficult to contend that a given procedure for reaching them is inaccurate.  However, Cheng contends that when comparing procedures, one can assume that both are trying to “estimate” the same “abstract value.”  If traditional claim-by-claim analysis is the conventional gold standard for adjudication, then one can assess the accuracy of sampling by replicating the assumptions that courts make when assessing individual claims.  This approach might conclude that sampling is relatively accurate compared to accepted alternatives without needing to consider whether it is objectively accurate.

Cheng argues that conventional wisdom presumes that claim-by-claim adjudication must be relatively more accurate than sampling.  The intuition is that adjudicating an individual claim ensures accurate results for that claim, so adjudicating each claim within a group ensures accurate results for all claims.  In contrast, resolving every claim based on data about only a few requires extrapolations that invite errors.

He then challenges conventional wisdom by making three observations.  First, he posits that trials of individual claims are not as accurate as commentators believe because of “variability.”  Individual trial outcomes are partly a function of jury dynamics and lawyer behavior that vary from case to case and distort outcomes.  In contrast, a sample of several cases can smooth out variability, leading to an average outcome that may better approximate the “accurate” result to which claim-by-claim adjudication aspires.
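The variability point is, at bottom, a claim about averaging: noise that distorts any one verdict tends to cancel across several.  A brief simulation can make the intuition visible.  (The “true” award, the noise level, and the sample size are hypothetical assumptions for the sketch, not figures from the article.)

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 100_000   # the "accurate" award adjudication is trying to reach
NOISE = 30_000         # case-to-case distortion from juries and lawyering

def one_verdict():
    # A single trial outcome: the true value plus trial-specific noise.
    return random.gauss(TRUE_VALUE, NOISE)

# Error of a lone individual trial, repeated many times to estimate its
# typical size, versus the error of an average over a 10-case sample.
individual_errors = [abs(one_verdict() - TRUE_VALUE) for _ in range(2000)]
sample_errors = [
    abs(statistics.mean(one_verdict() for _ in range(10)) - TRUE_VALUE)
    for _ in range(2000)
]

# Averaging n verdicts shrinks the expected error by roughly sqrt(n),
# so the sampled average sits closer to the true value than a lone verdict.
print(statistics.mean(individual_errors), statistics.mean(sample_errors))
```

On this stylized model, the averaged outcome lands markedly closer to the target than any single trial, which is the sense in which sampling can be *more* accurate than the procedure it replaces.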

Second, he argues that when a jury considers only a single claim it lacks a frame of reference for calculating the claim’s value.  Judgments from a large number of juries therefore may include outliers that are unmoored to a plausible sense of what claims should be worth.  In contrast, if a single jury receives a sample of several cases, it can “calibrate” its assessment of each to the others.  This calibration in theory could pull potential outlier cases toward a more accurate baseline.

Finally, Cheng contends that the adversarial system promotes accuracy by encouraging non-random sampling of “extreme” cases selected by each party.  Assuming a normal distribution, the parties’ self-interested selection of cases on each tail of the curve enables the court to quickly find the mean with only a limited sample.  The combined implication of Cheng’s three observations is that trying a small number of claims for a modest cost can produce a more accurate result for all plaintiffs than trying every claim at a huge cost.
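Cheng’s symmetry argument can also be sketched numerically: if the distribution of claim values is symmetric, the strongest cases (chosen by plaintiffs) and the weakest (chosen by the defendant) sit roughly equidistant from the mean, so their combined average recovers it.  (The pool of claim values and the number of picks per side are hypothetical assumptions for this illustration.)

```python
import random
import statistics

random.seed(2)

# Hypothetical, roughly symmetric pool of claim values.
claims = sorted(random.gauss(100_000, 25_000) for _ in range(1000))

# Adversarial, non-random selection: plaintiffs nominate their strongest
# cases, the defendant nominates the weakest.
plaintiff_picks = claims[-5:]   # top tail of the distribution
defendant_picks = claims[:5]    # bottom tail of the distribution

# Under symmetry the two tails offset each other, so a tiny adversarial
# sample lands near the mean of the whole pool.
tail_estimate = statistics.mean(plaintiff_picks + defendant_picks)
pool_mean = statistics.mean(claims)
print(tail_estimate, pool_mean)
```

The sketch also exposes the fragility Cheng flags: drop the symmetry assumption (say, a distribution skewed toward a few catastrophic injuries) and the tails no longer cancel, so the same tiny sample misestimates the mean.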

Caveats abound, which Cheng is careful to note.  His theoretical predictions hold only in “the right conditions.”  In particular, excessive heterogeneity or an asymmetrical distribution among the plaintiffs could introduce sampling errors that reduce accuracy.  The article therefore acknowledges that more work must be done to develop criteria for identifying classes of cases where sampling would be more accurate in the aggregate than trying every claim.  (Empirical analysis or controlled experiments might also help determine if juries actually behave as theory predicts.)  Moreover, the article notes that even if sampling produces accurate aggregate results, regressing to a mean prejudices individual plaintiffs whose claims are relatively strong and rewards individual plaintiffs whose claims are relatively weak.  These distributional concerns raise normative questions about whether an accurate sum justifies distortion of its component parts.

Given the caveats, the value of Cheng’s article lies in how its analysis of counter-intuitive assumptions can reshape debates about the optimal approach to resolving clusters of similar claims.  By suggesting that non-traditional procedures can enhance accuracy in certain scenarios, Cheng challenges an important defense of the prevailing claim-by-claim approach to adjudicating mass torts.  This defense resonates in contemporary discussions of civil procedure and helps to explain the Supreme Court’s recent acerbic rejection of “Trial by Formula” in Wal-Mart Stores, Inc. v. Dukes.

Formulas are easy targets for judicial scorn if they produce inaccurate results.  But if Cheng is correct that sampling is more accurate than traditional adjudication in some circumstances, then commentators must confront two difficult questions when such circumstances arise.  First, what would be the justification for preferring a system of claim-by-claim adjudication that spends more money than sampling to achieve less aggregate accuracy with more random variability?  Second, if current inefficient procedures are necessary to faithfully accommodate the demands of substantive tort law, should the law governing mass torts shift its focus from individual plaintiffs to groups of victims?  Both questions have many plausible answers that are beyond the scope of Cheng’s article.  But by challenging conventional wisdom, the article helps sharpen the questions, refine the discussion, and suggest lines of inquiry about how to enhance accuracy in litigation.