The Journal of Things We Like (Lots)

Courts and markets perceive mass tort victims from distinct perspectives that complicate aggregate litigation.  Before mass torts cause injuries, prospective victims often are fungible variables in an actuarial model.  Actors can foresee the possibility of negligence and identify groups whom they might harm without knowing which specific members will incur losses.  For example, airlines know that planes may crash and pharmaceutical manufacturers know that drugs may cause adverse effects.  Yet even if the risks are known, injuries can occur at unpredictable times to unpredictable subsets of a risk-bearing population.  Even actors who intentionally violate the law by making fraudulent claims or adopting discriminatory policies often target demographics rather than individuals.  The anticipated victims are faceless statistics in a crowd.

But after tortious conduct causes injuries that generate litigation, victims generally have known identities.  Current rules governing civil adjudication enable defendants to both ignore and exploit these individual identities when proposing procedures for resolving plaintiffs’ claims.  A defendant that desires a global settlement (or global dismissal) can continue to view victims as an undifferentiated mass by making offers or arguments that are applicable to the entire group.  If these efforts fail, defendants often challenge further aggregate approaches to dispute resolution by contending that each alleged victim is a unique individual with a unique claim requiring its own day in court.  When judges accept these arguments, victims of wholesale injury become the potentially unwitting recipients of retail justice.  This claim-by-claim adjudication consumes scarce judicial resources, burdens litigants, and can produce inconsistent judgments in similar cases.

Several scholars have proposed to overcome traditional adjudication’s inefficiencies by allowing courts to treat post-conduct claimants the same way defendants treated pre-conduct potential victims: as a group—or collection of subgroups—rather than as distinct individuals.  Claim-by-claim assessments of liability, causation, and damages would yield to broadly applicable judgments based on statistical sampling.  For example, a court confronting 1000 similar claims for damages might select 10 for trial, average the results, and then extrapolate that average to the remaining 990 plaintiffs.  Taken to an extreme, this approach could permit an aggregated mass of plaintiffs to extract a lump-sum payment from the defendant that the court would then allocate among individual claimants.  The exaction would vindicate tort law’s goal of deterrence, while the distribution would support the law’s goals of compensation and equal treatment of similar claims.
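To make the arithmetic concrete, here is a minimal sketch of the sample-and-extrapolate step.  The claim values, the sample size, and the use of the sample mean as the extrapolated figure are illustrative assumptions rather than details drawn from the article.

```python
import random

random.seed(42)

# Hypothetical "true" values of 1,000 similar claims (illustrative numbers only).
true_values = [random.gauss(100_000, 25_000) for _ in range(1_000)]

# Try a sample of 10 claims and average the verdicts.
sample = random.sample(true_values, 10)
average_verdict = sum(sample) / len(sample)

# Extrapolate: each of the remaining 990 plaintiffs receives the sample average.
extrapolated_total = sum(sample) + average_verdict * (len(true_values) - len(sample))

print(f"True aggregate value:   {sum(true_values):,.0f}")
print(f"Extrapolated aggregate: {extrapolated_total:,.0f}")
```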

Critics of sampling challenge its potential benefits by focusing on the rights of unconsenting litigants, the functional capabilities of courts, and formal constraints on adjudication.  One practical critique has been especially salient.  Even if aggregate treatment of discrete claims is theoretically defensible, particular procedures might generate inaccurate judgments that over- or under-deter and over- or under-compensate.  The aggregate value of all valid claims in a group is the sum of each claim’s individual value.  Opponents of sampling fear that relying on generalizations from statistics instead of adjudicating each claim individually may produce an inaccurate sum.

Edward Cheng’s short, thought-provoking article on trial sampling addresses concerns about accuracy by rethinking the relationship between sample size and litigation outcomes.  The article begins by acknowledging a theoretical problem that confronts any effort to use accuracy as a criterion for evaluating litigation procedures.  Accuracy is a goal that most procedural architects embrace in the abstract, but that is difficult to define.  The concept of accuracy in tort adjudication is especially slippery because critical findings are subjective or indeterminate: liability might depend on an assessment of reasonableness, causation might hinge on an inquiry into probabilities, and damages might require quantification of non-monetary harms.  If these findings are not objective, it is difficult to contend that a given procedure for reaching them is inaccurate.  However, Cheng contends that when comparing procedures, one can assume that both are trying to “estimate” the same “abstract value.”  If traditional claim-by-claim analysis is the conventional gold standard for adjudication, then one can assess the accuracy of sampling by replicating the assumptions that courts make when assessing individual claims.  This approach might conclude that sampling is relatively accurate compared to accepted alternatives without needing to consider whether it is objectively accurate.

Cheng argues that conventional wisdom presumes that claim-by-claim adjudication must be relatively more accurate than sampling.  The intuition is that adjudicating an individual claim ensures accurate results for that claim, so adjudicating each claim within a group ensures accurate results for all claims.  In contrast, resolving every claim based on data about only a few requires extrapolations that invite errors.

He then challenges conventional wisdom by making three observations.  First, he posits that trials of individual claims are not as accurate as commentators believe because of “variability.”  Individual trial outcomes are partly a function of jury dynamics and lawyer behavior that vary from case to case and distort results.  In contrast, a sample of several cases can smooth out variability, leading to an average outcome that may better approximate the “accurate” result to which claim-by-claim adjudication aspires.
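Cheng’s variability point tracks a familiar statistical fact: the average of several noisy measurements has a smaller standard error than any single measurement.  The toy simulation below illustrates that fact under assumed numbers; the noise term stands in for jury dynamics and lawyer behavior and is not drawn from the article.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 100_000      # hypothetical "accurate" value of a claim
NOISE_SD = 40_000         # assumed case-to-case variability from trial dynamics

def one_verdict():
    """A single trial outcome: the true value distorted by case-specific noise."""
    return TRUE_VALUE + random.gauss(0, NOISE_SD)

def average_of_sample(n=10):
    """Average verdict across a sample of n tried cases."""
    return statistics.mean(one_verdict() for _ in range(n))

single = [one_verdict() for _ in range(10_000)]
sampled = [average_of_sample(10) for _ in range(10_000)]

print(f"Spread of single verdicts:      {statistics.stdev(single):,.0f}")
print(f"Spread of 10-case sample means: {statistics.stdev(sampled):,.0f}")  # roughly spread / sqrt(10)
```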

Second, he argues that when a jury considers only a single claim it lacks a frame of reference for calculating the claim’s value.  Judgments from a large number of juries therefore may include outliers that are unmoored from any plausible sense of what claims should be worth.  In contrast, if a single jury receives a sample of several cases, it can “calibrate” its assessment of each to the others.  This calibration in theory could pull potential outlier cases toward a more accurate baseline.

Finally, Cheng contends that the adversarial system promotes accuracy by encouraging non-random sampling of “extreme” cases selected by each party.  Assuming a normal distribution, the parties’ self-interested selection of cases on each tail of the curve enables the court to quickly find the mean with only a limited sample.  The combined implication of Cheng’s three observations is that trying a small number of claims for a modest cost can produce a more accurate result for all plaintiffs than trying every claim at a huge cost.
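The third observation can be sketched with a simple model in which each party nominates its most extreme cases and the court averages them.  The distribution, sample size, and selection rule below are illustrative assumptions rather than Cheng’s formal account; the skewed scenario previews the caveats discussed next.

```python
import random
import statistics

random.seed(1)

def adversarial_sample_mean(values, k=5):
    """Each party nominates its k most extreme cases; the court averages the combined sample."""
    ordered = sorted(values)
    chosen = ordered[:k] + ordered[-k:]   # defendant's lowest k, plaintiffs' highest k
    return statistics.mean(chosen)

# Symmetric (normal) claim values: the tails offset, so the estimate tracks the true mean.
symmetric = [random.gauss(100_000, 30_000) for _ in range(1_000)]
print(f"Symmetric: true mean {statistics.mean(symmetric):,.0f}, "
      f"adversarial estimate {adversarial_sample_mean(symmetric):,.0f}")

# Skewed (lognormal) claim values: the long upper tail pulls the estimate away from the mean.
skewed = [random.lognormvariate(11.4, 0.8) for _ in range(1_000)]
print(f"Skewed:    true mean {statistics.mean(skewed):,.0f}, "
      f"adversarial estimate {adversarial_sample_mean(skewed):,.0f}")
```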

Caveats abound, which Cheng is careful to note.  His theoretical predictions hold only in “the right conditions.”  In particular, excessive heterogeneity or an asymmetrical distribution among the plaintiffs could introduce sampling errors that reduce accuracy.  The article therefore acknowledges that more work must be done to develop criteria for identifying classes of cases where sampling would be more accurate in the aggregate than trying every claim.  (Empirical analysis or controlled experiments might also help determine if juries actually behave as theory predicts.)  Moreover, the article notes that even if sampling produces accurate aggregate results, regressing to a mean rewards individual plaintiffs whose claims are relatively weak and prejudices individual plaintiffs whose claims are relatively strong.  These distributional concerns raise normative questions about whether an accurate sum justifies distortion of its component parts.
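A small numeric example (with assumed figures) makes the distributional concern concrete: if every plaintiff receives the group average, the aggregate payout can be exactly right while each individual award misses its mark.

```python
# Hypothetical true claim values (illustrative only).
claims = {"strong claim": 300_000, "middling claim": 100_000, "weak claim": 20_000}

average_award = sum(claims.values()) / len(claims)   # 140,000 paid to every plaintiff

for name, true_value in claims.items():
    print(f"{name}: true value {true_value:,}, average award {average_award:,.0f}, "
          f"difference {average_award - true_value:+,.0f}")

# The aggregate payout equals the aggregate of the true values, but the weak claim
# is over-compensated and the strong claim is under-compensated.
```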

Given the caveats, the value of Cheng’s article lies in how its analysis of counter-intuitive assumptions can reshape debates about the optimal approach to resolving clusters of similar claims.  By suggesting that non-traditional procedures can enhance accuracy in certain scenarios, Cheng challenges an important defense of the prevailing claim-by-claim approach to adjudicating mass torts.  This defense resonates in contemporary discussions of civil procedure and helps to explain the Supreme Court’s recent acerbic rejection of “Trial by Formula” in Wal-Mart Stores, Inc. v. Dukes.

Formulas are easy targets for judicial scorn if they produce inaccurate results.  But if Cheng is correct that sampling is more accurate than traditional adjudication in some circumstances, then commentators must confront two difficult questions when such circumstances arise.  First, what would be the justification for preferring a system of claim-by-claim adjudication that spends more money than sampling to achieve less aggregate accuracy with more random variability?  Second, if current inefficient procedures are necessary to faithfully accommodate the demands of substantive tort law, should the law governing mass torts shift its focus from individual plaintiffs to groups of victims?  Both questions have many plausible answers that are beyond the scope of Cheng’s article.  But by challenging conventional wisdom, the article helps sharpen the questions, refine the discussion, and suggest lines of inquiry about how to enhance accuracy in litigation.

Cite as: Allan Erbsen, Seeking Accuracy in Aggregate Litigation, JOTWELL (March 13, 2013) (reviewing Edward K. Cheng, When 10 Trials Are Better Than 1000: An Evidentiary Perspective on Trial Sampling, 160 U. Pa. L. Rev. 955 (2012)), https://courtslaw.jotwell.com/seeking-accuracy-in-aggregate-litigation/.