The effect of artificial intelligence (AI) on legal services is one of the most pressing issues facing the profession and legal education. AI has enormous potential to improve efficiencies and reduce costs for clients across many fields, from due diligence to online dispute resolution. This potential renders AI a highly disruptive force in the legal profession. In The End of Lawyers?, Richard Susskind asked whether lawyers have any future given the ability of machines to take on many of the tasks we once believed required human lawyers.
In Artificially Intelligent Class Actions, however, Peter Salib argues that in the field of class action litigation at least, AI may lead to more, not less, work for litigators. Salib explores the use of AI to manage large numbers of individual assessments of causation and harm among class members. Rule 23(b)(3) requires that common issues predominate over individual ones for a lawsuit to proceed as a class action. Thus, the need to prove individual causation in product liability cases, or to assess damages for thousands of class members, may be fatal to certification. In other words, some cases are too big to succeed.
Class counsel have attempted to overcome the predominance hurdle by using statistical methods to achieve rough justice for large classes. In effect, class counsel sought to replace an accurate but unattainable disposition of each class member’s claim with an efficient but imprecise one. The US Supreme Court famously rejected this approach, however, in Wal-Mart Stores Inc v Dukes.
Salib proposes that we overcome the problems of statistical sampling by using AI to deliver answers to individual questions that are both accurate and efficient.
He begins his methodical argument by identifying the principles behind the Court’s resistance to statistical proof in Wal-Mart. The plaintiffs proposed a trial plan based on statistical adjudication to avoid the necessity of determining each class member’s claim of unlawful discrimination and entitlement to backpay. A random sample of the class would have their claims adjudicated at the common issues trial, and the average results would be awarded to the rest of the class. The Court rejected certification because “trial by formula” was prohibited under Rule 23 and violated due process.
Salib, however, posits that the central concern in Wal-Mart was not so much about due process as it was about accuracy. After all, due process concerns do not trump other forms of aggregate proof—such as the fraud-on-the-market theory (pursuant to which individual class members are presumed to have relied on an efficient market once it is shown there was general market reliance) or regression analyses to quantify the harm of price-fixing. What drove the Court’s rejection of statistical sampling in Wal-Mart was that the methodology could not produce sufficiently accurate determinations of which class members had been subject to sex-based discrimination in promotion and pay.
Salib argues that AI can address the accuracy values animating Wal-Mart. Machine learning is not statistical sampling. Salib describes how “cutting-edge machine learning algorithms can be trained to provide high-accuracy, plaintiff-by-plaintiff answers to individual questions – like medical causation, individual discrimination, or reliance. They can accurately determine whether a particular class member – as opposed [to] the average one – relied, for example, on a fraudulent misrepresentation.”
The process goes something like this. A set of training data is collected. This is the data, such as decisions made by human adjudicators, that we want the machine to emulate. Algorithms then uncover complex correlations in the data, essentially ‘learning’ from the sample to emulate the decision-makers. The trained algorithm is then tested against a separate hold-out dataset that was not used in the initial training process. If the algorithm reaches the same results as the human adjudicators did in this set, “the algorithm is likely to make accurate determinations about new cases – those for which the correct answers are not already known.”
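To make the train-and-test process concrete, the following is a minimal, purely illustrative Python sketch using only the standard library. Synthetic “class member” records stand in for real evidence, a simple nearest-neighbour rule stands in for a cutting-edge algorithm, and accuracy is measured on a hold-out set never seen during training. All names, features, and data here are invented for illustration; they are not drawn from Salib’s article.

```python
import random

random.seed(0)

# Hypothetical setup: each "class member" record has two numeric
# features (say, exposure level and duration). The label simulates a
# human adjudicator's causation finding: the pattern to be learned.
def make_record():
    exposure = random.uniform(0, 10)
    duration = random.uniform(0, 10)
    label = 1 if exposure + duration > 10 else 0
    return (exposure, duration), label

data = [make_record() for _ in range(500)]
train, holdout = data[:400], data[400:]  # hold-out set never used in training

def predict(x, train, k=5):
    # A k-nearest-neighbour rule "learns" from the adjudicated examples:
    # find the k most similar past cases and take a majority vote.
    nearest = sorted(
        train,
        key=lambda r: (r[0][0] - x[0]) ** 2 + (r[0][1] - x[1]) ** 2,
    )[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes > k / 2 else 0

# Accuracy on the hold-out set estimates how well the model would
# decide new cases whose correct answers are not already known.
correct = sum(predict(x, train) == y for x, y in holdout)
accuracy = correct / len(holdout)
print(f"hold-out accuracy: {accuracy:.2f}")
```

The key point the sketch illustrates is the last step: agreement with human adjudicators on cases withheld from training is what licenses the inference that the model will decide genuinely new cases accurately.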
Accurately predicting the outcomes of thousands of complex problems through AI is not new. Such advanced machine learning is used to evaluate loan applications for creditworthiness and to help separating spouses make parenting arrangements and divide assets. Blue J, a company founded by two Canadian law professors, sells platforms trained on tax, employment, and other case law to predict decisions in these domains with over 90% accuracy. Their products are so accurate that even the Canadian government has begun using them to assist Department of Justice employees.
It is not unreasonable, therefore, to use the same machine learning in class actions where individual determinations threaten to overwhelm the common issues and thereby scuttle certification. For example, a sample set of judgments about causation made by an actual judge or jury at the common issues trial could become the training set for producing the algorithmic determination of claims across the class. In other words, “the algorithm would simulate, with high accuracy, the jury’s determination about whether each individual class member could show medical causation.” The unmanageability of individual proof – the downfall of many a class action under Rule 23(b)(3) – ceases to be an issue. Certification cannot be denied on the basis that individual issues predominate over common ones.
AI’s promise is perhaps most evident in class action settlements. By making class-wide adjudication possible, AI increases the chances of certification, thereby increasing the settlement value of the case. The ability to more accurately evaluate the value of each class member’s claim will also facilitate negotiations and promote settlements. And while Salib does not mention it, machine learning can revolutionize the complicated, costly, and time-consuming claims processes prevalent in many approved class action settlements. An AI-facilitated claims process could make efficient decisions that avoid both underpaying and overpaying members of the class.
Salib predicts and persuasively responds to a number of critiques of his argument. Might the parties aggressively dispute algorithmic design? Yes, but unlike other battles of experts, this one can be tested empirically: the competing designs can be applied to a hold-out set of adjudicated examples to see which generates more accurate results. Would algorithmic answers to individual questions be rebuttable, and if so, would wars of attrition be waged over those answers? Unlikely, because the cost of litigating such challenges would be prohibitive compared to the amounts in issue, which would be small given the slim margin of error in machine-generated outcomes. Can we trust machines to make decisions that do not entrench discriminatory bias in the justice system and in society as a whole? Salib is optimistic. Because AI class actions permit the party to whom the algorithm will be applied (i.e., class members, via class attorneys) to have a say in the design of the algorithm and to participate in the creation of the training data in the form of the evidence they adduce at trial, the risk of bias is minimized. The argument is familiar in class action literature: class attorneys’ financial incentives to maximize trial awards and settlement amounts should ensure that the interests of class members are protected, including in the design and generation of training data.
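The idea that a battle of experts over algorithmic design can be resolved empirically can be sketched the same way. In this hypothetical Python fragment, two competing “expert” designs are scored against the same hold-out set of adjudicated examples, and the more accurate design prevails. The designs, features, and data are invented for illustration only.

```python
import random

random.seed(1)

# Hypothetical ground truth: the pattern a human adjudicator applies.
def adjudicate(exposure, duration):
    return 1 if exposure + duration > 10 else 0

# A shared hold-out set of adjudicated examples, which both sides
# agree to score their competing designs against.
points = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(300)]
holdout = [(p, adjudicate(*p)) for p in points]

def design_a(x):  # one expert's design: uses both features
    return 1 if x[0] + x[1] > 10 else 0

def design_b(x):  # the rival design: uses exposure alone
    return 1 if x[0] > 5 else 0

def accuracy(model, holdout):
    return sum(model(x) == y for x, y in holdout) / len(holdout)

acc_a = accuracy(design_a, holdout)
acc_b = accuracy(design_b, holdout)
winner = "A" if acc_a >= acc_b else "B"
print(f"design A: {acc_a:.2f}, design B: {acc_b:.2f}, adopt design {winner}")
```

Unlike a conventional clash of expert opinion, the dispute here has a verifiable answer: whichever design better reproduces the adjudicated outcomes on data neither side trained on wins.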
The prevailing approach to Rule 23’s predominance requirement prevents many important class actions from being certified. The result is that most of these cases do not get litigated, given the high cost of individual litigation, among other barriers. For class actions to realize their true potential in delivering access to justice for mass harms, these cases, often involving important personal injury claims, must be resolvable by collective means. Individual issues are inevitable, but they are only fatal when they must be adjudicated in a time-consuming manner.
Salib shows AI’s promise to convert this expensive, lengthy process of individual assessment into an efficient and accurate one. To date, the best that class actions have had to offer is rough justice, if any at all. AI-facilitated class actions hold out the hope that the justice system can do much better than that.