Courts of Appeals in the United States are busy and, as numerous commentators and judges have pointed out, are unable to give many cases the attention they deserve. Attention supply and demand are currently grossly misaligned. Worse, there is little realistic hope of reducing the number of appeals or increasing the number of appellate judges. Appellate courts have responded by enlisting attention aids (e.g., more staff attorneys, more clerks), reducing attention commitments (e.g., fewer oral arguments, short and unpublished opinions), and reallocating attention from some types of cases to others.
These moves create numerous practical and normative problems. Perhaps most serious is that appellate judges systematically shortchange some types of cases (e.g., pro se, immigration, social security, and prisoner appeals) and lavish attention on other cases that have superficial markers of importance. Courts of appeals allocate their precious attention according to a cobbled-together set of proxies, heuristics, norms, historic practices, and shortcuts. None of this is to fault appellate judges; many seem unhappy with the current state of affairs and try their best with the limited resources that they have. But it is difficult to avoid the impression that the current hodgepodge of solutions is neither transparent nor efficient nor evenhanded.
Ryan Copus's article is not the first to consider how courts can improve their ad hoc attention-triage systems, but he creatively pushes methodological boundaries to attack old problems from a fresh angle. He combines machine learning with an impressive dataset to estimate how frequently appellate courts have reversed a particular type of case. Cases with low or high probabilities of reversal represent “easy” cases that typically merit little attention and are good candidates for a first look by staff attorneys. Cases in the middle represent “hard” cases that present opportunities to develop law and enhance predictability in the future. The article also uses traditional indicators of error to boost our confidence in the ability of the machine-learning-generated model to usefully predict error.
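The triage logic described above can be sketched in a few lines. To be clear, this is a hypothetical illustration, not Copus's actual model: the thresholds, the docket, and the predicted probabilities below are invented for exposition, and a real system would generate the probabilities from a model trained on historical appellate data.

```python
# Hypothetical sketch of probability-based attention triage.
# The cutoffs (low=0.05, high=0.35) and the example docket are
# invented; they do not reproduce Copus's data or model.

def triage(p_reversal, low=0.05, high=0.35):
    """Bucket a case by its predicted probability of reversal.

    Cases with very low or very high predicted reversal probabilities
    are "easy" and are candidates for an initial staff-attorney screen;
    cases in the middle are "hard" and merit fuller judicial attention.
    """
    if p_reversal < low or p_reversal > high:
        return "easy: staff-attorney first look"
    return "hard: full judicial attention"

# Invented docket of cases with model-predicted reversal probabilities.
docket = {"Case A": 0.02, "Case B": 0.18, "Case C": 0.61}
for case, p in docket.items():
    print(f"{case} (p={p}): {triage(p)}")
```

The point of the sketch is only that the model outputs a probability and a human-chosen policy maps that probability onto an attention level; the decision on the merits remains with the judges.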
This is an impressive contribution in its own right. But Copus does not situate his article as the final word on appellate attention-triage systems. Instead, he posits that it is a “proof of concept” that merits further development. The courts have better and more detailed data that they could feed into machine-learning algorithms. Similarly, machine learning is not one thing but an opulent smorgasbord of approaches. Copus argues that, like corporations and government agencies, courts could sponsor annual competitions to find models that maximize pre-specified, publicly communicated, up-to-date criteria.
As such, Copus's article also helps us think about responsible and palatable uses of the riveting advances in machine learning. Copus's proposal does not recommend or automate outcomes on appeal. Instead, it supports deliberate and informed attention-allocation decisions. Such a proposal is subtly mindful of the tendency of many experts to overestimate their ability to allocate efficiently and of the capacity of machine learning to assist human decisions. Rather than handing our fate to the robot overlords or the corporations that built them, the article advises that we leverage machine learning to boost, not replace, human decision-making.
Despite these strengths, there are moments when I wish Copus were more diplomatic (e.g., when the current draft calls out a host of judges by name for frequent use of unpublished decisions in cases that the model identified as having high error estimates). Treading circumspectly might increase the likelihood of institutional buy-in. Relatedly, even though Copus presents only a “proof of concept,” a more detailed account of the data collection and model creation would enhance interested readers’ ability to identify features to keep or reject in designing future attention-triage systems.
On the whole, the article is an elegant celebration of innovation and iteration, carefully building on prior doctrinal work and preparing rich ground for further inquiry. There are, of course, many other areas of law where attention is in short supply and might currently be systematically misallocated. Entire classes of high-volume litigation and agency adjudications could benefit from transparent, principled, experience-driven attention-allocation systems.
Beyond exploring this approach in different settings, Copus's article points to foundational questions about the role of attention in the law that have long-standing analogues in philosophy and psychology and that deserve further inquiry. How do attention and distraction shape who we are as lawyers and judges? Do law, lawyers, and law schools prioritize some modes of attention over others, and should they? Is attention gendered and culturally specific? Is full attention a precondition for just adjudication? If we are what we pay attention to, then do judges change as their attention shifts? Does just adjudication require moments of mutual understanding of the fact that attention is shared by multiple people and trained on the same object? Those are questions well worthy of our attention.