
Research on the federal courts often follows this basic pattern: 1) identify an issue (often made salient by a recent Supreme Court case); 2) analyze federal court opinions for cases relevant to that issue; and 3) write the article. This process, which we might label “issue analysis,” has served, and will continue to serve, legal scholarship well. Issue analysis is very effective for evaluating and analyzing how courts handle specific doctrines, statutes, and regulations. Less frequently, federal courts scholarship seeks to identify larger, often comprehensive, theories of how judges and courts behave, which we might label “behavior analysis.” Such endeavors can apply to the hundreds of thousands of cases filed each year in the federal courts. As a result, researchers face significant problems not normally associated with issue analysis, including cherry-picking suitable examples, confirmation bias, and inadequate treatment of contrary evidence. Empirical methods are a logical way to deal with those concerns, but publicly available datasets are few and human coding of legal documents can be extremely labor-intensive and costly. Thankfully, these problems can be significantly curtailed by using computer-aided content analysis to evaluate large pools of cases.

Chad Oldfather, Joseph Bockhorst, and Brian Dimmer have published a wonderful article, Triangulating Judicial Responsiveness: Automated Content Analysis, Judicial Opinions, and the Methodology of Legal Scholarship, which illustrates exactly how such research should proceed. The research question they address is a very basic one: how do party briefs affect judicial opinions? One might think such a core question of litigation would have been addressed by numerous studies. However, as the authors rightly explain, the methods for addressing such a question are prone to the classic concerns associated with behavior analysis. As a result, there simply has been no tested general theory of how briefs affect judicial opinion writing.

I highlight this article not necessarily for its actual findings, but instead for its careful and thoughtful methodology. Indeed, the authors recognize that the study’s findings are not their key contribution—the methods of study are where readers can learn the most. The article is more a proof of concept than an attempt to definitively answer the very broad research question it addresses. Because of the sheer volume of courts-related articles published every year, it is quite easy to miss a significant scholarly work with modest conclusions but innovative, sophisticated, and well-executed methods. This piece should not be overlooked.

The authors study a sample of opinions and party briefs from 2004 in the United States Court of Appeals for the First Circuit to determine how the briefs influence the ultimate written opinions (what the authors refer to as “judicial responsiveness”). Initially, the study uses conventional human analysis of briefs and opinions to assess judicial responsiveness, with the researcher coding each case as “strongly responsive,” “weakly responsive,” or “nonresponsive.” Although extensive instructions and examples are given to aid human coders in differentiating the three variable options, the shortcomings of such a technique should be obvious.

The authors do not stop with reductionist human coding in their efforts to analyze the studied opinions and briefs, however. Where the study gets far more interesting is in its use of computer-aided content analysis, which examines the language and words in a document or set of documents, to determine the similarity between party briefs and written opinions. The judicial responsiveness measure is assessed along two dimensions using automated content analysis: word-use similarity and citation utilization. Similarity of word usage is determined using the cosine similarity method, often built into plagiarism-detection programs, which allows the authors to compare the word similarity between each party brief and the written opinion. For citation analysis, the computer determines the percentage of brief citations that appear in the judicial opinion. This type of analysis simply cannot be completed by humans. Word similarity in particular is essentially a machine-only enterprise. Citation tracking might be accomplished with a significant labor force, but such resources are rarely available to legal academics.
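To make the two automated measures concrete, here is a minimal sketch in Python of how cosine similarity over word counts and a citation-overlap percentage might be computed. The tokenization, function names, toy text, and citations are illustrative assumptions on my part, not the authors’ actual code or data.

```python
from collections import Counter
import math
import re


def word_vector(text: str) -> Counter:
    """Lowercase word counts; a crude stand-in for whatever preprocessing the authors used."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two term-frequency vectors (1.0 = identical word profiles)."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def citation_overlap(brief_citations: set, opinion_citations: set) -> float:
    """Fraction of the brief's citations that also appear in the opinion."""
    if not brief_citations:
        return 0.0
    return len(brief_citations & opinion_citations) / len(brief_citations)


# Toy illustration with invented text and citations (not drawn from the study's sample).
brief_text = "The district court erred in dismissing the complaint for failure to state a claim."
opinion_text = "We affirm the district court's dismissal of the complaint."
print(cosine_similarity(word_vector(brief_text), word_vector(opinion_text)))

brief_cites = {"Ashcroft v. Iqbal", "Bell Atlantic Corp. v. Twombly"}
opinion_cites = {"Ashcroft v. Iqbal"}
print(citation_overlap(brief_cites, opinion_cites))  # 0.5: one of the two brief citations reused
```

In this sketch each document is reduced to a vector of word counts, so the cosine score captures shared vocabulary rather than shared argument structure, which is one reason to pair it with the citation measure and with human coding rather than rely on it alone.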

With three different coding techniques to deploy—one human, two computer—the authors produce some very interesting comparisons. Significantly, both computer-coding techniques are strongly correlated with the human coding, despite the relatively small sample size. A reader might be skeptical of potentially subjective human coding, of the difficult-to-comprehend cosine similarity method, or of the limited value of mere citation appearance. However, that all three measures are correlated is itself a remarkable finding. Regardless of the measure used, the results of judicial responsiveness are statistically similar. The correlation supports the statistical validity of each measure independently and strengthens the argument for using them collectively.
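For readers curious about what such agreement looks like in practice, the sketch below shows one way it could be checked: map the human labels onto an ordinal scale and compute a correlation coefficient against an automated score. The mapping, the per-case numbers, and the choice of a plain Pearson coefficient are my own illustrative assumptions; the article’s actual statistical tests may differ.

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length numeric sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)


# Hypothetical per-case data: human coding (nonresponsive=0, weakly=1, strongly=2)
# alongside invented cosine-similarity scores for the same cases.
human_codes = [0, 1, 1, 2, 2, 0, 2, 1]
cosine_scores = [0.21, 0.38, 0.35, 0.62, 0.71, 0.18, 0.66, 0.41]
print(round(pearson(human_codes, cosine_scores), 2))  # a value near 1 indicates strong agreement
```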

This is not to say that the three measures produce identical results. Indeed, the differences identified among the three techniques provide insight into potential refinements of content analysis techniques and directions for future work. The lower rate of judicial responsiveness based upon citation use highlights the limited value of that measure standing alone and ultimately supports a hybrid scoring system. Further, the authors consider the potential value of more sophisticated machine-learning algorithms that could increase the validity of the computer measures beyond the basic techniques they presently use.

Scholars might be confused by, intimidated by, or wary of computer-aided content analysis. However, this article illustrates exactly why academics should make greater efforts to engage with and to understand such tools. By using content analysis methods, a researcher can more reliably and validly answer research questions potentially covering large numbers of legal documents. The article is hopefully the first in a long line of studies that will use automated techniques to study the relationships among the varied pieces of paper that we, as legal academics, make the objects of our professional inquiry.

The careful and thoughtful use of these new tools in Triangulating Judicial Responsiveness provides a model for such future research.

Cite as: Corey Rayburn Yung, Opinions, Briefs, And Computers—Oh My!, JOTWELL (July 16, 2013) (reviewing Chad Oldfather, Joseph Bockhorst, and Brian Dimmer, Triangulating Judicial Responsiveness: Automated Content Analysis, Judicial Opinions, and the Methodology of Legal Scholarship, 64 Fla. L. Rev. 1189 (2013)), https://courtslaw.jotwell.com/opinions-briefs-and-computers-oh-my/.