Congratulations to my friends and colleagues David Han and Greg McNeal, recipients of Pepperdine Law School's 2015 Dean's Award for Excellence in Scholarship for their outstanding 2014 publications:

David Han, Rethinking Speech-Tort Remedies (2014):
Courts generally craft speech-tort jurisprudence as a binary proposition. Any time state tort law and the First Amendment come into potential conflict, courts typically hold either that the First Amendment comes into play and the defendant is completely exempt from traditional tort liability, or that it does not come into play and the plaintiff is entitled to the full complement of tort remedies. In other words, courts generally adopt an unspoken assumption that in speech-tort cases, liability and full tort remedies necessarily go hand in hand.
This rigid approach, however, significantly limits courts’ ability to craft a nuanced balance between First Amendment and tort interests. In individual cases, it forces them to choose only one set of interests to be vindicated to the complete exclusion of the other, and on a jurisprudential level, it gives courts only the bluntest of instruments to tailor speech-tort doctrine to widely varying facts. Furthermore, the current approach exacerbates the distributional problem inherent to speech-tort cases: any time the First Amendment intervenes to completely invalidate a subset of common law tort liability, plaintiffs left without liability or remedy are effectively forced to subsidize the costs of free speech, the benefits of which are shared broadly by the public at large.
In this Article, I argue that courts should incorporate a greater degree of remedial flexibility into speech-tort doctrine. Rather than simply adhere to an all-or-nothing approach, courts should consider intermediate approaches in which the First Amendment applies not to vitiate a finding of tort liability but merely to limit or eliminate the damages to which plaintiffs are entitled. These approaches allow courts to shape the complex balance of speech and tort interests with a scalpel rather than a chain saw, both on a case-by-case basis and on the broader level of doctrinal design.
In recent years, this remedy-based approach to speech-tort jurisprudence has rarely been discussed by courts and commentators, while the shadow cast by the First Amendment over tort law has expanded well beyond the defamation context. This calcification of a rigid, binary approach to speech-tort cases represents a significant lost opportunity for courts to design more sensible and equitable doctrines. By providing a detailed account of the benefits underlying the use of flexible remedies, evaluating potential critiques of such an approach, and laying out concrete examples of what a remedy-based regime might look like in practice, this Article seeks to rekindle judicial, legislative, and academic interest in adopting such approaches within speech-tort doctrine.
David Han, The Mechanics of First Amendment Audience Analysis (2014):

When the government seeks to regulate speech based on its content, it generally does so on an assumption that listeners will process the speech in a manner that produces social harm. Because the chain of causation for such speech-based harm runs through the filter of an audience, courts must constantly make judgments regarding the audience’s reception of such speech. How will the speech be interpreted by the audience? To what extent will the speech cause the audience either to suffer direct emotional harm or to react physically to the speech in a harmful manner? Although this sort of inquiry — which I refer to as “audience analysis” — is integral in resolving a broad range of First Amendment issues, there has been little, if any, holistic examination of its general position and role within First Amendment jurisprudence.
In this Article, I first seek to introduce a degree of theoretical and doctrinal clarity to this aspect of speech causation. After tracing the primary causal paths by which speech may give rise to social harm on account of its content, I observe that each of these paths requires courts to make judgments regarding the audience’s comprehension of, or sensitivity to, the speech in question. I then outline how such analysis currently fits within First Amendment doctrine. Depending on the case, audience analysis can take place either at the front end, in the process of categorizing “borderline” speech, or at the back end, in the application of more generalized scrutiny analysis. These sorts of analyses often look very different from each other, and I delineate the different ways in which courts have approached them.
I then propose that audience analysis should generally be governed by a simple principle: courts should seek to determine, as accurately as possible, the extent to which the targeted audience would foreseeably process the regulated speech in a manner that produces social harm. In other words, courts should strive to conduct audience analysis based on a predictive view of how the targeted audience will likely process the speech, rather than on a strong normative view of how an idealized “rational audience” should process the speech. I argue that this basic principle should shape the tests that courts adopt to define low-value speech, promote greater solicitude for analyzing empirical data in scrutiny-stage audience analyses, and ultimately produce a more transparent jurisprudence that will provide courts with a clearer picture of the actual costs of regulating speech.
Greg McNeal, Targeted Killing and Accountability, 102 Geo. L.J. 681 (2014):
This article is a comprehensive examination of the U.S. practice of targeted killings. It is based in part on field research, interviews, and previously unexamined government documents. The article fills a gap in the literature, which to date lacks sustained scholarly analysis of the accountability mechanisms associated with the targeted killing process. The article makes two major contributions: (1) it provides the first qualitative empirical accounting of the targeted killing process, beginning with the creation of kill-lists and extending through the execution of targeted strikes; (2) it provides a robust analytical framework for assessing the accountability mechanisms associated with those processes.
The article begins by reporting the results of a case study that began with a review of hundreds of pages of military policy memoranda, disclosures of government policies through Freedom of Information Act (FOIA) requests by NGOs, court filings, public statements by military and intelligence officials, and descriptive accounts reported by the press and depicted in non-fiction books. These findings were supplemented by observing and reviewing aspects of the official training for individuals involved in targeted killings and by conducting confidential interviews with members of the military, special operations, and intelligence community who are involved in the targeted killing process. These research techniques resulted in a richly detailed depiction of the targeted killing process, the first of its kind to appear in any single publication.
After explaining how targeted killings are conducted, the article shifts from the descriptive to the normative, setting out an analytical framework drawn from the governance literature that assesses accountability along two dimensions, creating four accountability mechanisms. After setting forth this framework, the article applies it to the targeted killing program. The article concludes with accountability reforms that could be implemented based on the specified framework.