A Time Capsule

From the release of ChatGPT in 2022 until the spring of this year, I sent occasional updates about the development of AI to my colleagues at the law school. In August of 2023, I shared a more substantial background paper, which you can read here. I thought it would be worth sharing all of it here; it's interesting to see how things have developed over time.

AI Update 12/22/2022

Hi friends - I’m writing to let you know about a series of informal brownbags that the tech committee will host in the new year. If you are not aware of a new technology called chatGPT (chat.openai.com), you’re in for a surprise. I think it’s safe to assume that most of our students are aware of it. In a nutshell, it’s an AI-driven chat interface that has a remarkable ability to understand natural language, to remember context, to synthesize propositions, to carry on extended conversations about everything from constitutional interpretation to eigenvectors to mountaineering, and to write and revise poems, lyrics, stories, letters, computer code, memos, essays, and exam answers. Well, you get the idea. It is the most disruptive technology in our field that I have ever encountered—both for what it already is and for what it portends. While this capability has long been on the horizon, I had assumed we were a good number of years away from what seems to be here today.

In any event, we thought it would be a good idea to have an informal series of demonstrations and conversations where we could discuss how to approach this new tech in the classroom and how it might figure into practice. And, going forward, we can also talk more generally about new developments in AI and the emerging legal issues cropping up around its adoption and use. These technologies are controversial, already the subject of litigation, likely to get much better quickly along some dimensions, and likely to hit walls along others.

I’m not announcing any particular times or dates today. We’ll wait until the rush of the first week of the semester is behind us. But I wanted to reach out today to suggest that you give it a try over the break if you haven’t already. Based on my use over the past weeks, I’ve already planned to use it in the classroom if possible. And I know I could benefit from other people’s thoughts, experiences, and practical ideas. It’s a little like figuring out how to teach math when graphing calculators (and MATLAB and Maple) appeared on the scene — in that it has caused me to reflect on what the essential skills and ideas we are hoping to instill actually are. A new, disruptive tool tends to do that.

I hope everyone has a peaceful, fun, and relaxing break. And I look forward to seeing you all again soon! Christian

Hi all - Following up on the meeting today, I am passing along a list of links that may be helpful in thinking through ChatGPT and related tech. We’re always interested in more, and so keep sending, as some of you have, links to items that could be helpful. And let me know if you’d like to help out. C

AI Update 1/25/2023

Overview and use of chatGPT and other AI

Higher ed pedagogy:

AI tools:

Society and limitations:

AI Update 3/23/2023

Hi all - Another update from the Is-AI-the-End-of-Everything team. We’ve split into groups to prepare a report — which we intend to be an updatable platform for information that will support law school staff and faculty as AI-based tools change over time. You may recall that my message in December included our plan to conduct informal ChatGPT brownbags this semester. A lot has happened since then. ChatGPT has been a cultural sensation and the volume of popular and academic writing and conversation about it has made it seem somewhat less important to hold “get to know the tech” brownbags. So too, the introduction of new competitors and new apps based on the underlying tech have made the whole area a fast moving target during these early days. We’d like to gather information and produce something more informed, concise, and helpful.

So we’re now focused fully on producing a report and information resource for you. The areas we’ve identified include:

  • Technical Background and Projections,
  • AI in Practice (e.g., current and projected use by firms, impact on legal ethics, and potential for disruption of existing legal markets),
  • AI in Pedagogy,
  • and Ongoing Institutional Engagement (e.g., role of library in maintaining and creating resources, training or classes to develop, the potential of acting as a clearinghouse for practitioners).

I’ve attached the names of the people researching each of these areas (their emails are in the Cc list). It would be helpful for them to hear from you if you had information you’d like to share with them, questions for them, or areas of uncertainty or emphasis you think should be addressed. (One colleague sent along a very interesting use of GPT in an app that might, eventually, have a dramatic impact on document review and the use of research corpora. More like this!) As well, we have discussed how each of these groups might welcome focus groups, interviews, or brownbags as a way of gathering and sharing information.

As always, I’m available to chat if you have questions or thoughts about these technologies or want to address them with your students more immediately. For example, I attended one of Lisa’s clinic’s sessions to talk about it, and it was interesting to see what she was doing with ChatGPT in that context. And I’m always happy to talk through how to approach rules for assignments or design of assignments. Our report will address these and other areas, but I and the rest of the tech team are willing to chat anytime.

I hope you all have a great spring break and look forward to seeing you soon, Christian

AI Update 12/2023

Hi friends - I’m writing with a brief AI update. I thought this might be helpful as your attention turns to next semester’s classes. First, I’m attaching a set of memos that our excellent LLM student, (name omitted), has prepared for me over the course of the semester. They collect summaries of and links to relevant articles up until the end of October. If you’re interested in sifting through the firehose of law-related AI writing, ssrn maintains a special topics hub https://www.ssrn.com/index.cfm/en/AI-GPT-3/. The core of the memo I distributed earlier in the semester is still a good guide to how these systems work and how they impact practice and pedagogy. An excellent, recent introduction to LLMs by Andrej Karpathy is available on YouTube, https://www.youtube.com/watch?v=zjkBMFhNj_g. I recommend it. (I went through his videos building GPT models earlier this year as I was learning how this tech works. They are also great and referenced in my memo from earlier this semester.)

Second, the state of the products:

OpenAI’s GPT-4:

  1. The context size of chatgpt on the web, for the paid version, has increased substantially to ~25,000 words. You can paste whole articles or cases into the prompt. For example, copy and paste a case; ask it to “write a dissent,” “summarize the holding,” “compare and contrast [another case],” etc.
  2. You can upload PDFs, images, and other documents and chat with chatgpt about them.
  3. It can generate images.
  4. It can search the web.
  5. It can interact via voice when using the mobile apps.
  6. You can build custom “GPTs” by uploading files and giving special instructions. You can then chat with the new custom bot. For example, I created one by uploading our student handbook, resulting in a bot that could answer questions about our policies.

Google: Just released an update to bard.google.com, using a new underlying foundation model they call Gemini. Gemini will come in three varieties, a tiny model for on-device use in Android, a mid-sized model available now via Bard, and a plus model they’re suggesting is better than GPT-4 in many ways. There’s lots of reporting about this. I found this video to be accurate and helpful: https://www.youtube.com/watch?v=toShbNUGAyo&t=1s.

Facebook (Meta) has released a standalone image generator and is continuing to push open source alliances: https://arstechnica.com/information-technology/2023/12/ibm-meta-form-ai-alliance-with-50-organizations-to-promote-open-source-ai/

Apple is working in customary secrecy, and I’d expect to hear about its work in early June at its annual developer conference.

Lexis+ AI is now out. Westlaw has integrated the AI assistant tools from Casetext, which Thomson Reuters bought earlier this year.

Third, my takeaways at the moment:

  1. Next semester is the first semester during which all the common tools our students use—from operating systems, to word processors, to web browsers, to research tools—will have transformer-based large language models built in.
  2. If you’ve used these tools and found them lacking or a mere novelty (as I have!), do not let that make you complacent. It is hard to overstate the amount of investment and engineering that is being poured into improving them.
  3. While our exam-taking software can lock down the exam-taking environment, it will not alter the future of lawyering, which is about to change dramatically.
  4. Anecdotal observations are that firms are moving quickly to experiment and may already be thinking about headcount. Nothing is certain, but the chance of fundamental disruption of the industry (and thus of our industry) is nonzero.
  5. I will be integrating this tech into my classes going forward and will expect students to use these tools. For me, it will be part of my job to help them learn to think, write, and argue using AI assistance. I’m experiencing this as a dramatic discontinuity from my approach thus far in my career.

AI Update 4/2024

Hi all - I'm writing with another AI update. I cover the following three topics: The state of the art among large language models; The near-future of LLMs; Non-LLM AI tech. My main message is that you should consider spending some time with the paid versions of the major models if you haven’t already. See below for more on that.

State of the art among large language models:

The big players among commercially available and useful large language models continue to be: OpenAI (GPT-4), Google (Gemini 1.5), Meta (Llama 2), and Anthropic (Claude 3).

OpenAI, Google, and Anthropic all offer free versions of their latest models, but they also offer much more capable models with additional features for $20/month. With the paid models, you can attach large PDFs, discuss whole articles or even books, work with images, and work with models that are much more sophisticated and able to "remember" very long conversations. For example, you can paste in a bunch of cases or essays and discuss them all without having the model forget the beginning of the conversation. These improvements represent an order of magnitude, and sometimes two or more orders of magnitude, increase in the models' context windows.

I have confirmed with the Dean that we are authorized to use faculty research funds to purchase these subscriptions. I urge you to consider doing so, even if it is only to play around with them. A recent episode of Ezra Klein's podcast has a good discussion of what it's like trying to incorporate these new tools into work like ours. See https://www.nytimes.com/2024/04/02/opinion/ezra-klein-podcast-ethan-mollick.html. If you've only tried the free models, you are likely to be surprised by how capable these larger models are.

Be mindful that these services have different policies on the use of the data you provide. All permit you to opt out of having your data used in further training, but it is sometimes inconvenient to do so. There are also options to purchase group or enterprise accounts, but these are more expensive per seat and do not seem to offer advantages, as yet, for our use cases. It is possible, however, that a clinic or working group might benefit from OpenAI's Team plan. See https://openai.com/chatgpt/team.

Here at the university we also have free access to Microsoft Copilot (https://copilot.microsoft.com - sign in with your MyID). Copilot uses some incarnation of GPT-4 under the hood, but the context lengths appear smaller (4k or 8k in chat, 18k in a somewhat clumsy notebook interface) than the paid ChatGPT version of GPT-4, and I wasn't able to get quite as good results from Copilot. But it is free for us to use, and you might get excellent results from it.

Google just had an event announcing some improvements to Gemini and further integration into their existing services. I haven’t digested this yet and have not used Gemini very much myself. But Ben Thompson’s thoughts are always worth reading: https://stratechery.com/2024/gemini-1-5-and-googles-nature/.

The best results I have obtained discussing law and legal theory have come from Anthropic's Claude 3 Opus model. (Go to Claude.ai.) I'm going to oversimplify here, but: From a single prompt I get B+ to A- work, and with a little extra prompting, I get A+ answers to exam questions, fairly deeply considered summaries and critiques of scholarship, and citations that are much more likely to be real than hallucinated. I was able to upload a PDF of a county's zoning ordinance and get accurate answers to whether I could build an accessory dwelling unit, how to appeal an adverse decision, etc. We're still at a point of "sometimes great, sometimes not so great" with these frontier foundation models, but when they're good, they are very, very good. And conversing with them, learning how to prod and how to correct, often leads to amazing places. This video does a nice job explaining how the current generation of models stack up against one another: https://www.youtube.com/watch?v=ReO2CWBpUYk.

I don't have anything to report about Lexis or Westlaw. In my brief use of the Lexis offering back in January, I found it slow and disappointing. I expect it will improve (and may already have improved), but my money is on the major LLM companies and on startups that leverage those companies' offerings. For example, Harvey.ai is a startup that has partnered with OpenAI to fine-tune GPT-4 for case law. (Fine-tuning involves starting with a large, fully-trained model and then conducting some additional, specialized training to nudge the model's billions of parameters in ways that better model a specific domain, like law.) OpenAI reports that Harvey was fine-tuned with "the equivalent of 10 billion tokens worth of data." And they claim that "[t]he resulting model achieved an 83% increase in factual responses and attorneys preferred the customized model’s outputs 97% of the time over GPT-4." (See https://openai.com/blog/introducing-improvements-to-the-fine-tuning-api-and-expanding-our-custom-models-program -- and scroll down about halfway to see a box containing comparisons of GPT-4's and Harvey's answers to the question "What is a claim of disloyalty?" See also, e.g.: https://www.robinai.com, https://www.darrow.ai, https://www.evenuplaw.com.)
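(If you're curious what fine-tuning looks like mechanically, here is a minimal Python sketch using OpenAI's public fine-tuning API. The file name and model are placeholders of my own, and Harvey's work went through OpenAI's custom models program at far larger scale, so treat this as the general shape of the thing rather than what Harvey actually ran.)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of training examples (e.g., question/answer pairs
# drawn from a legal corpus). The file name here is a placeholder.
training_file = client.files.create(
    file=open("legal_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job. The result is a new model ID whose billions
# of parameters have been nudged toward the training examples.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # a model the public API currently allows
)
print(job.id)
```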


The near future of LLMs:

Meta's Llama 3 likely will drop in the summer, with smaller versions of the new model launching as soon as next week. OpenAI has begun and has perhaps finished training GPT-5, with safety evaluation and testing likely to take several months. They will probably release some sort of GPT update over the summer, maybe a GPT-4.5, but many are speculating that GPT-5 might not be released until after the election. GPT-5 may include techniques (some understood already and some the subject of speculation (just go down the "what is Q*" rabbit hole...)) that will dramatically increase its abilities to reason and to plan.

I cannot emphasize enough that there is a realistic prospect that we are not far away (five years or less?) from a model that outperforms every living attorney in research, writing, and advising tasks under just about any conceivable metric. This is not at all a certainty, but it is astounding to me that it is a real and near-term possibility. What we should do, what we should insist on from our law and lawmaking processes, what we teach students in a world where machines handily outperform us — all these questions have been constantly on my mind, and I hope we can think productively about this together in the coming months and (hopefully) years.


Non-LLM AI-related technologies:

OpenAI's Sora: I'm not going to say anything more about this than that OpenAI has demonstrated a working model that can take text prompts as simple as "Historical footage of California during the gold rush." and produce mind-blowingly realistic video as output. Just go to https://openai.com/sora and check it out if you haven't seen this already.

AI and search: You might have heard of https://perplexity.ai, which aims to replace the Google search box with an AI prompt box. Just go and test it out. I have found more useful, though, a paid search alternative called Kagi (at https://kagi.com). Kagi is a drop-in replacement for Google with much better privacy terms but also a number of great features and less of the clutter that has been plaguing Google in recent years. One feature I find particularly useful is the ability to read a bullet-point, AI-generated summary of any link without visiting the link. And this includes academic articles. In fact, you can restrict your search to academic articles via a pulldown. Whatever your search engine, you might try exploring the settings to see what AI-enhanced features it has.

Other projects: I was particularly impressed with the explosion of AI projects featured in this thread rounding up some YCombinator-funded companies. https://twitter.com/snowmaker/status/1773402574332530953?s=20. They're things like weather forecasts generated by AI rather than more compute-intensive traditional weather models, text to video, AI-generated voices, and the like. And here's another example: https://app.suno.ai -- just try it out (and word is that there's apparently a competitor about to launch an even better tool for this purpose). Also, we're at the cusp of some very impressive advancements in robotics. All this is enabled by the parallelization of training and inference that transformers unlocked.

Research uses: You might want to think about whether any of your scholarly interests have aspects that might be modeled. Any kind of question that can be approached as a next-token prediction task can possibly, given sufficient data, be modeled using a transformer. It might not be obvious that a given task is susceptible to this approach, but creative re-description can sometimes convert a problem into one that is. (See, e.g., voice, video, images, music, movements of robotic limbs, financial data, etc., all of which have been modeled using transformers by defining the problem as a next-token prediction task.) Think of transformers as computer programs that can speak extremely exotic "languages" if trained to do so. UGA maintains high-performance computing clusters, and I've started to investigate how we might be able to use them, perhaps in cooperation with other departments. Please get in touch if you have research questions that might be able to take advantage of an AI approach.
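(To make the re-description point concrete, here's a toy Python sketch of turning a numeric series into "tokens" and building the context/next-token pairs a transformer trains on. The numbers and bin size are made up; real pipelines use learned tokenizers and enormous corpora, but the framing is the same.)

```python
# Toy example: re-describe a numeric series (say, daily readings from an
# instrument) as a sequence of discrete tokens, then build the
# (context, next-token) training pairs a transformer learns from.
series = [12.1, 12.4, 12.9, 13.0, 12.7, 12.2, 11.8]

def to_token(x, bin_size=0.5):
    """Map a continuous value to a discrete vocabulary symbol."""
    return f"T{int(x / bin_size)}"

tokens = [to_token(x) for x in series]

context_len = 3
pairs = [
    (tokens[i:i + context_len], tokens[i + context_len])
    for i in range(len(tokens) - context_len)
]
for context, target in pairs:
    print(context, "->", target)
```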

AI Update 2/2025

Hi all - It's been a long time since my last AI-related missive. Part of the reason for the delay is that AI has become so ubiquitous in the news that there's little I can really add. And, indeed, the big players (OpenAI, Google, Meta, and Anthropic) are still the big players (with one recent arrival). And their latest products, mostly accessible via $20/month subscriptions, are covered as major news, not tech stories.

Here's my big message: Whether the "intelligence" of models continues to increase at the rate it has, overcomes significant gaps in certain reasoning and world-modeling tasks, or doesn't for a long, long time, we haven't yet come close to capitalizing on all that the tech can already do. That means that even if we shut down all new LLM research today, creating new technologies and services based on existing models would still be incredibly valuable and disruptive. As models move out of chat boxes and increasingly into services suited to them, we'll see more of this. (One tangential point that seems reasonable to insert here: The key to having a decent appreciation of what these things are is to remember that, relative to deterministic computing as we've all come to know it, they're good at the kinds of things we're good at, not the kinds of things you'd normally think a computer would be well-suited to do. They struggle with things we struggle with. But unlike us, they don't get tired. And there are both important respects in which they are super-human and many respects in which their reasoning and understanding of the world is laughable. All at the same time! Like I wrote in my first report, best to think of them as alien intelligences.)

Anything that could be improved by being hooked up to a relentless reasoner stands to get the AI treatment. Early efforts include Anthropic's computer-using agent and OpenAI's Operator. These work by using specially fine-tuned versions of their best models to operate a computer. They work... ok but not yet well. In terms of what they do, think: "Computer, find out all about what ultralight camping gear is on the market, including pros and cons, and go ahead and buy gear that improves on what I have at prices that seem especially good." Again, these models are in many ways better suited to this task than to counting the number of r's in the word "strawberry" -- the successful and reliable doing of which was a major milestone in LLM tech last year. But they still, for safety and reliability reasons, interrupt you a lot to ask for guidance or just plain fail.

Here's a summary of where we are on the foundation models that you're probably familiar with and some you may not be:

Anthropic. Latest model: Claude 3.5 Sonnet. Still some of the best output for what we do. Still somewhat expensive. Poised for a new release. Early efforts at computer-using agents not ready for prime time.

OpenAI. Latest model: o3-mini. Most people are still using 4o, the default in the ChatGPT interface. Coming soon is their best known model: o3. The o-models (but not 4o) have been specially trained as reasoning models. These are models in which the basic LLM architecture that predicts next tokens (like the human conceptual system) is deployed to think as well as speak. Thinking is just a series of words, just like speaking, but this series is not (entirely) shown to the user. Just like your own thoughts, these thinking tokens are spent trying to figure out what to say. The speaking then happens after the thinking. These thinking models have been trained on how well they reason (either based on raw results, on the quality and correctness of the reasoning chain, or on some combination). The capabilities of this architecture, at its limits, are staggering. But it's not, yet, cheap.
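(For those who like to see things in code, here is a purely conceptual Python sketch of that thinking-then-speaking pattern. The generate function is a stand-in I made up, not any vendor's API; the point is only that the hidden chain is ordinary token generation that the user never sees.)

```python
def generate(prompt, max_tokens):
    """Stand-in for a call to an LLM; a real system would hit a model API."""
    return f"[{max_tokens} tokens sampled in response to: {prompt[:40]}...]"

def answer_with_reasoning(question, thinking_budget=2000):
    # 1. Spend a budget of tokens "thinking." These are produced exactly the
    #    way any other tokens are, but they are not shown to the user.
    hidden_chain = generate(f"Think step by step about: {question}", thinking_budget)
    # 2. Produce the visible answer, conditioned on the question plus the
    #    hidden chain of thought.
    visible_answer = generate(f"{question}\n{hidden_chain}\nFinal answer:", 300)
    return visible_answer  # only this part reaches the user

print(answer_with_reasoning("Does this Tenth Circuit decision create a circuit split?"))
```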

Google. Latest model: Gemini 1.5, with Gemini 2.0 Flash available in experimental mode. (2.0 Flash seems great, but my own use of it -- see below -- has shown it not to be reliable enough to use routinely. Very much looking forward to an improved release.) Google's models still lag the competition. But Google has done some very interesting things with them. Many or most of you have probably heard about NotebookLM (https://notebooklm.google.com), through which you can generate, among other things, on-demand, life-like podcasts based on documents you upload. It was one of the mind-blowing developments last year. In December, Google announced updates (https://blog.google/technology/google-labs/notebooklm-new-features-december-2024/) that would allow you to join this synthesized conversation in some way. If you haven't used it, try it out. It's free and one of those oh my god moments.

One more thing on Google: They announced Deep Research late last year. I haven't used it yet. Read about it here: https://www.tomsguide.com/ai/google-gemini/i-just-saw-the-future-of-the-web-googles-new-gemini-deep-research-ai-agent-is-incredible. You can compare it with OpenAI Deep Research that just dropped today (but only to the $200/month (that's two hundred dollars) Pro subscribers, with regular subscribers "next"): https://openai.com/index/introducing-deep-research/.

These are the sorts of tools that will combine exhaustive research with whatever reasoning and writing capabilities are available in the best models. We should start using them asap. Because of the price of the best agents, it's easy to foresee wealth even further compounding and better-resourced organizations moving yet further ahead.

Meta: Llama 3 is still their best model. Still open source. They're promising multiple Llama 4 releases this year. See generally: https://ai.meta.com/blog/future-of-ai-built-with-llama/.

DeepSeek: Chinese startup DeepSeek has been all over the news. They released an excellent chat model (like 4o or Claude) and a very competitive reasoning model (like o1 or o3-mini) at very, very low prices. It's unknown to what degree the products owe their capabilities and low cost to ingenious training architecture, skirting export bans, and distilling the outputs of other models. And so it's hard to say whether these models are harbingers of an incipient commodification of intelligence (which would lead to the proliferation of apps). DeepSeek models have been either down or unreliable since last week from some combination of unanticipated demand and cyberattacks. But they worked very well when I tested them a couple of weeks ago.

Finally, one way I've tried to understand better what current technology can do is by building an app myself. The tech helped immensely with coding, and the resulting app leverages AI services to do what it does. It was a deep dive. Indeed, the process of doing it from start to finish and seeing what was possible was the goal. But the app itself has proven to be, at least for my use, pretty damn cool. Upshot: it's an iPhone app (not out yet) that will take a PDF or text and allow you to generate audio narrations (think of clean audiobook-like narrations of ssrn articles on demand), seminars, drafts, oral arguments, debates, critical responses, and whatever else you can dream up based on the input. I just published a blog post about it here: https://www.hydratext.com/blog/2025/2/1/entalkenator-a-tool-for-thinking.

Relevant to us as a faculty is this experience: I have a template in the app that creates first drafts. You give it anything, and it creates a draft of about 3,000 to 5,000 or so words. I googled "circuit split," found the very first one that came up, copied the text from the decision in the case I found first, and prepended this text to the file: "This case out of the tenth circuit creates a circuit split with the fifth circuit. The Supreme Court should take the case and [affirm]." The app generated the attached (which I copied and made into a PDF without alteration). Yes, it's not great, but I didn't cherry-pick. This was just based on the very first output of a simple template calling for an outline with three parts and three subparts in each part and then, step by step, to draft each part. The template is not law-specific. It's not using the best model available - just Sonnet. I could have made it nine parts, called for certain kinds of reasoning or attempts at citation. I could have made it arbitrarily longer or shorter, could have split up subsections into more or fewer parts, could have had it output in Icelandic. And this writing and analysis is as bad as AI will ever be. I made an oral argument too. (My critical response template generated some scathing critiques of my articles, and so I'm all good on that front for awhile.) And over the past couple of months, I've generated workshops based on our workshop papers in advance of our own, actual workshops. And for all this, my guess is that the app is obsolete within months.

Ok - to make a long email one paragraph longer, I admit to being overwhelmed by all these developments, even setting aside events in the broader world. Here's where I come down: It is imperative that we continue to think about what lawyering and the rule of law ought to look like in a world of ubiquitous artificial intelligence. What exactly will our students be doing in five years? What can and should we do to have some influence over a question like that? Will the more specific professional skills we teach be commercially valuable in five years? (I think there is a reasonable — meaning significantly nonzero — chance they will not be.) As I've told many of you before, whatever the answers to these questions, I think it is essential that law as a communal concept be part of the humanities and that we continue to insist education in the humanities is at the core of preserving law's value. Education that leads us to understand ourselves and others more deeply is essential to the rule of law, and that basic need won't disappear when the models out-think us.

AI Update 3/2025:

Hi all - A few updates: some practical news worth sharing and an opportunity. I had intended and briefly scheduled a lunchtime session to demonstrate some of these items. Some unexpected events caused me to cancel it and not reschedule. But if any of you would be interested in Q&A, demos, conversations in April, please let me know. I’d be happy to present, to facilitate, to demo, or just to listen.

To start off, here's a blog post by a Google DeepMind researcher and former LLM-skeptic that reflects in longer form and much better detail what I’ve relayed to you all in prior updates: https://nicholas.carlini.com/writing/2025/thoughts-on-future-ai.html. The opening lines are a good summary: "I have very wide error bars on the potential future of large language models, and I think you should too. Specifically, I wouldn't be surprised if, in three to five years, language models are capable of performing most (all?) cognitive economically-useful tasks beyond the level of human experts. And I also wouldn't be surprised if, in five years, the best models we have are better than the ones we have today, but only in ‘normal' ways where costs continue to decrease considerably and capabilities continue to get better but there's no fundamental paradigm shift that upends the world order.”

As a faculty and as individual scholars and teachers, I think we need to reflect on the possibility that even in the less performant scenario, our industry will be heavily disrupted. I cannot overstate this. While there is still a possibility that the barriers to progress are so substantial that LLMs remain mere “tools,” I’ll bet otherwise. (For more on this, at least one of you mentioned a recent episode of the Ezra Klein podcast with a former Biden AI czar as having alarmed you about potential near-term developments. Relatedly, it really does seem to be the case that there has not been a Manhattan-project-style effort underway to ensure superior AI capability at the national governmental level. That’s, to put it mildly, surprising and alarming to me. But for a recent effort to think about national AI strategies, see here: https://www.nationalsecurity.ai/. And, to extend the free association of this parenthetical to an even more annoying length, a recent conversation I had with Claude about the scenarios by which AI might escape its confines in code and datacenters opened my eyes to a surprisingly rich world of possibilities.)

Anthropic: Updated Claude Sonnet to 3.7. This latest update includes an optional reasoning mode. Remember that when you hear about a “reasoning model,” we’re still just talking about an LLM using the transformer architecture I described in my first report to you all. It’s fundamentally still outputting a sequence of tokens (words or parts of words) using the unfathomably large number of parameters to do so. The difference is that a reasoning model only shows some of these tokens to the user, much like we think in chains of words or other sensations but only speak or write some of them. So a reasoning model can “think” (output lots and lots of tokens in the background as it tries solutions to probes) before it speaks. The engineering to create these sorts of models focused on how to reward the model (nudging its many parameters) for “good” chains of thought and punish it for “bad” chains of thought.

In my testing, the new Sonnet produces longer outputs. For example, in my app when I use the "first draft" template (which asks the model to prepare an outline and then, iteratively, to produce a paper with three major parts, each with three subparts), the old Sonnet would generate about 10,000 or so words, and the new Sonnet tends to produce near 20,000 words. I think the quality is far better for some applications but only marginally better for others.

OpenAI: The big news here is that Deep Research is now available in the $20/month subscription. I think you get 10 Deep Research queries per month at that tier. Deep Research is amazing, and it uses a specially fine-tuned version of an otherwise unreleased OpenAI model o3. You should definitely give it a spin. I have had occasion to engage heavily with all these models to evaluate a complex medical condition for a family member. I used deep research, put the output of that into Claude, asked o1 whether it concurred with Claude (or, more accurately, whether it agreed with “my medical students who generated the following report”), asked Claude to criticize o1’s theories, etc. The results were stunningly detailed, accurate, and more reliable than the actual medical providers involved.

OpenAI also released GPT 4.5 — its older style, non-reasoning model. It is expensive, slow, and not much better than 4o (the default model in ChatGPT). This is the model behind all the reporting last year on leaks about disappointing results from further scaling up existing models. It does seem that just increasing model size without other architectural changes might have been hitting a wall.

My app, enTalkenator: I’ve just pushed an update that should appear on the iOS App Store later today or tomorrow. The current version (once it is approved and appears in the store) is free to use for almost all features: https://www.entalkenator.com/ (you pay the model providers as you use them)— and I’ve started a podcast that updates twice weekly. It comprises (a) faculty workshops based on Larry Solum’s download of the week and (b) brief, introductory classes on new papers in other fields. For example, the episode I just pushed is a brief class on the groundbreaking results reported yesterday about dark energy. It’s kind of amazing to put in a paper I don’t understand at all and get a class out that makes it understandable. And kind of hilarious to see the output that results from a template that creates a highly critical response paper. When there’s a new paper that interests me, I sometimes just dump it into enTalkenator and generate a workshop. As I indicated in my last email, seeing it generate a paper from a few notes is eye-opening. It doesn’t always do great. But it can generate relentlessly and gets better as the underlying models get better. Normal science legal writing and analysis is now automated.

New opportunity: As some of you know, Matt Hall and I are on a UGA seed-grant team focused on AI ethics. We have members in philosophy, CS, Terry, and SPIA. We’re applying for a grant that uses methods I developed making enTalkenator to create an application that allows people in a conflict to engage in a mediated conversation — where the system acts as mediator and go-between. The innovation here is that unlike talking to ChatGPT about a conflict you’re having, this system would interact with you based not only on its training and what you type into the chat window, but also the context from the other participant. The system would thus be a kind of “empathy machine” helping to translate perspectives, potentially across cultures. We think this will unlock a lot of research avenues. Even restricting attention to law, we envision, for example, human trials comparing the app as mediator to human mediators. But there’s a lot more that could be done. If this sounds interesting to you or if you can envision applications of such a system to work we may not have thought of, I invite you to get in touch.

That’s enough for the moment! C

AI Update 4/25:

A lot is happening, and what I intended as a quick addendum to my recent March update has grown long. (And I didn’t even discuss recent gee whiz demos like https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice##demo).

(1) I’ll be hosting an AI brownbag in Cheeley on Monday, April 14. On the agenda, in addition to open discussion, are demos of OpenAI’s Deep Research (not to be confused with the lesser Google product having exactly the same name), o1-pro, whatever they drop between now and then, and my enTalkenator app. I hesitate to do any self-promotion here, but the app is free for everything you’d likely want to do with it - and just get in touch if you want to do more. Orin Kerr succinctly blogged about it here: https://blog.dividedargument.com/p/when-artificial-intelligence-workshops — I think showing what the app can do in terms of paper-writing, class and workshop generation, etc. is a good way to begin to see what’s coming to our field.

(2) Speaking of Deep Research, OpenAI has made 10 uses of their Deep Research tool available per month to the $20/month tier. If you have an OpenAI subscription, you should definitely try it. It was formerly only available to the (very) expensive subscription tier.

(3) And… speaking of the $20/month tier, OpenAI just announced yesterday that this subscription tier will be free to all US university students for April and May (just in time for finals and final papers). This will give students, for free, the aforementioned 10 deep research queries as well as more access to o3-mini and their recently launched new image generation model (built into ChatGPT). In addition to being much better than prior image generation tools, it permits the generation of images depicting identifiable people, including public figures.

(4) Relatedly, Anthropic (whose Claude 3.7 Sonnet is still my go-to) announced a Claude for Education plan that features site-wide licenses and new model products, including a socratic, guided learning mode. https://www.anthropic.com/education. I don’t know whether UGA’s participation in the OpenAI NextGenAI consortium (https://news.uga.edu/uga-joins-nextgenai-consortium-to-accelerate-ai-innovation-education/) would have any bearing on exploring these kinds of opportunities.

(5) Google finally released a model that I think is competitive with Anthropic’s and OpenAI’s offerings. Gemini 2.5 Pro Experimental is very, very good. I’d put it neck and neck with Claude 3.7 Sonnet and above OpenAI’s models in my usage. And it beats those models in many benchmarks. It’s available to all Gemini users.

(6) One of the things I’d like to hear about from you — and to discuss at the April 14 brownbag — is what our norms (and perhaps ultimately rules) should be concerning the use of AI in our communications with each other and in our scholarly work. There are some seemingly easy answers here (always disclose usage) that strike me as likely to become problematic in the medium to long term as AI tools become embedded components in the tools we all use. See also the literature on the limits of disclosure as a regulatory tool. My own practice thus far has been not to use AI at all in my memos to colleagues or in these updates. And I haven’t used it in anything I’ve written on the scholarly front. (Though I did use my app to generate papers from notes for papers that I’d decided not to pursue.) But I cannot say that either of these practices is wise, helpful, or reasonable. Anyway, this is an increasingly pressing issue, and I think we should begin talking about it this semester.

(7) One more thing — on the research front, I found two recent papers from Anthropic researchers (linked and explained in this post: https://www.anthropic.com/news/tracing-thoughts-language-model) to be somewhat mind-blowing; they should be mandatory reading if your inclination is to think of LLMs as “stochastic parrots” or “just regurgitating training data." If you subscribe to the enTalkenator podcast feed (linked in the Kerr post), you’ll find intro classes on both papers and an academic workshop on one of them.

That’s it for now.

Christian

AI Update 4/25, part 2

Hi all - Here’s a second AI update for the month of April. Feels like enough has happened to warrant an update.

Some updates:

(1) Our AI brownbag was very helpful for me and I hope also for those who attended. Those attendees who have been using these tools a lot had what I consider to be fairly existential questions about lawyering that were prompted by what they’ve experienced. We also learned a little about the extent to which at least some clinics have been affected by AI use by both students and by clients. (This is as good a place as any to suggest to those of you who haven’t played around with LLMs much that using one of these models is as easy as going to Claude.ai, gemini.google.com, or ChatGPT.com in a browser and just typing into a chat box. You don’t have to type in any particular format or sound smart or think too much. You can be unafraid of sounding like a dum-dum. Just start typing about something you want to figure out. And the models and you will likely jointly steer your clickety-clack toward ideas and answers.)

In any event, AI now seems to be a ubiquitous part of at least some practice areas on both the lawyer and client sides. Some seemingly easy responses in both practice and law school contexts, such as use restrictions or disclosure mandates, do not seem feasible in the medium to longer term. There was consensus that we need to have some very serious conversations as a faculty about fundamental issues in the coming months.

Further to this conversation, some recent writing has helped me think through my own uncertainty (falling to the hype side of the stochastic-parrot skeptics and to the skeptical side of the singularity-is-upon-us crowds). First is Ethan Mollick’s post, which I highly recommend, on the “jagged” intelligence of the latest crop of models — the way they far exceed human intelligence in some tasks and fall far below it in others: https://www.oneusefulthing.org/p/on-jagged-agi-o3-gemini-25-and-everything. Second is the set of Anthropic papers I linked in the last update: https://www.anthropic.com/news/tracing-thoughts-language-model (uses models to interpret the circuitry of concepts their models use to arrive at next tokens), and third is a skeptical take: AI as Normal Technology, https://knightcolumbia.org/content/ai-as-normal-technology — arguing that we should conceive of AI as a normal technological development rather than as an alien super-intelligence, that its adoption and resultant changes will take time to percolate through the economy, and that this sociology and economy of change has implications for regulatory policy. Be aware, though, that when the authors say “normal technology,” they’re conceiving of AI as an innovation comparable to electricity or the internet — so a huge deal but not qualitatively distinct from other tool developments.

I used my app to create a roundtable conversation about these three articles (and a summary up front for the listener) and put it into the enTalkenator podcast feed. You can listen here: https://www.entalkenator.com/podcast/a-roundtable-on-three-ai-articles-normal-tech-biological-or-jagged-super-intelligence — or, better yet, search for enTalkenator in your podcast app of choice and listen there, where you can listen at 1.5x or 2x.

A fourth article along these lines is a popular piece that appeared in the Guardian on April 19: https://www.theguardian.com/technology/2025/apr/19/dont-ask-what-ai-can-do-for-us-ask-what-it-is-doing-to-us-are-chatgpt-and-co-harming-human-intelligence — on whether using AI will harm our intellectual faculties. I found it resonant with discussions I’ve had here and around the university about creating the conditions for humane intellectual struggle that leads to learning and about the value of what we do generally in universities. (The value in teaching law certainly includes the intellectual growth that comes with any learning about ideas but is also normative, in the sense that it perpetuates collective human authorship of the conditions of our collective life. But that’s a longer story…) This article is more suggestive than strictly illuminating, but I bring it to your attention as gathering a set of concerns that we should have as educators.

(2) OpenAI recently released their next generation models: o3 (their most advanced reasoning model) and o4-mini (the smaller version of what will be their next-generation model). 4o, the default model you usually use when just typing into ChatGPT, has been updated fairly recently, including a dramatic improvement to and liberalization of the usage of its image generation capabilities. It’s not just you: OpenAI’s product naming has become the butt of jokes, even in their own product launch videos.

With a standard $20/month OpenAI ChatGPT Plus account, you get enough access to 4o that you should not have to think about it, 100 messages a week with o3, 300 messages a day with o4-mini, and 100 messages a day with o4-mini-high. As mentioned in my last update, you also get 10 uses per month of Deep Research (you click or tap the little telescope in the chat field to invoke Deep Research).

Note also that you can click the Search icon in that same field to prompt the model to search the web in providing an answer. The use of tools, though, like web search and coding, is increasingly built-in to these models, and you’ll notice more citations, more tables, and other capabilities.

(3) No substantial updates tech-wise on Anthropic. But this report: https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude — on how college students in different fields have used Claude is interesting. Oh, and by the way, if you use Claude 3.7 Sonnet, which seems still to be my go-to for at least the moment, clicking on the + in the chat box brings up options, including the ability to turn on “extended thinking.” This activates Claude’s reasoning mode, akin to o3, o4-mini, and Gemini 2.5 Pro. Keeping it off makes the model a more traditional LLM like GPT 4o.

(4) Google’s Gemini 2.5 Pro continues to impress me, and it might just be the best available model. Google has now released an experimental version of Gemini 2.5 Flash, which, as its name suggests, is a faster, less-capable-but-often-good-enough version of 2.5 Pro. Notably, Google’s own Deep Research product now uses 2.5 Pro and is much more capable.

You’ll recall from earlier updates — and from your own reading probably — that models have a limited “context window” of tokens of which they keep track in a conversation. It’s a little like short-term memory, where if your chat (including files you upload) exceeds the size of the context window, the earlier parts of the conversation roll off, forgotten. This was a big problem with the first generation of models. The original ChatGPT could only keep about 3,000 words in mind at a time. So your conversations with it would become frustrating when not-so-recent parts of your conversation were forgotten. GPT 4 increased its context window to 32,000 tokens, and Google’s Gemini introduced very long context windows of a million or more tokens.

Maybe more important, though, than the size of the context window is how good the model is at being able to work with all that information. One measure of that is the “needle in a haystack” performance — the model’s ability to pull out a single bit of data from its context when asked. But even more important than that is the model’s ability to reason over a very large context - not just pluck out a stray word. Gemini 2.5 has a million token context window (about 750,000 words), is potentially capable of at least 2 million tokens, and reasons very well over more than 100,000 words (perhaps over the whole 750,000). OpenAI’s o3 and o4-mini (200,000 token context windows or about 150,000 words) show similar strengths.
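(The token-to-word arithmetic is simple but worth seeing once. A minimal sketch, assuming the usual rule of thumb of roughly 0.75 English words per token:)

```python
WORDS_PER_TOKEN = 0.75  # rough rule of thumb, not exact

def tokens_to_words(tokens):
    return int(tokens * WORDS_PER_TOKEN)

def fits(document_words, context_window_tokens):
    """Rough check: will a document fit inside a model's context window?"""
    return document_words <= tokens_to_words(context_window_tokens)

print(tokens_to_words(1_000_000))    # Gemini 2.5 Pro: ~750,000 words
print(tokens_to_words(200_000))      # o3 / o4-mini: ~150,000 words
print(fits(12 * 15_000, 1_000_000))  # a dozen 15,000-word articles? True
```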

My point here is that the effectiveness of LLMs’ abilities to reason over long texts (many cases, several articles) has exploded since my first message to you about LLMs back in 2022. This unlocks qualitative changes in use cases. Whereas before you could maybe talk about a portion of an article or a single case or maybe two, now it might be possible to have intelligent conversation and analysis of more than a dozen long articles at a time.

(5) Over time, I expect the model offerings from the major vendors to become much simpler from the perspective of the user — with a single chat-like interface, whether text, audio, or video, that will invoke the model, tools, and capabilities appropriate for the task, without the user’s having to think about or select anything. Just a chat box, and the model takes care of the rest.

I also expect the debates about AI to continue toward emotional polarization. This is unfortunate, I think, and yet another ill effect of broader cultural splintering. There are serious issues concerning environmental impact, workforce displacement, effects on expertise and general intelligence, the leverage it gives bad actors (see, e.g., the recent reports on o3’s facility with virology: https://www.ai-frontiers.org/articles/ais-are-disseminating-expert-level-virology-skills), the contingent role of distinctively human and culturally-specific moral and legal reasoning, and, more speculatively, the potential runaway scenarios that international non-cooperation might make possible. There is a better way than uncritical buy-in or using “AI slop” as an all-purpose epithet. More on that at another time.

Ok, enough for now. Always happy to chat with any of you individually or collectively! Christian

p.s. Count me among those who do not believe disclosure can be the stable cornerstone of AI ethics in the long term. But for now, because some of you have raised the issue, I will say that I have not used and do not plan to use AI in any of my actual academic writing or in any of these updates or other messages to any of you. (Thinking and feedback, yes, writing or even paraphrasing, no.) I do not suggest that that should be our norm or should otherwise be an ethical principle to which others should adhere. I’m uncertain, to put it mildly, about that. But for now at least, if you get something from me, it’s from me.

enTalkenator: a tool for thinking

I've made a new thing. I think it's amazing. It's the kind of project that would have seemed like magic when I was a child. As few as five years ago, I would never have dreamed I'd ever even use such a thing, let alone make it. The app is called enTalkenator, and while it feels to me like magic, it also kind of sucks. I can't wait for it to be better.

enTalkenator Is Magic

enTalkenator is a soon-to-be-released app (UPDATE: It's out!) that turns documents into clean transcriptions, translations, narrations, and entirely new conversations and articles. You give it a PDF or some text, and it can generate a faithful audio narration, an academic workshop, a podcast episode, a critical reply, a first draft, or whatever else you can dream up. I've always wanted a straightforward app that would turn academic PDFs into listenable audio — stripping out headers, footnotes, and the like — without fuss. A "podcast" app for articles. With LLMs this is now possible. Not only that, but you can drop in a Spanish text (even a handwritten one) and get an English or German text out.

Have a half-baked draft, an outline, or some notes? Drop it in (perhaps without any AI cleanup at all), and then select the option to generate a first draft, a workshop, a debate, or whatever else you want. Have a chapter of a textbook and want to generate a summary, an explainer, or even a paper? It can do that. When I was showing a friend the app, I pasted in the Gettysburg Address, imported it in Icelandic, and then generated a 5,000-word structured critical response. Just today, I created a two-and-a-half-hour narration of Larry Solum's selected download of the week from his legal theory blog — the original use case that inspired the app.

All these conversations and articles can be brought to life with good AI voices. And you can share what you've made — text and audio — with others, and, likewise, you can import what others have made. Once one of your colleagues, classmates, friends, or family has created things in enTalkenator, they can freely share them.

The generated conversations and articles are produced by templates in the app. It ships with several such templates, like the ones just mentioned. But you can edit them, make new ones, import others' templates, and share templates with others. You could make an outliner, an exam maker, an exam with answers maker, a heated argument, a peer reviewer, a doctoral defense, an "explain it to me like I'm five" generator, a fiction writer, whatever.

That's the magic part. And turning that magic into a working app was relatively easy. Turning it into a polished app (or as polished as I can make it at the moment, see below), well that's been a ton of work. But it's here. It works. I love it, and I think you will too, despite the downsides.

enTalkenator Kind of Sucks

So what's wrong with it? Well, first, setting it up is more fiddly than I'd like. This fiddliness is a classic barrier: it deters people from ever getting through the setup, even though everything is easy to use once you have. Upshot is that to use enTalkenator with the various AI services, you need to enter your API key for those services.

You have to obtain these keys through webpages that are not directly connected to your existing subscriptions (if you have them). So even though you have a ChatGPT account, for example, you still need to go and set up your OpenAI API access, add some money to the account, and copy a key to paste into enTalkenator. It's actually super simple, but it doesn't sound like it to a new user. Yes, you only need to set up one of these, and, yes, once you set it up you don't have to think about it again until you want to add some more money to the account. But it'll turn many potential users off.
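(If it helps demystify things, this is roughly what an app does with the key you paste in. A minimal Python sketch against OpenAI's API; the key string and prompt are placeholders, and enTalkenator's actual code is different, but the key plays exactly this role.)

```python
from openai import OpenAI

# The API key you create on the provider's site is what the app sends with
# each request so the provider knows whose account to bill.
client = OpenAI(api_key="sk-...")  # placeholder; use your real key

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this article in bullet points: ..."}],
)
print(response.choices[0].message.content)
```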

(An aside: I wish OpenAI, Google, and Anthropic (and DeepSeek) would call them App Keys — as they're the keys to being able to call the services from an app. As a developer, I think in terms of using their APIs. As a user, though, you're thinking of using them in the app. Even better would be a single sign-on API, so that I could just have the user sign in to a service in my app. And even better than that, make App Access an option within the standard Anthropic and OpenAI subscriptions — all accessible from the same interface.)

I've spent a lot of time trying to reduce this friction, but it's still there. The alternative is to run my own server, handle a key or organizational key that the users securely retrieve and send, manage rate limits for the whole app, charge for in-app tokens that then cover the LLM bills, etc. etc. Upshot is that I'd be putting myself between the user and the LLM in a way that comes with all sorts of business and technical headaches and privacy implications. There are potential solutions on the horizon, but as of now I haven't found one that covers the globe, is secure and private, and allows me to deliver the app without ever collecting any data from the user at all. So API keys it is.

How much does it cost? Audio is provided only by OpenAI for now and is about 85 cents per hour. (If you don't opt to generate audio for a document, you can still listen to it using the iPhone's built in speech synthesizer. It doesn't sound great, but it's there. And you can opt to add AI audio at any time.) Generating text varies in price by model. Google's models can be used without charge (but you might have to wait longer, as the rate limits are lower for the free tier). Anthropic's Claude Sonnet 3.5 is still maybe the best text producer for many kinds of text. It costs about 40 cents to produce a transcript of a roughly 35 or 40 minute faculty workshop. It only costs about 11 cents to use the much-discussed DeepSeek R1 to do the same, and a few pennies to use DeepSeek's somewhat lesser model. It's free with the free tier of Google's Gemini.

When you ask enTalkenator to clean up an article for narration or ask it to generate a script, it gives you an estimate of the text and audio costs. If you're generating a workshop, say, you can turn the audio off and just generate a script. You can always hit the waveform button next to the workshop to generate audio later. And if you're loading in a PDF or copying in text from the web, you can turn both off and just get the text you selected or the text stripped from the PDF without AI processing. If what you want to do is generate workshops, oral arguments, or drafts from what you're adding (rather than listen to what you're adding in a clean, audiobook-like way) then don't clean it up at all, just generate. The AI that creates your conversations won't care if the text it's using is a bit messy.
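(The estimate itself is just arithmetic. Here's a minimal sketch of the kind of calculation involved; the text rate below is a placeholder I made up, the audio rate is the roughly 85 cents per hour mentioned above, and the app's real numbers come from each provider's current price list.)

```python
TEXT_COST_PER_1K_TOKENS = 0.015  # placeholder; varies by model and provider
AUDIO_COST_PER_HOUR = 0.85       # roughly what OpenAI audio costs here

def estimate_cost(expected_text_tokens, expected_audio_hours=0.0):
    """Rough cost estimate for generating a script and (optionally) its audio."""
    text_cost = expected_text_tokens / 1000 * TEXT_COST_PER_1K_TOKENS
    audio_cost = expected_audio_hours * AUDIO_COST_PER_HOUR
    return round(text_cost + audio_cost, 2)

# e.g., a ~35-minute workshop script plus its narration
print(estimate_cost(expected_text_tokens=25_000, expected_audio_hours=0.6))
```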

The second issue that sometimes leads to enTalkenator suckitude is that relying on LLM services means relying both on network access and on the providers' abilities to maintain speedy uptime. DeepSeek, for example, has been down — or at least unreliable — for a few days as I write this. I cannot control that. And I cannot control the speed: sometimes the models are slow or you might hit rate limits that force the app to wait for a few seconds. Most of these services have "tiers" that give you higher rate limits the more you spend.
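For what it's worth, the standard way an app copes with rate limits and flaky services is just to wait and try again. A generic sketch of that pattern (again, not the app's actual code):

    # Generic retry-with-backoff pattern for absorbing rate limits and
    # transient provider outages.
    import random
    import time

    def with_retries(call, max_attempts: int = 5):
        for attempt in range(max_attempts):
            try:
                return call()
            except Exception:
                if attempt == max_attempts - 1:
                    raise
                time.sleep(2 ** attempt + random.random())  # back off, with jitter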

Nor can I precisely control whether the models generate amazing things or mere slop (more like enSlopenator...). Better models yield better results. And as models get better, faster, and cheaper, so will enTalkenator. My advice is to try all of them and see how it goes. I put a little money in several services and have had a blast testing them all.

enTalkenator Still Feels Like Magic

I still can't believe I'm able to use an app like this. It's like having a podcast app where conversations come from whatever you give it — no matter how ridiculous. I shared with my family the faculty workshop it generated based on this input: "On poo. By James Thompson. Poo poo and pee pee are funny. This is a serious academic work." While the workshop was more meta than I'd have liked, I could always have another go at it.

I've subtitled the app "a tool for thinking," because that's how I envision it and use it. Go deeper on anything by immersing yourself in the on-demand conversations and articles it makes about whatever interests you. Can it be superficial or wrong? Of course. But that's where you come in.

enTalkenator is free to use to generate transcripts and audio narrations of PDFs or text. (Obviously, you have to pay for the LLM services as described above, unless you're using Google's free tier.) It's also free to export and import enTalkenator files. So if someone else makes a bunch of conversations, you can import them and listen to and read them without subscribing to the app. It's $1.99 / month to use the conversation generation features.

Making enTalkenator an app rather than a web service is a bet that AI is going to get faster, cheaper, and better. The convenience and niceness of the app experience is now possible. I'm betting it's going to become ever more so.

I hope you try it out. It ships with sample content, and I'll probably have some .enTalkenator files you can download over on https://entalkenator.com. Happy generating!

Fair Roaming

Ownership is a sometimes-useful, imaginary concept. When it is used well, it's a practical way to describe and reason together about particular social situations, but, when not seen for what it is, the self-consciously imaginary becomes unselfconscious, collective hallucination. Copyright, in particular, has been so warped by forever-extension that it might as well be land. And land might as well be soulbound. Even pop culture fans refer to their favorite stories as "brands" and "franchises." We've become habituated to the idea that everything, whether in the form of ideas or spaces, is owned by someone.

And so I guess it was predictable that, when the sudden and difficult question of how to incorporate generative AI into our lives arose, so many of us looked for what was owned. Without getting into it deeply: the applications built from transformers at large scale are amazing. I get that immediately relegating feats like blowing past the Turing Test (yes, I know the caveats) to the realm of the normal is maybe human nature these days. Maybe it's uncool not to respond with the "all it's doing is x" response of what looks like a learned and sophisticated insider. (Most such observations treat as fact what is mere, and to me doubtful, hypothesis.) But I won't succumb to the temptation to cultivate an air of sophistication in this way. No, we have a technology that already does seemingly magical things, things I'd have assumed would not be possible for the rest of my career or even life. And we don't yet know what returns to increased scale are possible or what new behaviors architectural changes may unlock, but we do know that the human brain does whatever it does on about 20 watts.

In the teeth of existential threats to some human occupations — no one, not even highly paid consultants, tech pundits, or valley tycoons, knows the extent of these threats on a medium or longer timescale — it has been a poorly fitting appeal to ownership of training data, not to human values or social welfare, that has predominated in the defense of human work. But the use of publicly available but copyrighted materials as training data ought obviously to be deemed fair use. Here's what would obviously be true of a small model that did not do the amazing things that are now scaring people: It's not any more of an infringement to feed some text into such a model than to feed it into your own brain. The best defense I've seen of the copyright infringement argument against training large models with publicly available content is that what would be a fair use can cease to be one when the use occurs at a scale and in a social context that harms the copyright holder, or at least in a way that somehow seems unfair. But this all-things-considered balancing only points to the limits of the copyright-and-fair-use conceptual apparatus for discussing how these machines ought to be used. It is discourse within the ownership hallucination.

Our reflex to grasp at ownership of culture in order to eke out a place among the machines is a sign of an atrophied capacity for collective thought. We don't fight for the alleviation or avoidance of human suffering by stunting the development of machines through legal artifice. For one, it's not even possible to do so in the long run, given that our law won't control every actor everywhere. The tech here isn't like building a nuclear weapon, and it will not take nation-state level resources to build and deploy it. But more importantly, it misdirects the conversation from collective, human values. And that's what worries me. We have in recent decades arrived at a law so overtaken with the rhetoric and conceptual tools tuned to the instincts and material desires of owners and bosses (but justified in the naturalizing rhetoric of equality and autonomy) that our response to an astounding technological advance, one as scary as it is magical, is to fight by pointing out what we "own." We're so accustomed to these misbegotten legal habits that even those who suffer from over-ownership decry the immorality of using "their" publicly available texts to train a model. Immorality!

I'm fine using the word "ownership," understand its utility, even believe in its essential purposes in our law and society. But what do we really own? Nothing. All we have is ourselves, others, and the distinctive human experience of self and other together. If we do use the word, it should be in service of our collective conversation.

All Human Beings

One of the gravest mistakes I have found myself making is falling lazily into the belief that I am a good and well-meaning person. I try to be, of course. But one of the great gifts in my life over the last few years has been a recognition of the central importance of practice in becoming kinder and happier. It is not enough to recognize right and wrong or even to desire to do well for others – or even to do so regularly. Our default homeostatic existence is often at war with what we would do and be for ourselves and others if only we could see this life for what it is. For me, becoming truly more honest, generous, and compassionate requires a continual recognition that the mental models that are essential to making sense of my world and of getting on in conventional life can also be prisons, that actively working on my brain is necessary to grasp this fact at the level required to make a difference.

Your path and practice may differ from mine. Perhaps you have not been as often deluded or as often gripped by aversion or miswanting as I have been. But I was struck by the raw power of practice in my own life this morning while watching a video accompanying new music by Max Richter. I hesitate to refer people to books or talks that have been helpful to me in my practice, because the efficacy and depth of these things is so much a function of the intention of the listener. What may seem corny or cliche to the skeptical can work fundamental change in one who encounters it in the right way, at the right time.

So, with that proviso, try watching this video and saying aloud to each face that appears, without artifice or skepticism but with full belief – "My brother. My mother. My sister. My child." – as the person in front of you would be if you allowed it to be so. I used to think making the world a more humane and peaceful place could not depend on individual moral transformation, as expecting that on a large enough scale was naive. Now, I believe, as strongly as I have ever believed in anything, that it is naive to think we can survive and thrive without it.

The Gun Subsidy

I have been working with a student co-author on an article that formalizes and improves upon a proposal I have made in the past to begin to solve our gun violence crisis. We have a ways to go. But the essence of this proposal is what we frame as "the gun subsidy." Its twin functions are (a) to make explicit what is now implicit: that the costs of gun production are subsidized by victims of gun violence and (b) to shift a very small portion of those costs to gun manufacturers (moderately reducing their subsidy) in order to achieve a more sane political economy on this issue. We want those who know guns best to use that knowledge to help solve the problem.

I include below a portion covering the major points of the proposal. Some details may change, and I do not include what we have been working on with respect to implementation.


The Gun Subsidy

Guns are used to kill about 40,000 Americans each year. They are instruments of suicide, of domestic and workplace rage, of robbery, and of spectacular acts of domestic terrorism. This American carnage, as the president put it on the occasion of his inauguration, can indeed stop. While it is unrealistic, in a country of over 300 million people, to believe we can eliminate all interpersonal violence, it is equally absurd to insist that mass shootings and thousands of gun suicides are as inseparable from our landscape as oxygen.

The gun violence problem is not one of human nature but of social organization. The minds and experience that could best be directed to reducing gun deaths are instead consumed with fending off any and all gun regulation. This dynamic has caused extensive damage not only to victims of violence but also to our body politic. Indeed, the gun debate has become so caricatured and at the same time so stagnant that it has fostered in too many the insidious belief that our greatest problems are beyond our ability even to address. From it has grown a cynicism that politics cannot ever be responsive to social problems. The gun debate is a cancer that has spread to other vital issues. The critical step toward progress is promoting a shared store of facts and a shared effort to minimize social harm.

We propose a first step that centers directly on the political problem. It is not a list of guns to ban or background checks to be performed. Before all else, we must begin rowing in the same direction, and there is a way to accomplish this critical first step: liability. We do not mean liberalizing ordinary private liability, with the attendant lawsuits, discovery, and punitive damage awards. Rather, we propose an unambiguously required and automatic payment by a gun manufacturer to a special fund after one of its guns causes a death. In particular, subject to some details discussed infra, for each person killed by a gun, the gun’s manufacturer would pay $6 million to a federal fund administered by the Centers for Disease Control.

Calling for liability rules in response to social harms may hardly seem novel or sufficient. This reform, though, would not be the end of our effort to stem gun violence, but a necessary beginning that would unlock further rational policymaking. If a substantial portion of the costs of gun violence fell on gun manufacturers, two things would follow. First, and more conventionally, manufacturers’ cost-benefit calculations would drive them to manufacture guns less likely to cause deaths that would lead to payment obligations. But we do not advance this proposal as a means to achieve some sort of law and economics ideal of an “efficient” amount of violence. Rather, the second and more important effect would be a political economic one, turning gun manufacturers from the fiercest opponents into advocates for effective regulations concerning background checks, gun attachments and ammunition, retail sales, and other potentially violence-reducing targets.

There is a bit more to our proposal than this, though. Billing the gun industry for even a modest portion of the social harms it creates would almost surely bankrupt it entirely. A Pigouvian tax would be, as things now stand, a death sentence. Even with the discounting we will propose, the total liability at current levels of gun violence would amount to well over $120 billion on an industry whose domestic private sales revenues are probably less than $20 billion. It is doubtful gun manufacturers could raise prices and alter designs and sales to achieve a reduction in liability sufficient to survive in the short term.

The obvious and normal response to this concern is that imposing liability only reveals a basic economic truth that has existed all along. The industry is not worth its costs. If its customers would refuse to pay prices sufficient to cover all the costs of manufacture, including the cost of violent deaths, then the market in its aggregate voice is telling us not to manufacture guns. One of us favors listening to this voice, but we live in a country in which many do not and in which they cite a Second Amendment they strongly believe requires private gun availability in fact and not only in theory.

This, then, is the second part of our proposal: a Gun Subsidy. From the base, per-death liability payment following a gun death, the CDC would discount at a rate calculated at regular intervals to permit the continuing manufacture of weapons adequate for self-defense within the meaning of District of Columbia v. Heller, while continuing to apply adequate pressure on manufacturers to reduce gun mortality. The amount of the subsidy should represent that portion of our collective valuation of the availability of the Heller right that is not reflected in individual acquisitive preferences.

The combined effect of these provisions, manufacturers’ strict liability to a fund and the Gun Subsidy, is to make at least somewhat explicit what is now entirely implicit and, in fact, invisible in its budgetary implications. Guns cause pain and death even as they bring pleasure to those who enjoy them. We now count that pain and death as no cost at all when collectively deciding through the market how many and what kinds of guns to manufacture and to whom to distribute them. Just as a particular gun cannot be made without acquiring and charging for metal and labor, so too its manufacture and sale cannot be severed from the deaths it will cause or from the collective enjoyment of the constitutional right its availability has been deemed to protect. And yet neither of these latter two values is priced, considered, or widely known.

In the first Part, we describe the mechanics of fund liability. In the second, we summarize its main justifications, adverting to standard tort theory (and the additional benefits of this proposal over private tort suits) and to liability’s political economy consequences. In the third Part, we discuss some implementation details. And in the fourth, we argue that the fund would not violate the Second Amendment, as it was understood in Heller, or other constitutional provisions.

The Proposal

Guns are the means by which almost 40,000 Americans die each year. 40,000 is a useful number as a yardstick of risk in the United States. It’s roughly the number of people who die annually in car accidents. It’s a little less than the number of people who died from opioid overdoses in 2016. It is about the number of suicides. It’s a little more than the total of all pre- and post-natal infant deaths. It’s roughly a quarter of all deaths from all accidents. And it’s between one and two percent of all deaths. These figures are approximate, but 40,000 deaths seems to mark the cost of one social problem after another.

It seems an understatement to note that Americans have widely varying intuitions about the costs and benefits of gun ownership. The best evidence is that keeping guns is, all things considered, somewhat risky. That said, we all do lots of risky things, and if the worst risks guns imposed were a heightened risk of suicide and accidental death, then maybe gun ownership would fall in the same category as smoking or motorcycle riding: things most people believe adults should be able to do if their eyes are open to the dangers.

But guns impose enormous costs that are not borne entirely by gun owners and not at all by gun manufacturers. These costs are measured in medical bills, death, and grief. The one thing everyone can agree on is that this level of suffering is horrible and that it would be good to eliminate it.

What we tend not to agree on is how to measure the benefits of gun ownership. One of us, if he had no humility about the importance others might attach to guns, would ban them entirely and even confiscate the existing stock without compensation. He believes guns are not even close to being worth their cost, that they make safety-obsessed owners much less safe, and that the fantasies they engender of fending off either bad guys or (even more ludicrously) a tyrannical government are unhealthy. But he does understand that guns have important and unknown-to-him meanings for others and that more careful analysis of the “how maintained” and “what kinds of guns” questions could, possibly, point toward an acceptable regime of private gun ownership.

It is precisely in such a circumstance–large but uncontroversial costs offset by controversial and pluralistically understood benefits–that a tax of some form can decentralize the production and distribution questions in a manner less injurious to the public good. Asymmetrical uncertainty is not an obstacle to good public policy. We need not know “the one right solution” to optimal gun production and distribution to make a boring suggestion that will help us all: If gun manufacturers had to pay the costs of gun deaths, then many good things would begin to happen.

Our proposal:

  1. Automatic Liability to the Gun Safety Fund: Gun manufacturers are required to pay $6 million for a death caused by a firearm they manufacture. The manufacturer would be liable not to a private party but to a federal fund, which could be called the Gun Safety Fund and be administered by the Centers for Disease Control and Prevention. Liability would be automatic and avoided only when the death is the result of a legitimate use of force by a law enforcement officer or an exercise of justifiable self-defense. Such defenses to payment could be raised in an administrative hearing before the CDC (and appealed from there as any other administrative adjudication). There would be no private plaintiffs’ attorneys, no fights over punitive or compensatory damages, comparative negligence, discovery, or any of the usual but often necessary sources of inefficiency in litigation. The form of liability would be closer to a death tax than a tort judgment.

  2. The Gun Subsidy: The CDC will initially be charged with determining the amount, to be refunded to the manufacturer following payment to the fund, that is necessary to preserve the practical availability of guns kept for purposes identified in Heller as protected by the Second Amendment, erring on the side of over-subsidizing. Every two years, the amount of the subsidy paid as a refund will be reduced by 2%, unless the CDC determines there is a reasonable likelihood that production would fall below the Heller baseline described above. The upshot is that after a century the subsidy would be a little more than 1/3 of its initial amount (a quick check of this figure follows this list). The CDC will annually publish and publicize statistics gathered on gun violence and highlight the amount of the year’s Gun Subsidy.
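A quick check of that century figure: a century contains fifty biennial 2% reductions, so the fraction of the subsidy remaining after one hundred years is

    0.98^50 ≈ 0.364,

a little more than one third of the initial amount.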

The details, of course, matter. For example, we would make the findings of responsible medical examiners concerning which gun caused a death (and whether it did) conclusive for these purposes, and it would be a federal offense for any agent of a firearms manufacturer to attempt to influence such an examiner. We would also discount the payment owed for gun suicides, not because such lives are less valuable but to require payment only for the excess number of successful suicides caused by guns. That is, the payment would reflect the number of suicides over and above what that number would have been if only alternative methods of suicide were available. We would also require a quadrennial determination by the CDC of this figure through the normal informal rulemaking process. These and other details are covered more fully in Part II.

Fund liability is not intended to be a perfect Pigouvian tax. At each point, we have chosen to calculate the liability using lower bounds. The total amount of the payments we propose would be dramatically less, in aggregate, than the cost of actual harms flowing from the use of guns. For one, it would only require payment for deaths and not for injuries, which number more than twice the number of deaths. And $6 million is less than what most agencies identify as the monetary value of a human life for cost-benefit analysis purposes. But perfect internalization of externalities, a theoretically dubious proposition for reasons well-trodden by Ronald Coase, is not the point. Any significant tax on manufacturers that scales with death will lead manufacturers to take some steps to reduce the tax, both manufacturing and political. It is the direction of social effort that concerns us most, not accounting.

Even this heavily discounted cost internalization, however, is likely too large for the gun industry to absorb. Gun manufacturers’ total revenues from private sales in the United States are probably around $15 billion and almost surely less than $20 billion, with profits of just a billion or two. Even if we assume a total discounting of suicide deaths and that payments would be owed for only half of other deaths, say 6,000 of the 40,000 gun deaths, the aggregate payment would be $36 billion. Despite low-balling the harms again and again, the industry does not come close to being able to cover the costs it imposes. The Gun Subsidy must, therefore, initially be massive if the industry is to be kept afloat. Reducing the subsidy over time, with some degree of certainty, will enable the industry to plan, redesign, alter marketing, work with state governments to implement better laws, and perhaps to participate in gun buy-backs. The responses are difficult for non-experts like us to predict, and that is the very point.
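To make the arithmetic explicit:

    6,000 deaths × $6 million per death = roughly $36 billion a year,

against an industry with perhaps $15–20 billion in annual domestic revenue and only a billion or two in profit.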

Benefits

A. Standard Tort Theory

First, the obvious: If manufacturers must pay for deaths caused by guns they manufacture, at least some of the costs of gun violence, accidents, and excess suicides would be spread over all gun owners rather than borne primarily by victims and secondarily by society at large. That seems both fair and an appealing political argument in favor of shifting costs. Why should victims pay for the downsides of gun ownership? Why should we subsidize gun manufacturers who stand alone in reaping all the profits of their activities but not a very substantial portion of their costs? Higher retail gun prices would result from the automatic payment regime, and these higher prices would reduce the rate of gun ownership, but only rationally so. Of course, if you can manufacture a safer gun, it will incur less liability and so can be made cheaper. People will therefore be more likely to purchase safer guns.

All this is a traditional sort of argument for strict liability. Put the costs of injury on the entity that could most cheaply avoid or minimize them and you wind up with a system that more optimally balances costs and benefits. And so, on this ground, we might be inclined to repeal the Protection of Lawful Commerce in Arms Act, which, with some exceptions, shields gun manufacturers and dealers from liability for injuries arising from crimes committed with their products. We do not favor that and believe that the automatic CDC payment should be the exclusive form of liability.

[Reasons include the benefits of regularized expectations for manufacturers, more immediate cost imposition, reduction of transaction costs, the fact that compensation is a possible use of the Fund, additional certainty may breed more stable changes in manufacturer political behavior, etc. More to come here. ]

This novel form of liability is not designed to achieve the most “economically efficient number of gun deaths.” We both believe the right number of such deaths is zero. But while there are many possible solutions to reducing gun violence, our nation has eschewed all of them. For this reason, we would settle for less than optimal. Our problem is getting anything at all done in the face of powerful incentives to do nothing. To do so, we could try to get the gun manufacturer to think differently about its social role. And that, rather than mere cost-consciousness in its role as vendor, is the most important virtue of this proposal.

B. Political Economy

The payment regime’s most important effect, and one that we hope would have positive spillover effects on other political issues, would be to make gun manufacturers willing participants in social efforts to stem gun violence. When you are the one who will pay the cost of a bad outcome, you become directly concerned with preventing that outcome. Liability gives us a chance to flip the prevailing political script and to get those who know these weapons best to think hard about how to stop their being used to kill in large numbers.

Yes, manufacturers would seek to manufacture safer guns and to advertise and market in ways that minimize the risk of death. These are the vendor-specific effects of a tax. But they would also be far more likely to advocate for state and federal legal restrictions on gun ownership and sales, background checks, enforcement, and public health research. For the riskiest guns, manufacturers might support or even engage in gun buy-backs.

Because it is uncertain, especially from the perspective of those of us unfamiliar with guns and their manufacture, what the most effective mix of regulation and prohibition might be, we should align incentives so that those who do have expertise reveal it. To be clear, we shouldn’t tax gun deaths because we think that the amount of the tax is what life is worth. Nor is the purpose of a payment requirement to suggest that a manufacturer’s moral duty to the killed and maimed has been discharged with a financial transaction. Rather, the goal is to alter the organization of social forces in such a way that we, both gun violence prevention advocates and gun enthusiasts, begin to strive for the same goal, even if we continue to disagree about means. By putting some of the costs of guns back on their manufacturers, there might even arise a new National Rifle Association that is committed to researching and identifying effective regulations. After all, manufacturer lobbies lobby for manufacturers.

There is, we believe, potentially a further benefit of this proposal, though it is harder to quantify. While many of us may not be able to imagine making a living manufacturing assault rifles, people are different. We cannot ignore that people do in fact make these weapons for reasons that others may not completely understand and that they do in fact pay nothing for the deaths that result from their work. Internalizing these costs could change the way gunmakers understand their work, perhaps helping them break free of the ideologically pure and oppositional politics that have corrupted their relationship to the community. Forcing a change in conceiving of the social effects of one’s business from “not my concern” to “my job is making sure that never happens” is a laudable goal on virtue ethics grounds. And while forcing payment will in the first instance change incentives, it just might, in the second instance, change minds and attitudes.

[Administrative implementation details and analysis of Heller will follow.]

A civilization cannot long exist that fails to respond deliberatively to urgent social problems. It is a damning indictment of ours and a great challenge to our existence as a great democracy that we did not respond to the mass-murder of twenty first-grade students in their classroom and six teachers and school workers. And the murders have continued. Democracy is hard work, and ours must find a way to ensure that social problems are perceived, that deliberation is had, and that efforts to solve them are implemented. The process of perceiving, considering, and responding, after all, is what distinguishes the actions of an intelligent being from the mechanics of a clod of earth.

The proposal here is optimistic. It posits that we can be better collectively if only our decision-making were organized in such a way that we engaged the proper facts and lacked incentives to treat others as valueless. Perhaps we are wrong, and our worst instincts resist the moderating influence of political structures engineered to bring out our best. But it is worth trying to become better.

The Way Forward on Supreme Court Appointments

The nominations process for the Supreme Court is broken. Whatever the origins of this crisis, it reached a point of no return when Mitch McConnell determined that the Senate would refuse to consider any nominee put forth by then-President Obama to fill Justice Scalia’s seat. All pretense of a norm of deference to the President on appointments having been abandoned and a total commitment undertaken to do whatever it takes to dominate the Court, McConnell cemented us to a nominations regime of pure and naked calculation. There is, of course, no last strategic act, and a rational response to McConnell’s gambit is to appoint enough justices to achieve a progressive majority as soon as progressives take the White House and Senate. And then, one should expect the same response from conservatives. Ian Ayres and John Witt have suggested expanding the Court temporarily in a rebalancing move. But I’m skeptical, for a number of reasons having to do with legal realism and the nature of the GOP coalition, that we will ever return to a stable, norms-guided regime.

Nor should we. Just as there’s nothing right about McConnell’s historical obstruction, there’s also nothing particularly right about the fact that Justice Scalia’s seat became available in 2016 rather than 2017. It is not based on any principle of justice or democracy that a member of the Court should die in one year rather than another. It is difficult to identify a theory of representation that the current appointment procedure serves well. If you think, as I do, that justices should represent the people as they are constituted over longer stretches of time than legislative or executive politicians, then you would want them to serve long terms and to be insulated from reprisals and incentives from those shorter-term representatives.

But with longer lives, relatively young appointees, the ability strategically to retire, and the fact that nine is a small number relative to a justice’s expected term, the Supreme Court does not meet this representational desideratum. When President Trump’s appointment is seated, conservative justices will maintain their 5-4 majority on the Court. Since Justice Thomas was appointed in 1991, Republicans have controlled the White House for about 11 years. Democrats have controlled the White House for 16 years. Conservative justices have held a 5-4 majority on the Court for every moment of those 27 years. One could of course add to the years of Republican control any number of years prior, but that hardly justifies single-party domination of the Court unless one takes a curiously specific position on the temporal distribution of control of the two branches. More importantly, though, I raise this only to suggest that it is difficult to defend the current practice of lifetime appointments to a very small body, where turnover is either gamed or the random product of death. McConnellism is merely the nail in the coffin. Our fundamental problem is that appointments are either strategically or randomly available and that they are so few that their wattage overwhelms our politics and, lately at least, has caused us to be far less than our best civic selves.

To do better, we need a neutral plan that makes control of the Court turn on future elections and that contains a transition rule acceptable to both sides. That’s why I’ve proposed a 28th Amendment, the text of which you can read here. Solving this problem has three critical components: (a) a workable institutional structure, (b) a reliable appointment procedure, and (c) a clear and acceptable transition procedure. I intend with this amendment to provide all three.

Here are the key institutional features:

  • The Court will have 18 justices.
  • Each justice serves an 18-year term and then becomes available to sit by designation on lower courts or to do other work within the judiciary. So life tenure in the judiciary is preserved, but a life-long seat on the Court is not.
  • A justice departing early is replaced by the usual appointment procedure but only serves the term of the departing justice.
  • The Court may hear cases in panels and en banc.
  • Larger numbers decrease the importance of each individual justice, and the potential for a tie is a feature and not a bug.

And here is the appointment procedure:

  • Each year, the president nominates a justice to replace the outgoing justice.
  • The Senate may reject a nominee within 45 days of nomination if at least 60 members vote to do so. The Senate now has a time limit and must act affirmatively to block an appointment.
  • After three rejections, the Supreme Court will review the nominees and return to the Senate its judgment as to which nominees are professionally qualified. It will continue to do so for each nominee thereafter.
  • Once there are three Court-certified nominees, the Senate has 30 days to pick one of them. If it fails to do so, the president can pick any one of the three without Senate approval.
  • Upshot: there is a check on the appointment of the corrupt and the crazies, but the president will almost certainly achieve an appointment each year.

The most critical element of any restructuring of the Court is a transition rule to which otherwise antagonistic parties can agree. The rule I propose keeps the current members of the Court and treats them as though they had been appointed according to the above system. The additional vacancies will be filled, proportionately and separately, by each of the political parties in the Senate. It may sound complicated at first, but the guiding light is that it generates a Court that reflects control of the White House during the 18 years prior to adoption. Here’s how it works:

  • Justices appointed more than 18 years prior to ratification are treated for purposes of the term limit as though they had been appointed at the earliest possible date by the president of the same political party that had appointed them. This would go in order of seniority so that the most senior Republican-appointed justice would be deemed to have been appointed in the first possible year he could have been appointed by a Republican president. And so on.
  • Justices appointed within the past 18 years will be deemed appointed in the year they were actually appointed, but if that year is unavailable then the next year in which there is a same-party vacancy. If there is more than one such justice, the first appointed will be deemed appointed in that year. In other words, we fill out the available slots by seniority, working forward from each justice’s actual appointment year. If there is no vacancy, then the most senior justice of that party is deemed retired and the process is begun again. Any justice who cannot be assigned an appointment year by this method is deemed retired.
  • Actually applying the procedure makes it plainer. If the Amendment were adopted now, we would need justices to fill slots beginning in 2001 and ending in 2018 (18 justices, one per year). We begin by assigning appointment years to the justices appointed more than 18 years ago: Thomas, Ginsburg, and Breyer. Thomas is the most senior and is a Republican appointee. The earliest available slot for a Republican appointee is 2001, when George W. Bush was president, and so Thomas is deemed appointed in 2001 and would step down in 2019. Ginsburg would be deemed appointed in 2009, the first available appointment year for a Democratic appointee. Breyer, then, would be deemed appointed in 2010.
  • Next, we turn to the justices who have been appointed in the past 18 years. Roberts was appointed in 2005 and Alito in 2006. Both of those years are available, and so both are deemed appointed in their actual appointment years. Sotomayor was appointed in 2009, but that year is unavailable, because Ginsburg has been deemed appointed in that year. The next year in which there is a same-party vacancy is 2011, because Breyer has been deemed appointed in 2010. Thus, Sotomayor will be deemed appointed in 2011, and Kagan, because she was appointed in 2010, will be deemed appointed in 2012. Gorsuch’s actual appointment year is 2017, and that year is available, as is 2018 for any Trump appointee filling Justice Kennedy’s seat.
  • We now have nine vacancies corresponding to various appointment years. These would be filled as follows. Any vacant appointment year will be filled by a justice selected by a majority of Senators of the same political party as the president for that year. So Senate Republicans would make appointments for the years 2002-2004 and 2007-2008, and Democrats for the years 2013-2016.
  • Note that the procedure above handles more exotic configurations of the Court, sometimes forcing retirements, but always matching the political composition of the Court with control of the White House during the prior 18 years. (A short sketch of the assignment procedure in code follows this list.)
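Here is that sketch. It is illustrative only: the names, the party-by-year table, and the roster are my own simplifications of what is described above, and it implements only the main path (it omits the fallback in which the most senior same-party justice is deemed retired and the process restarts). Vacant years are then filled as described above.

    # Illustrative sketch of the transition rule's year-assignment procedure.
    # Simplified: main path only; omits the "retire the most senior same-party
    # justice and restart" fallback.

    # Party holding the White House for the largest portion of each year
    PRESIDENT_PARTY = {y: "R" for y in list(range(2001, 2009)) + [2017, 2018]}
    PRESIDENT_PARTY.update({y: "D" for y in range(2009, 2017)})

    # (name, party of appointing president, actual appointment year), by seniority
    SITTING = [
        ("Thomas", "R", 1991), ("Ginsburg", "D", 1993), ("Breyer", "D", 1994),
        ("Roberts", "R", 2005), ("Alito", "R", 2006), ("Sotomayor", "D", 2009),
        ("Kagan", "D", 2010), ("Gorsuch", "R", 2017),
    ]

    def assign(window=range(2001, 2019)):
        slots = {}  # deemed appointment year -> justice

        def first_open(party, start):
            return next((y for y in window if y >= start
                         and y not in slots and PRESIDENT_PARTY[y] == party), None)

        for name, party, actual in SITTING:
            if actual < window[0]:            # appointed more than 18 years ago
                year = first_open(party, window[0])
            elif actual not in slots:         # actual appointment year still open
                year = actual
            else:                             # next open same-party year
                year = first_open(party, actual + 1)
            if year is None:
                print(name, "deemed retired")
            else:
                slots[year] = name

        for y in window:                      # remaining years are filled as described above
            print(y, slots.get(y, "vacant (%s year)" % PRESIDENT_PARTY[y]))

    assign()

Running it reproduces the table below: Thomas lands in 2001, Ginsburg in 2009, Breyer in 2010, Sotomayor in 2011, Kagan in 2012, with Roberts, Alito, and Gorsuch keeping their actual years.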

Here is the transitional Supreme Court:

  • 2001: Thomas
  • 2002-2004: 3 new Senate GOP appointees
  • 2005: Roberts
  • 2006: Alito
  • 2007-2008: 2 new Senate GOP appointees
  • 2009: Ginsburg
  • 2010: Breyer
  • 2011: Sotomayor
  • 2012: Kagan
  • 2013-16: 4 new Senate Democrat appointees
  • 2017: Gorsuch
  • 2018: new Trump appointee

In 2019, Thomas steps down and is replaced by a Trump appointee. In 2020, the first new GOP Senate appointee steps down and is replaced by a Trump appointee, etc. The result is that GOP appointees would hold a 10-8 majority until 2021, when the new president would begin making appointments. This preserves the status quo until the next presidential election, on which a Supreme Court majority will turn in predictable fashion. But, also, with greater numbers there is the chance that political majorities on the Court will be more tenuous and less ideologically rigid.

I believe that the nominations crisis that became impossible to ignore with the blockade of Merrick Garland presents an opportunity to create a more representative Court and an appointments process less prone to degrading our political virtues. My amendment is one way forward, and I would love to debate its merits and alternatives.

Amendment XXVIII: A First Draft

Section 1. Article III, Section 1 is hereby repealed. The authority granted in Article II, Section 2 to the President to nominate and to appoint, by and with the advice and consent of the Senate, judges of the Supreme Court is hereby revoked.

Section 2. The judicial power of the United States, shall be vested in one Supreme Court and in such inferior courts as the Congress may from time to time ordain and establish. The Supreme Court shall have the power to hear cases before panels of some of their number and en banc, according to procedures it establishes. A resolution by a panel of the Supreme Court shall be deemed a resolution by the Supreme Court, unless it thereafter reviews the resolution en banc.

There shall be eighteen Justices of the Supreme Court, each of whom shall serve an eighteen-year term as an active Justice. Thereafter, a Justice may continue to serve by designation on lower courts and otherwise to support the judiciary. The judges, both of the supreme and inferior courts, shall hold their offices during good behavior, and shall, at stated times, receive for their services, a compensation, which shall not be diminished during their continuance in office.

Section 3. Upon a vacancy on the Supreme Court, a Justice shall be appointed by the President after nomination, unless the Senate disapproves by a vote of 3/5 of its number within 45 days of notification of the nomination. In case three nominations for a vacancy are disapproved, the Supreme Court shall pass on the professional qualifications of the disapproved nominees and any disapproved nominees for the vacancy thereafter. When the Supreme Court has returned to the Senate three qualified nominees, the Senate shall have 30 days to confirm the appointment of one of them, else the President shall appoint from among them.

A Justice who, by reason of death, retirement, removal, or otherwise, departs active service before the end of the Justice’s eighteen-year term shall be replaced according to this appointment procedure, except that the appointee shall serve as an active Justice for only the remainder of the departing Justice’s term.

Section 4. A Justice serving at the time of the ratification of this Amendment and whose term has otherwise expired shall, in order of seniority, be deemed to have been appointed in the first year during which the presidency was held by the same political party as the Justice’s appointing President and in which no Justice senior has been deemed appointed. If there is no such year, the Justice is deemed retired.

Any other Justice shall, in order of seniority, be deemed appointed in the year the Justice was in fact appointed, but if another Justice senior has been deemed appointed that year under this Section, then the Justice is deemed appointed in the next year during which the presidency was held by the same political party as the Justice’s appointing President and in which no Justice senior has been deemed appointed. If there is no such year, then the most senior Justice appointed by a President of the same party is deemed retired and the appointments shall proceed under this Section without that Justice.

There shall be a transitional appointment procedure by which any vacancies that exist at the time of ratification are filled. For any year of the 18 years prior to ratification in which no appointment was made or deemed made by this Section, a majority of those Senators belonging to the political party of the President in office for the largest portion of that year shall appoint a Justice, who will thereafter be deemed to have been appointed in that year. Any vacancy arising within two years of ratification from the retirement of a Justice serving at the time of ratification shall be filled by this transitional procedure if the Justice's term has not expired.

References in this section to political parties do not create any novel structural role for political parties other than expediently and acceptably constituting this transitional procedure.

Overcoming Gun Violence

[Note: This post elaborates an idea Joe Miller and I explored on an episode of Oral Argument. That discussion is in places more detailed and in places less.]

This American carnage, as the president put it on the occasion of his inauguration, can indeed stop. While it is unrealistic in a country of over 300 million to believe we can eliminate all interpersonal violence, it is equally absurd to insist that mass shootings and thousands of gun suicides are as inseparable from our landscape as oxygen. To shout down even the possibility of change is not only ignorant and unimaginative, it’s callous.

To say that there is no solution to this new and deadly parade of spectacular violence is a grievous insult to all those who struggled before us, and against much greater odds, for justice and for survival. Our founders, our revolutionaries, our heroes, from Washington to Harriet Tubman to Lincoln to MLK, of course they didn’t end forever the risk of upheaval or destroy for all time all social ills. But they gave to us a fighting chance, one that is now ours to blow. Have we grown so inept and passive that the instant an actual challenge confronts us we pronounce the task politically insurmountable? Again, what a shocking insult such an attitude is to those who have come before us. We must not only try to fight evil in our time, but, more fundamentally, we must resolve to organize ourselves to do so. And we can.

Our primary problem here, as with too many other issues, is not one of human nature but of social organization. The minds and experience that could be directed to reducing gun violence are instead consumed with fending off any and all gun regulation. This dynamic has caused extensive damage not only to victims of violence but also to our body politic. I do not believe in seeking an end to politics, a perpetual bipartisanism. No, it’s important and good that we disagree with one another vehemently about things that matter. But the gun debate has become so caricatured and at the same time so stagnant that it has fostered in too many of us the insidious belief that our greatest problems are beyond our ability even to address. From it has grown a cynicism that politics cannot ever be responsive to social problems. The gun debate is a cancer that has spread to other vital issues, and it must be cured.

I propose a first step that centers directly on the political problem. It is not a suggestion of guns to ban or background checks to be performed. Before all else, we must begin rowing in the same direction, and there is a way to accomplish this critical first step: liability. Not private liability, with lawsuits, discovery, and punitive damage awards, but an unambiguously required and automatic payment by a gun manufacturer to a special fund after one of its guns causes a death. This change would not be the end of our effort to stem gun violence, but a necessary beginning that would unlock rational policymaking. A civilization cannot long exist that fails to respond deliberatively to urgent social problems. It is a damning indictment of ours and a great challenge to our existence as a great democracy that we did not respond to the mass-murder of twenty first-grade students in their classroom and six teachers and school workers. And the murders have continued. Democracy is hard work, and ours must find a way to ensure that social problems are perceived, that deliberation is had, and that efforts to solve them are implemented. The process of perceiving, considering, and responding, after all, is what distinguishes the actions of an intelligent being from the mechanics of a clod of earth.

I Don’t Know Anything About Guns

Guns are the means by which almost 40,000 Americans die each year. 40,000 is a useful number to use as a yardstick of risk in the United States. It’s roughly the number of people who die annually in car accidents. It’s a little less than the number of people who died from opioid overdoses in 2016. It is about the number of suicides. It’s a little more than the total of all pre- and post-natal infant deaths. It’s roughly a quarter of all deaths from all accidents. And it’s between one and two percent of all deaths. These figures are approximate, but – see here for details – 40,000 deaths marks one social problem after another.

Now if you’re a proud gun enthusiast, you and I are not going to have the same intuitions about the costs and benefits of gun ownership. The evidence is that keeping guns is, all things considered, somewhat risky. That said, we all do lots of risky things, and if the worst risks guns imposed were a heightened risk of suicide and accidental death, then maybe we could put gun ownership in the same category as smoking or motorcycle riding: things adults should be able to do if their eyes are open to the dangers.

But guns impose enormous costs that are not borne entirely by gun owners and not at all by gun manufacturers. These costs are measured in medical bills, death, and grief. The one thing everyone can agree on is that this level of suffering is horrible and that it would be good to eliminate it.

I want to compromise. You see, I care nothing for guns. I know little about them other than what I’ve read and what I’ve learned watching PUBG matches on Twitch. I’m not a gun guy. If it were up to me and if I had no humility about the importance others might attach to guns, I’d propose we ban them entirely and that we confiscate the existing stock without compensation. Sounds extreme, right? Well, I believe they are not even close to being worth their cost, that they make safety-obsessed owners much less safe, and that the fantasies they engender of fending off either bad guys or (even more ludicrously) a tyrannical government are unhealthy.

But I do understand that guns have important and unknown-to-me meanings for others and that more careful analysis of the “how maintained” and “what kinds of guns” questions could, possibly, point toward an acceptable regime of private gun ownership. How do we get there?

Automatic Liability to a Fund

If you suggest an assault weapon ban, gun people, in my experience, immediately assail the idea as ineffective and reflecting profound ignorance of what guns are and how they work. Whatever. I’ll concede that I just don’t know much about guns. I’m not the right person to decide whether and how guns and gun sales could be safer. But the beauty of economics and thoughtful politics is that I don’t have to know “the one right answer” to optimal gun production and distribution to make a boring suggestion that will help us all:

If gun manufacturers had to pay the costs of gun deaths, then a number of good things would begin to happen.

I propose that gun manufacturers be required to pay $6 million for a death caused by a firearm they manufacture. The manufacturer would be liable not to a private party but to a federal fund, which could be called the Firearm Safety Fund and be administered by the Centers for Disease Control and Prevention. Liability would be automatic and avoided only when the death is the result of a legitimate use of force by a law enforcement officer or an exercise of justifiable self defense. Such defenses to payment could be raised in an administrative hearing before the CDC (and appealed from there as any other administrative adjudication). There would be no private plaintiffs’ attorneys, no fights over punitive or compensatory damages or comparative negligence or discovery or any of the usual but often necessary sources of inefficiency in litigation. This would be closer to a death tax than a lawsuit.

The details, of course, matter. For example, I would make the findings of responsible medical examiners concerning which gun caused a death (and whether it did) conclusive for these purposes, and it would be a federal offense for any agent of a firearms manufacturer to attempt to influence such an examiner. I’d also probably discount the payment owed for gun suicides - not because such lives are less valuable but to require payment only for the excess number of successful suicides caused by guns – i.e., the number of suicides over and above what that number would be if only alternative methods of suicide were available. See, e.g., chapter two of Liza Gold, Gun Violence and Mental Illness. I’d perhaps require a bi- or triennial determination by the CDC of this figure through the normal informal rulemaking process.

This is not intended to be a perfect Pigouvian tax. The amount of the payment I suggest would be significantly less, in aggregate, than the cost of actual harms flowing from the use of guns. It would only require payment for deaths and not for injuries, which number more than twice the number of deaths. And the $6 million figure is less than what most agencies identify as the monetary value of a human life for cost-benefit analysis purposes. But perfect internalization of externalities, a theoretically dubious proposition for reasons well trodden by Ronald Coase, is not the point.

The Ordinary Benefits

First, the obvious: at least some of the costs of gun violence, accidents, and excess suicides would be spread over all gun owners rather than borne primarily by victims and secondarily by society at large. That seems both fair and an appealing political argument in favor of shifting costs. Why should everyone and especially victims pay for the downsides of gun ownership? Why should we all subsidize gun manufacturers who stand alone in reaping all the profits of their activities but not a very substantial portion of their costs? Higher retail gun prices would result from the automatic payment regime, and these higher prices would reduce the rate of gun ownership, but rationally so. Of course, if you can manufacture a safer gun, it will incur less liability and so can be made cheaper. People will therefore be more likely to purchase safer guns.

All this is a traditional sort of argument for strict liability. Put the costs of injury on the entity that could most cheaply avoid or minimize them and you wind up with a system that more optimally balances costs and benefits. And so, on this ground, we might be inclined to repeal the Protection of Lawful Commerce in Arms Act, which, with some exceptions, shields gun manufacturers and dealers from liability for injuries arising from crimes committed with their products. I do not favor that and believe that the automatic CDC payment should be the exclusive form of liability. That’s because I think it would be a cleaner and more certain way to regularize the expectation of manufacturer cost.

I’m not suggesting this novel form of liability in order to achieve the most “economically efficient number of gun deaths.” There are many possible solutions to reducing gun violence, and we have eschewed all of them. I’d settle for less than optimal. No, our problem is getting anything done at all when there are powerful incentives to do nothing. And I want the manufacturer to think differently about their social role.

The Promise

The payment regime’s most important effect, and one that I hope would have positive spillover effects on other political issues, would be to make gun manufacturers key and willing participants in stemming gun violence. When you are the one who will pay the cost of a bad outcome, you become directly concerned with preventing that outcome. Liability gives us a chance to flip the script and to get those who know these weapons best thinking hard about how to stop their being used to kill in large numbers.

Yes, manufacturers would seek to manufacture safer guns and to advertise and market in ways that minimize the risk of death. But they would also be far more likely to advocate for state and federal legal restrictions on gun ownership and sales, background checks, enforcement, and research. For the riskiest guns, manufacturers might support or even engage in gun buybacks.

Because I am not sure what the most effective mix of regulation and prohibition might be, I want to align incentives so that those who do have expertise reveal it. To be clear, we shouldn’t tax gun deaths because we think that the amount of the tax is what life is worth and that if you can pay then death is fine, but, rather, because it would alter the organization of social forces in such a way that we begin to strive for the same goal, even if we continue to disagree about means. By putting some of the costs of guns back on their manufacturers, we might even wind up with a new NRA that is committed to researching and identifying effective regulations. After all, manufacturer lobbies lobby for manufacturers.

Questions

“What about the Second Amendment?” Read Part III of Scalia’s opinion in District of Columbia v. Heller. For example: “[N]othing in our opinion should be taken to cast doubt on longstanding prohibitions on the possession of firearms by felons and the mentally ill, or laws forbidding the carrying of firearms in sensitive places such as schools and government buildings, or laws imposing conditions and qualifications on the commercial sale of arms.” He also strongly suggests that “weapons that are most useful in military service—M-16 rifles and the like—may be banned.”

“Why do you equate the lives of children with money?” I do not. The purpose of a payment requirement is not to suggest that a manufacturer’s moral duty to the killed and maimed has been discharged with a financial transaction. Personally, I cannot imagine making a living manufacturing assault rifles. But people are different, and we cannot ignore that people do in fact make these weapons and do in fact pay nothing for the deaths that result from their work. I believe that internalizing these costs would force a change in the way they understand their work, breaking them free of the ideologically pure and oppositional politics that have, in my view, corrupted their relationship to the community. The goal is to force a change in how they conceive of their business: from “not my concern” to “my job is making sure that never happens.” And while forcing payment will in the first instance change incentives, it just might, in the second instance, change minds and attitudes.

“But with this number of deaths, even discounting for suicide, the industry might be on the hook for over $150 billion!?!?” The costs of gun violence are shockingly high, aren’t they?
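As a rough sketch of where a number of that size could come from (round figures I am assuming here, not ones drawn from the proposal itself): the United States sees something on the order of 15,000 to 20,000 non-suicide gun deaths each year, and federal agencies commonly value a statistical life at roughly $10 million. Even the low end of that range gets you there:

15,000 deaths per year × $10,000,000 per death ≈ $150,000,000,000 per year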

“This is a ridiculous suggestion because gun manufacturers won’t be able to pay these astronomical costs and stay in business.” Drop absolutely everything else you are doing, find a quiet place, and think very, very hard about what you just said.

Lamentation

It’s not that death is unexpected. Even untimely, it demands acceptance. Perhaps the most odious snark is to criticize how others mourn a passing. I won’t do that. This year of avulsion has wrenched our future from the familiar channels of our politics, our nostalgia, and our efforts to mean something.

Deaths aren’t the only occasions for existential confrontation with ourselves. Maybe we’re struck upon seeing the surface of another planet or reading about the sterilizing jets of a gamma ray burst. But these only extend to further realms of the unimaginable the truth we learn more directly when struggling through the sands and forests of terrestrial wilderness: We are not the universe’s conceptual center. What is yet still harder won is to feel, rather than just to think, not that we are within the universe but that we in fact are the universe, our separateness an illusion and our sensed connections a pale but suggestive reflection of reality.

Jedediah Purdy warns against taking too far the belief that reality is a continuous fabric, its people, rocks, and stars not discrete phenomena but conceived as such by the mind - and this, the mind’s construction, as much an undifferentiated ripple as falling rocks or calving glaciers. As he puts it, we may be tempted, especially in this moment, to combat the myopia of self-interest by believing “biological identities are possible only because of aliens within us, the bacteria and portmanteau cells that form our so-called selves.” But this, he reminds us, is “inadequate because it does not take seriously ... that democratic community is utterly real, as real as dirt, because we are trapped in it, because the facts we majoritarian bandits choose become the facts we live with every day.”

And that is indeed the brute fact, that we do suffer, that we do fear, and that we do thrill and love. Even though we are the universe, this universe that we are imagines alternatives to the causes and effects that mark its temporal shape. It imagines joy and suffering, the very real, grounded states we believe are our own. In culture, as well as in law, it expresses as a humming multitude of minds all aware of one another, a hall of mirrors.

The deaths this year have come as repeated blows to this collective imagination. So many talents, so many hauntingly beautiful and wonderfully flawed people have left us. They stand in even greater relief against the electoral victory of Trump, a triumph of fear over imagination itself. His toddler instincts are so obviously the unrepressed failures of introspection that we all sometimes recognize bubbling up within ourselves. He secretes them as infantile demands to be adored, to be the most powerful, and to get the last hit, demands the rest of us usually damp through inner, reflective conversation. It feels too much to bear that his repeated, embarrassing blatherings are treated as important, even as we mourn the passing of adult lives of such full scope.

From music, to art, to science, to film, and even to goofy TV shows whose decades-old cathode beams still illuminate our adult minds, our culture and its pioneers are shadowy representations of the true fact of our togetherness. Their genius is ours. Their failings, ours. To say this is to engage in more than collective claiming; it is to restate the ultimate truth. While our universal body regularly sheds its skins, mostly escaping similarly universal notice, we find ourselves now riddled with cancer and wishing them back, wishing that our body would cease its sloughing and keep warm by a hearth we wish were there.

A Politics of Decency

The greatest pleasure of my career as a law professor has been engaging with students and colleagues of diverse ideologies and backgrounds. I don’t just tolerate those who hold opposing policy preferences and core beliefs; I love them, and I have been deeply affected by conversation, laughter, and serious argument with so many. I hold no grudges and demand no deference, but I also pay those whom I teach and teach alongside the respect of not pretending that I have no commitments or opinions. We are stronger not in spite of such frankness but because of it, when it is accompanied by good will. I write now because this central experience in my life and the very promise of an enlightened Republic are, I fear, in grave danger. But we can do this together.

Donald Trump is immoral and indecent, and both his authoritarian tendencies and his narcissism threaten our values and institutions. I have been greatly heartened at various points along this darkest timeline by the many decent conservatives who have stood up and said no.

Take Evan McMullin, with whom I respectfully but adamantly disagree on issues ranging from the causes and significance of the national debt to health care reform to guns. Evan, though, has been the kind of leader we now need, articulating something more basic than these issues, something that should unite us all against Donald Trump. On Twitter, he has called out Trump’s conflicts of interest, his seeming alliance with authoritarians, and his lack of concern for the most vulnerable. His solution:

Now we have the opportunity, in fact the need, to claim the common ground that I know is there. That common ground is liberty & equality.

I agree. I propose thinking about our common ground in its most essential form, the form that rejects Trumpism and encompasses a general commitment to liberty and equality: basic decency.

A politics of decency:

  • rejects authoritarianism, even as it embraces meaningful disagreement concerning the metes and bounds of federalism and regulation;
  • abhors sexual assault and misogyny, even as it recognizes differences in how best to combat these problems;
  • will not stand for profiteering from public office, even as it contains a wide range of views on campaign finance and on the subtler questions of proper and improper influence;
  • repudiates scapegoating the poor and vulnerable, even as its participants differ on the proper way to fund our civilization and the relative burdens that should be assessed;
  • condemns lying to the people, even as it does not purport to pass judgment on the trustworthiness of conventional politicians or on whether particular instances of spin go too far;
  • refuses to tolerate white nationalism and religious and sexual bigotry, the discarded ideologies that animated the most shameful and violent episodes of our past, even as the debate will continue over, for example, how best to overcome the badges of slavery and to be color-blind without also being blind to the lingering effects of racial castes;
  • and stands against government officials who bully individual citizens, even as it encourages serious debate among its participants concerning the merits of their ideas.

Donald Trump is indecent. And if he cannot learn to stay within the guardrails of what decency requires, he must either be voted out in disgrace at the first available opportunity or impeached and removed should his transgressions go that far. We will not accept a lower bar for the conduct of Trump simply because the whole world’s expectations are already so low.

Instead, we will stand together, conservatives, liberals, and whoever, to demand that our local politicians and, especially, House representatives, hold Donald Trump strictly accountable to the demands of decency. We will attend our representatives’ town halls, participate in marches for unity, resist assaults on our core values, and help one another in the best traditions of our nation. And we will continue to do this even when some among us engage in anarchy or otherwise attempt to hijack our efforts to advance particular causes. We will condemn these distractions but not ourselves become distracted from doing what is necessary to preserve the soul of this nation.

I hope you will demand that we reject the emerging global axis of authoritarianism and that we not throw away so cheaply what has taken 240 years of struggle to build. In basic decency to one another lies the path to preserving the degree of liberty and equality we have inherited. And in that same decency, we will find the common ground on which to debate vigorously, but with love, how best to realize liberty and equality in the future.