Friday, November 17, 2023
ChatGPT-4 Passes Multistate Professional Responsibility Exam (MPRE), Beats Law School Graduates And Other Generative AI Models
Gabor Melli (LegalOn Technologies), Daniel Lewis (LegalOn Technologies) & Dru Stevenson (South Texas; Google Scholar), Generative AI Passes the Legal Ethics Exam:
Can Generative AI Pass the Legal Ethics Exam?
Earlier this year, research found that GPT-4 could outperform law students on the Uniform Bar Exam. Our study builds on that finding, testing whether generative AI models can also navigate the rules and fact patterns of legal ethics.
- We tested OpenAI's GPT-4 and GPT-3.5, Anthropic's Claude 2, and Google's PaLM 2 Bison on 100 simulated exams composed of questions crafted by Professor Stevenson to model the Multistate Professional Responsibility Exam (MPRE).
- GPT-4 performed best, answering 74% of questions correctly, an estimated six percentage points better than the average human test-taker.
- GPT-4 and Claude 2 both scored above the approximate passing threshold for the MPRE, estimated to range from 56% to 64% depending on the jurisdiction.
- Legaltech News, Gen AI Passes the MPRE With No Prior Ethics Training
- Reuters, AI Chatbot Can Pass National Lawyer Ethics Exam, Study Finds
https://taxprof.typepad.com/taxprof_blog/2023/11/chatgpt-4-passes-multistate-legal-ethics-exam-mpre-beats-law-school-graduates.html