Connecticut Supreme Court Reviews AI-Generated Legal Hallucinations
- Connecticut Supreme Court evaluates dismissal of a case involving AI-generated fake legal citations.
- Law firm GLG Law admits to filing a brief with hallucinated cases created by generative AI.
- State and federal courts consider mandatory certification for lawyers using AI-assisted research tools.
The intersection of technology and the bench has reached a critical juncture in Connecticut, where the state's Supreme Court is weighing whether to dismiss a case over 'hallucinated' legal citations. In a landlord-tenant dispute, attorneys submitted a 60-page brief containing fictional cases that appeared legitimate but had no legal basis. This phenomenon, in which generative AI models produce factually incorrect yet confident-sounding information, is now at the center of a debate over professional accountability.
The errors were identified by students at Yale Law School's Jerome N. Frank Legal Services Organization. Their brief argues that fabricated citations are inherently unfair to opposing parties, particularly those with fewer resources who cannot verify every reference in an extensive filing. The firm involved, GLG Law, admitted to the errors, explaining that while AI was used for formatting, the 'hallucinated' content entered the brief without the attorneys' knowledge during the review process.
This incident reflects a growing trend of 'AI malpractice' across the national legal landscape. In response, the U.S. District Court in Connecticut has issued a 'no-tolerance' policy toward briefs that misstate the law through AI-assisted research. Meanwhile, state court committees are deliberating mandates requiring attorneys to formally certify that they have verified the accuracy of any citation produced by artificial intelligence.