08 September 2025

Hallucinated Justice: When AI Gets Lawyers into Trouble

In 2023, a New York lawyer made headlines not for his eloquence or courtroom triumph but for citing fictitious US court decisions generated by ChatGPT. The consequences were severe: disciplinary proceedings, a fine, public embarrassment, and a warning to the legal profession worldwide.[1] This incident exemplifies the perils of using AI in litigation.

Poland’s Guidebook for Attorneys Using AI

As AI tools become increasingly integrated into the legal workflow (drafting motions, summarising documents, and predicting outcomes), litigators must balance efficiency with accountability. In Poland, the National Chamber of Legal Advisers has responded with comprehensive recommendations, urging lawyers to embrace AI cautiously and ethically.[2]

According to the 2025 publication AI in the Work of an Attorney-at-law, generative AI can significantly enhance legal productivity: summarising case law, preparing reports, assisting with translations, and even generating preliminary contract drafts. AI tools promise time savings and scalability, especially for sole practitioners and small firms.

Risks and Prevention

However, the same report highlights serious risks. Chief among them is the phenomenon of “AI hallucinations”: plausible-sounding but fabricated content. When a legal brief contains invented citations, the lawyer, not the machine, bears responsibility. Ethical rules are unambiguous: all filings must be verified, and reliance on AI cannot absolve a lawyer from due diligence.

Poland’s National Chamber of Legal Advisers highlights three key principles for using AI:

  1. Transparency – clients should know if and how AI is being used in their case.

  2. Verification – outputs generated by AI must be reviewed for accuracy and legality.

  3. Confidentiality – sensitive data cannot be casually uploaded to open AI models.

EU AI Act

The EU AI Act, in force since August 2024, imposes additional compliance obligations, especially for systems deemed “high risk”.[3] Legal professionals must assess whether their AI tools meet transparency, security, and data protection standards. Violations could lead not only to disciplinary sanctions but also to regulatory penalties.

Chances and Challenges

The above recommendations are clear: AI is a tool, not a substitute for legal judgment. As with cloud computing years ago, the legal profession must adapt while maintaining human oversight. The profession stands at a technological crossroads – lawyers who use AI wisely will gain a competitive edge, while those who do so recklessly may face reputational or legal disaster.

The cautionary tale from New York is no anomaly. It is a preview. The challenge is not whether to use AI but how to do so without compromising professional integrity.

Written by Michał Wiewióra and originally published by GGI.


Footnotes

[1] https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/; https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt/

[2] https://kirp.pl/rekomendacje-dotyczace-korzystania-z-ai/

[3] https://commission.europa.eu/news-and-media/news/ai-act-enters-force-2024-08-01_en; https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng