AI Use in the Legal Profession: A Cautionary Tale from Justice Myers

AI and its applications in the legal profession are a hot topic of discussion in firms worldwide. Our partners, associates, and operations departments are inundated with emails and phone calls from sales representatives offering AI-based software solutions and making hefty claims about their capabilities. The reality, however, is that we still have a great deal to learn about AI, its applications, and its limitations, especially in the context of practicing law. Many of the AI tools currently available to us as consumers in this field are still in their infancy and need to be treated with due caution and healthy skepticism.

There have been two recent decisions by Justice Myers out of Toronto relating to the use of AI-generated documents in court, both of which underscore the caution that all legal professionals must exercise when using such tools:

• Ko v. Li, 2025 ONSC 2766 (CanLII), https://canlii.ca/t/kbzwn

• Ko v. Li, 2025 ONSC 2965 (CanLII), https://canlii.ca/t/kc6xx

In these cases, counsel for the Applicant, Ms. Lee, submitted a factum that contained AI hallucinations. A hallucination occurs when AI generates false or misleading content; in the legal context, this typically means the AI relies on or draws "facts" from cases that are wholly made up, or draws a false conclusion from a case that does exist. After this was discovered during the motion hearing, Justice Myers, on his own initiative, ordered a show cause hearing, directing Ms. Lee to attend before the Court to show why she should not be cited for contempt.

Ms. Lee subsequently (and correctly, in my opinion) "fell on her sword" before the Court. She withdrew the factum and resubmitted a corrected version. She explained that her law clerk had drafted the factum using ChatGPT, allegedly without her knowledge, and that she had then signed and submitted it without checking the authorities. By signing it, she took responsibility for its contents. Justice Myers accepted the explanation and the additional remedies proposed by Ms. Lee (including additional CPD and an agreement not to charge her client for the motion), and purged any contempt that might have been found. While additional sanctions were open to him, he imposed no further penalty, noting that the damage done to her reputation after an otherwise exemplary 30 years of practice was effectively punishment enough.

It is unclear whether the Law Society of Ontario will get involved or is involved already.

This is a cautionary tale for us all. AI can be a very useful drafting tool working alongside a lawyer, but it cannot supplant the role of the lawyer. The outputs of AI need to be carefully screened for accuracy, and pleadings need to be carefully examined by lawyers before they leave the office, and most importantly, before anything is filed with the Court. The same caution applies to solicitors when drafting and sending out agreements, conducting research, providing legal opinions, and performing similar work.

Advancements in technology are meant to enable us to do our jobs more efficiently. They are tools for us to use, but they in no way replace training, experience, and professional expertise. It will be up to us as legal professionals to decide how we choose to use AI. It could advance our profession immensely, but if used recklessly, it could expose practitioners to negligence claims and to allegations of falling short of our ethical duties to clients and to the Court.

Article by Joshua Laplante

Joshua is a Partner at Cohen Highley and a litigation lawyer with an exclusive focus on estate disputes, power of attorney issues, guardianship matters, and the administration of estates.

 
