AI in Alberta Litigation: Courts Demand Human Oversight and Accountability
December 18, 2025
By HMC Lawyers LLP
Artificial intelligence (AI) is rapidly transforming legal practice and the broader business environment in Alberta. As AI tools become more prevalent, Alberta’s Courts are grappling with new challenges, including the use of generative AI in legal submissions that reference non-existent case law.
The Lawyer on Record is Ultimately Responsible for AI Misuse
In Reddy v Saroya, 2025 ABCA 322, the Alberta Court of Appeal addressed the professional risks of using generative AI to prepare written submissions. The appellant filed their initial factum on December 16, 2024; the respondent’s factum, filed on January 16, 2025, identified a potential AI issue. The appellant’s factum contained several citations to cases that could not be found. When contacted, appellant’s counsel stated that the cases existed but that there were errors in the citations.
At the oral hearing, the appellant’s counsel explained that illness, a busy schedule, and the holiday season had contributed to the failure to properly review the original factum and to recognize the issue with the cited cases. The factum had been drafted by a third-party contractor who claimed not to have used generative AI.
The Court referenced Rule 3.1-2 of the Law Society of Alberta’s Code of Conduct, which requires lawyers to perform all legal services to the standard of a competent lawyer and to develop an understanding of relevant technology. The Law Society of Alberta has published “The Generative AI Playbook”, a resource described as a starting point for Alberta lawyers seeking to harness the benefits of disruptive technologies, like generative AI, while safeguarding their clients’ interests and maintaining professional competence. The Court emphasized that when AI is used without safeguards, it can introduce confusion and delay into proceedings and may constitute an abuse of process that could bring the administration of justice into disrepute.
The Alberta Courts issued a Notice to the Public and Legal Profession dated October 6, 2023, titled “Ensuring the Integrity of Court Submissions When Using Large Language Models” (the “October 2023 Notice”). The Court of Appeal re-emphasized the principles of this notice: parties must rely on authoritative sources (such as official court websites, commonly referenced commercial publishers, or well-established public services like CanLII) when referring to case law, and a “human in the loop” must be involved, meaning human verification of any work produced by generative AI. Parties are expected to budget time to cross-reference the output of AI. Even when a third-party contractor assists with drafting, the lawyer on record bears ultimate responsibility for the material’s form and content and for compliance with the October 2023 Notice.
The consequences of not complying with the October 2023 Notice are at the discretion of the panel or judge. Counsel and self-represented litigants should not expect leniency for non-compliance; possible remedies include striking the submissions or imposing costs awards against the non-compliant party.
Leniency is Not Afforded to Self-represented Litigants Who Misuse AI
Key Takeaways
- Lawyers and self-represented litigants must verify the output of any generative AI used when preparing their written submissions.
- The lawyer on record is ultimately responsible for the misuse of AI, even when using a third-party contractor for drafting.
- Time constraints, illness, or a busy schedule do not excuse a failure to verify citations.
- The Court has signaled that enhanced costs may be awarded against counsel personally for failing to confirm the accuracy of cited authorities, and that further sanctions may follow for future breaches.