CLEO: Using AI to help self-represented litigants safely tell their story in court
Mar 31, 2026
The problem
Across Ontario, thousands of self-represented individuals rely on CLEO’s Guided Pathways to navigate intimidating legal procedures. While these rules-based decision trees are excellent at assembling complex forms, the resulting text often lacks narrative flow and appropriate emphasis. These disjointed narratives make it difficult for users to persuasively tell their story to the court, a critical disadvantage in urgent matters like emergency family law motions.
The challenge
Integrating Generative AI into public legal tools requires an exceptionally high standard of accuracy, safety, and privacy. The system had to improve narrative quality without omitting critical facts, fabricating details, or crossing the line into providing unauthorized legal advice. Furthermore, Tangowork needed to build a maintainable architecture capable of orchestrating complex, evolving prompts across numerous legal pathways, while ensuring user autonomy and strict data security.

The Narrative Assistant’s "Highlight Changes" feature flags every AI-generated edit, addition, or omission with a pop-up explanation, ensuring users retain full control over their legal narrative. (Note: all the personal information shown is fake.)
The solution
Tangowork designed and developed the Narrative Assistant, a custom AI application powered by OpenAI’s GPT models and integrated with Langfuse for rigorous AI observability and prompt management. The tool transforms rigid decision-tree outputs into clear, logical, and persuasive affidavits. To ensure user trust, Tangowork engineered a transparent “Highlight Changes” feature that explains every AI edit, alongside a three-part Suggestion Framework (Claim Checker, Story Checker, and Evidence Checker) that safely prompts users to supply missing facts or evidence.
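As a sketch of how a change-flagging feature like this could work (not Tangowork’s actual implementation), a word-level diff between the decision-tree draft and the AI revision can classify every change as an edit, addition, or omission, ready to be surfaced as a pop-up explanation:

```python
import difflib

def highlight_changes(original: str, revised: str) -> list[dict]:
    """Compare the decision-tree draft to the AI revision and tag each change."""
    src, dst = original.split(), revised.split()
    matcher = difflib.SequenceMatcher(None, src, dst)
    labels = {"replace": "edit", "insert": "addition", "delete": "omission"}
    changes = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            continue  # unchanged text needs no flag
        changes.append({
            "type": labels[op],
            "before": " ".join(src[i1:i2]),
            "after": " ".join(dst[j1:j2]),
        })
    return changes

draft = "Respondent did not pay support in March"
revised = "The respondent failed to pay child support in March"
for c in highlight_changes(draft, revised):
    print(c["type"], "|", c["before"], "->", c["after"])
```

In a real affidavit tool the same opcodes would drive inline highlighting, with each flagged span linked to a plain-language explanation of why the AI made the change.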

Tangowork integrated Langfuse to centralize prompt management, track automated LLM-as-a-judge evaluations, and maintain rigorous observability across the application.
The results
- Consistently improved the clarity, structure, and argumentative strength of legal narratives compared to baseline rules-based outputs.
- Effectively eliminated hallucinations by anchoring the AI with structured decision-tree inputs and redundant guardrails.
- Enhanced user trust and autonomy through the transparent Highlight Changes feature, which automatically marks and explains all AI modifications.
- Successfully generated safe, highly relevant follow-up suggestions that helped users strengthen their claims without the system providing unauthorized legal advice.
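One simple guardrail consistent with this anchoring approach (a hypothetical sketch, not the production system) is to flag any name, date, or number in the AI output that never appears in the structured decision-tree input, since such tokens are candidate fabrications:

```python
import re

def unanchored_tokens(structured_input: str, ai_output: str) -> set[str]:
    """Return names, dates, and numbers in the AI output that are absent
    from the structured input (possible hallucinations to review)."""
    # Capitalized words and numeric tokens are a rough proxy for "facts".
    pattern = r"\b(?:[A-Z][a-z]+|\d[\d,./-]*)\b"
    source = set(re.findall(pattern, structured_input))
    return {tok for tok in re.findall(pattern, ai_output) if tok not in source}

facts = "Maria Lopez missed support payments of 400 in March 2025"
good = "Maria Lopez has missed support payments of 400 since March 2025"
bad = "Maria Lopez missed payments of 400 in March 2025 and threatened Ana"

print(unanchored_tokens(facts, good))  # set() — every fact is anchored
print(unanchored_tokens(facts, bad))   # flags the unsupported name
```

A production system would pair a lexical check like this with LLM-as-a-judge evaluations, but the principle is the same: every fact in the narrative must trace back to something the user actually entered.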