Technical overview of GenAI legal chatbot, Beagle+

by Chris McGrath | Feb 24, 2025

Beagle+ is the generative AI legal chatbot that Tangowork built for People's Law School.

In January and February 2025, Tangowork and People's Law School presented a case study of Beagle+ to over 1,500 legal professionals across three sessions, including the Legal Services Corporation's 2025 "Innovations in Technology" conference in Phoenix, Arizona, and a session for the AI Policy Consortium of the National Center for State Courts, organized by Thomson Reuters.

Below is a condensed re-recording of that session.

On-demand webinar: Technical overview of Beagle+ legal AI chatbot

What you will learn in this webinar

1. Why we built Beagle+

  • Most Canadians struggle to get legal help when they need it

  • AI can provide fast, accessible legal information

  • Beagle+ was designed to give high-quality legal answers using AI

2. How Beagle+ works

  • Uses OpenAI's language model combined with our legal content

  • Retrieves the most relevant legal information using a vector database (see the sketch after this list)

  • Every response is based on real, trusted content
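
To make the retrieval step concrete, here's a minimal sketch in Python. It assumes OpenAI's embeddings and chat APIs and uses a plain in-memory cosine-similarity search, since the session doesn't name the specific vector database Beagle+ uses; the content snippets, model names, and helper functions are illustrative, not Beagle+'s actual code.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-ins for the trusted legal content Beagle+ draws from.
legal_chunks = [
    "A landlord must give one month's notice to end a tenancy for cause.",
    "Small claims court in BC handles disputes up to $35,000.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Turn text into embedding vectors via the OpenAI embeddings API."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vectors = embed(legal_chunks)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity)."""
    q = embed([question])[0]
    scores = chunk_vectors @ q / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    return [legal_chunks[i] for i in np.argsort(scores)[::-1][:k]]

question = "How much notice does my landlord have to give?"
context = retrieve(question)
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Answer using only this trusted content:\n" + "\n".join(context)},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)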

3. How we ensure accuracy

  • AI doesn't remember past conversations, so we send it the right info at the right time (see the sketch after this list)

  • We tested 42 challenging legal questions before launch

  • Accuracy improved from 70% (pre-launch) to 99% through prompt refinements
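
Here's a rough sketch of what "sending the right info at the right time" can look like in practice: because the model is stateless, every request re-sends the system prompt, the retrieved content, and the conversation so far. The prompt wording, model name, and retrieve() stub below are illustrative assumptions, not Beagle+'s actual implementation.

```python
# Each turn rebuilds the full message list: the model keeps no memory of its
# own, so prior turns and freshly retrieved content go out on every request.
from openai import OpenAI

client = OpenAI()
history: list[dict] = []  # accumulated {"role": ..., "content": ...} turns

def retrieve(question: str, k: int = 3) -> list[str]:
    # Stand-in for the vector-database lookup sketched earlier.
    return ["(the k most relevant legal passages would go here)"]

def ask(question: str) -> str:
    context = retrieve(question)  # fetch content relevant to THIS question
    messages = [
        {"role": "system",
         "content": "You provide legal information. Base your answer only on "
                    "this trusted content:\n" + "\n".join(context)},
        *history,                               # the conversation so far
        {"role": "user", "content": question},  # the new question
    ]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = resp.choices[0].message.content
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": answer})
    return answer
```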

4. Manual and AI review of conversations

  • Human lawyers reviewed 5300+ conversations

  • 99% of conversations were legally accurate

  • 54 problem conversations were corrected by adjusting content or improving the system prompt

  • Now we use AI to identify high-risk conversations for human review, as sketched below
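
The session doesn't spell out how the AI review works; one plausible shape is a triage pass in which a second model scores each logged conversation for risk, and only high scorers reach a human lawyer. Everything in this sketch (the prompt, the rubric, the threshold) is an assumption for illustration.

```python
# Hypothetical triage pass: an LLM scores each logged conversation for risk,
# and only conversations above a threshold are queued for human review.
import json
from openai import OpenAI

client = OpenAI()

TRIAGE_PROMPT = (
    "You review chatbot conversations about legal topics. "
    "Rate the risk that the bot gave inaccurate or harmful legal information, "
    "from 0 (clearly fine) to 10 (likely wrong or dangerous). "
    'Reply with JSON: {"risk": <0-10>, "reason": "<one sentence>"}'
)

def triage(conversation: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # force parseable JSON output
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": conversation},
        ],
    )
    return json.loads(resp.choices[0].message.content)

result = triage("User: Can my employer fire me without notice?\n"
                "Bot: Yes, always, and no severance is owed.")
if result["risk"] >= 7:  # threshold is an assumption; tune on reviewed data
    print("Flag for human review:", result["reason"])
```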

5. Lessons from real users

  • 79% of users found Beagle+ helpful

  • Users ask complex legal questions that challenge AI

  • Every flagged issue led to improvements in content or prompts

6. How to build a great AI chatbot

  • Use a vector database to store and retrieve knowledge

  • Automate database updates; don't manage content in two places

  • Conduct manual tests with real-world questions

  • Use LLM-as-judge testing to complement manual testing (a minimal sketch follows this list)
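
LLM-as-judge testing means asking a second model to grade the chatbot's answers against reference answers. Here's a minimal sketch of what that loop could look like; the rubric, test case, and model choice are illustrative, not the actual Beagle+ test suite.

```python
# Hypothetical LLM-as-judge loop: a second model compares each chatbot answer
# to a lawyer-written reference answer and returns a pass/fail verdict.
import json
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = (
    "You are grading a legal chatbot. Given a question, a reference answer, "
    "and the bot's answer, decide whether the bot's answer is legally accurate "
    "and consistent with the reference. Reply with JSON: "
    '{"pass": true|false, "reason": "<one sentence>"}'
)

test_cases = [
    {
        "question": "What is the small claims limit in BC?",
        "reference": "Small claims court in BC handles disputes up to $35,000.",
        "bot_answer": "In BC, small claims covers amounts up to $35,000.",
    },
]

def judge(case: dict) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": json.dumps(case)},
        ],
    )
    return json.loads(resp.choices[0].message.content)

for case in test_cases:
    verdict = judge(case)
    print("PASS" if verdict["pass"] else "FAIL", "-", verdict["reason"])
```

Automated judging like this scales to thousands of conversations, but as the session notes, it complements rather than replaces manual review with real-world questions.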