The legal profession is beginning to confront a challenge with generative AI that was easy to predict but remains difficult to define.
A recent lawsuit in the United States alleges that a claimant used ChatGPT to generate legal filings in a case that had already been settled, prompting accusations that the system effectively enabled the unauthorised practice of law. Whether or not the claim succeeds, the episode highlights a growing issue for the legal sector: consumer AI tools are not designed for regulated legal work.
Public AI systems are powerful, but they operate in open environments with limited transparency into how outputs are generated or verified. For many industries, that may be acceptable. For law firms, it is not.
Legal services rely on trust, traceability and professional accountability. Advice must be grounded in verifiable sources. Decisions must be owned by qualified professionals. And firms must be able to demonstrate how information has been generated, reviewed and used.
This is why the conversation in the legal sector is shifting away from simply "using AI tools" and towards deploying private AI environments within the firm.
Rather than relying on public chatbots, firms can implement controlled AI systems that operate within their own secure infrastructure or enterprise cloud environments. These systems can be connected to internal precedent libraries, case materials and document repositories, while maintaining strict governance over how data is accessed and how outputs are produced.
In practice, this means AI becomes part of the firm's internal knowledge and workflow systems, rather than an external tool generating unverified responses.
The work North Stack does with law firms follows that principle. AI is used to accelerate document review, analyse large case files, summarise complex material and surface relevant precedent. But the technology is designed to support legal professionals, not replace their judgement.
Outputs are logged, auditable and reviewed by lawyers before anything is relied upon or shared externally. That distinction is critical.
AI can significantly reduce the time spent on repetitive legal tasks and help lawyers navigate increasingly complex information environments. But it should never be positioned as a substitute for legal advice or professional responsibility.
For law firms operating in regulated environments, the real challenge is not whether to adopt AI. It is how to adopt it safely, with clear governance and professional boundaries.
That means moving beyond open consumer tools and towards controlled AI systems built around the way legal organisations actually work.
Because in the legal profession, trust is the foundation of everything. The technology supporting it should be held to the same standard.