A Lawyer’s Guide to Understanding AI Hallucinations in a Closed System

Understanding Artificial Intelligence (AI) and the possibility of hallucinations in a closed system is necessary for any lawyer who uses such technology. AI has made significant strides in recent years, demonstrating remarkable capabilities in fields ranging from natural language processing to large language models and generative AI. Despite these advancements, AI systems can sometimes produce outputs that are unexpectedly inaccurate or even nonsensical – a phenomenon often referred to as “hallucinations.” Understanding why these hallucinations occur, especially in closed systems, is crucial for improving AI reliability in the practice of law.

What Are AI Hallucinations?
AI hallucinations are instances where an AI system generates information that seems plausible but is incorrect or entirely fabricated. These hallucinations can manifest in various forms, such as incorrect responses to prompts, fabricated case details, false medical analyses, or even imagined elements in an image.

The Nature of Closed Systems
A closed system in AI refers to a context where the AI operates with a fixed dataset and pre-defined parameters, without real-time interaction or external updates. In legal practice, this can include environments or legal AI tools that rely on a selected universe of information, such as a case file database, saved case-specific medical records, discovery responses, deposition transcripts, and pleadings.
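For readers who want a concrete picture, the following is a minimal, purely illustrative Python sketch of a closed system: the tool may only draw on a fixed, lawyer-controlled set of documents. The file names and contents are invented for illustration and do not reflect any particular product.

# A minimal sketch of a "closed system": answers may only be drawn from a
# fixed, lawyer-controlled set of documents. All names and contents here are
# hypothetical and for illustration only.

CASE_FILE = {
    "deposition_smith.txt": "The witness stated the delivery occurred on March 3.",
    "medical_record_jones.txt": "Patient reported lower back pain after the incident.",
}

def retrieve(query: str) -> list[str]:
    """Return only the case-file passages that mention a term from the query."""
    terms = query.lower().split()
    return [
        f"{name}: {text}"
        for name, text in CASE_FILE.items()
        if any(term in text.lower() for term in terms)
    ]

# Anything the fixed document set does not cover is a gap; a model that tries
# to answer anyway is where hallucination risk begins.
print(retrieve("delivery date"))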

Causes of AI Hallucinations in Closed Systems
Closed systems, as opposed to open-facing AI tools that can access the internet, rely entirely on the data they were trained on or supplied with. If that data is incomplete, biased, or not representative of the real world, the AI may fill gaps in its knowledge with incorrect information. This is particularly problematic when the AI encounters scenarios that are not well represented in its training data. Similarly, if an AI tool is used incorrectly, for example through poorly constructed prompts, even a closed system can produce incorrect or nonsensical outputs.

Overfitting
Overfitting occurs when the AI model learns the noise and peculiarities in the training data rather than the underlying patterns. In a closed system, where the training data can be limited and static, the model might generate outputs based on these peculiarities, leading to hallucinations when faced with new or slightly different inputs.
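As a rough illustration only (not drawn from any legal AI product), the short Python sketch below shows the idea: a model flexible enough to memorize a few noisy data points reproduces the noise rather than the underlying trend, and its answers degrade on inputs it has not seen.

# A minimal illustration of overfitting, using only NumPy: a high-degree
# polynomial fitted to a few noisy points tracks the noise, not the trend,
# and gives unreliable answers on new inputs.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + rng.normal(scale=0.1, size=x_train.size)  # true pattern: y = 2x

simple_fit = np.polyfit(x_train, y_train, deg=1)  # captures the underlying pattern
overfit = np.polyfit(x_train, y_train, deg=7)     # high degree: memorizes the noise

x_new = 1.5  # an input outside the training data
print("simple model:", np.polyval(simple_fit, x_new))  # close to the true value of 3.0
print("overfit model:", np.polyval(overfit, x_new))    # usually far from 3.0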

Extrapolation Error
AI models generalize from their training data to handle new inputs. In a closed system, the lack of continuous learning and updated data may cause the model to make inaccurate extrapolations. For example, a language model might generate plausible-sounding but factually incorrect information based on incomplete context.
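To make that concrete, here is a toy, purely illustrative Python sketch: a tiny word-chaining model built from two invented sentences can stitch together a fluent sentence that appears in neither source, a rough analogue of a fabricated detail.

# A toy bigram text model built from a tiny fixed corpus. It chains words that
# co-occurred in training, so it can produce fluent-looking sentences that no
# source document actually contains. The sentences are invented examples.
import random

corpus = (
    "the court granted the motion to dismiss . "
    "the witness signed the affidavit on tuesday ."
).split()

# Build bigram transitions: word -> list of words that followed it in the corpus.
transitions: dict[str, list[str]] = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

random.seed(3)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(transitions.get(word, ["."]))
    output.append(word)
    if word == ".":
        break

# Prints a fluent-looking sentence that may not appear in either source sentence.
print(" ".join(output))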

Implications of Hallucinations for Lawyers
For lawyers, AI hallucinations can have serious implications. Relying on AI-generated content without verification could lead to the dissemination of, or reliance upon, false information, which can gravely affect both the client and the lawyer. Lawyers have a duty to provide accurate and reliable advice, information, and court filings. Using AI tools that can produce hallucinations without proper checks could breach a lawyer’s ethical duty to her client, and such errors could damage a lawyer’s reputation or standing. A lawyer must stay vigilant in her practice to safeguard against hallucinations. A lawyer should always verify any AI-generated information against reliable sources and treat AI as an assistant, not a replacement. Attorney oversight of outputs, especially in critical areas such as legal research, document drafting, and case analysis, is an ethical requirement.
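One simple form such a verification step can take is sketched below in Python, as an illustration only: before relying on a quotation the tool produces, confirm that the quoted language actually appears in the controlled source set. The documents and function are hypothetical.

# A minimal sketch of a verification step: before relying on an AI-generated
# quotation, confirm that it appears verbatim in the lawyer-controlled sources.
# The file name and contents are hypothetical.

CASE_FILE = {
    "deposition_smith.txt": "The witness stated the delivery occurred on March 3.",
}

def quote_is_supported(quote: str) -> bool:
    """Return True only if the quoted text appears verbatim in a source document."""
    return any(quote.lower() in text.lower() for text in CASE_FILE.values())

print(quote_is_supported("the delivery occurred on March 3"))  # True
print(quote_is_supported("the delivery occurred on March 5"))  # False: do not rely on it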

Notably, the lawyer’s choice of AI tool is critical. A well-vetted closed system allows the origin of an output to be traced and allows the lawyer to maintain control over the source materials. In the instance of prompt-based data searches with multiple task prompts, a thorough understanding of how the prompts were designed to be used, and their proper use, is also essential to avoid hallucinations in a closed system. Improper use of an AI tool, even a closed system designed for legal use, can lead to illogical outputs or hallucinations. A lawyer who wishes to use AI tools should stay informed about AI developments and understand the limitations and capabilities of the tools used. Regular training and updates make for more effective use of AI tools and help safeguard against hallucinations.

Takeaway
AI hallucinations present a unique challenge for the legal profession, but with careful tool vetting, management, and training, a lawyer can safeguard against false outputs. By understanding the nature of hallucinations and their origins, implementing robust verification processes, and maintaining human oversight, lawyers can harness the power of AI while upholding their commitment to accuracy and ethical practice.

© 2024 McGivney, Kluger, Clark & Intoccia. ALL RIGHTS RESERVED. By: Jonathan Ciottone of McGivney, Kluger, Clark & Intoccia, P.C.
