
Artificial Intelligence Looks at Law as Data and Not as Law (Part 2)

In our previous article, we explored the significant transformation brought about in the legal industry by Artificial Intelligence (AI), which perceives “the law as data”. However, it is crucial to grasp the phrase fully: AI sees the Law as data, but not as Law. While AI serves as a valuable tool in the legal field, it is essential to acknowledge the challenges it faces in practical legal applications, particularly in niche areas of law or smaller jurisdictions. Drawing a parallel, consider a knife: while it is an indispensable instrument in culinary settings, it can also pose a danger if not used appropriately.

While technology has made significant advancements, it still faces challenges when dealing with complex legal knowledge, especially in niche areas of law or smaller jurisdictions. According to Neil Sahota, the essence of AI lies in the training process, which depends on supplying the system with data and algorithms so that it can discern patterns, make predictions, or accomplish specific tasks. In simple terms, despite being fed vast amounts of data, AI still lacks critical thinking abilities and a practical understanding of the law. It relies solely on detected patterns and does not possess the same understanding of the law as a human lawyer. This results in a garbage-in-garbage-out scenario, hindering its effectiveness in addressing complex legal matters.

  • Accuracy:

Acquiring high-quality data for an AI system poses a challenge in the legal field. While diverse sources are used, ensuring accuracy becomes difficult once resources such as published books, Wikipedia articles, and a refined “Common Crawl” repository are exhausted. Unlike human lawyers, who learn from handpicked, reliable sources, AI models are fuelled by both labelled and unlabelled data, potentially leading to erroneous outcomes. Additionally, if trained on inaccurate data, AI models may generate hallucinations or fabricate facts.

Interestingly, AI systems are only as good as the data on which they are trained. If the data is inaccurate, the AI system will also be inaccurate. A New York lawyer is facing possible sanctions after citing fake cases generated by OpenAI’s ChatGPT in a legal brief filed in federal court. The incident occurred in a personal injury lawsuit against Avianca, where the lawyer used ChatGPT to supplement his legal research. However, the judge discovered that six of the cited cases were bogus, leading to doubts about the reliability of the lawyer’s sources. The mistake gained media attention and prompted discussions about the need for verification when using AI-powered tools in legal research. Therefore, one should approach AI as a helpful starting point, rather than a definitive source.

AI systems may also have limited awareness of different legislation and jurisdictions. In a niche area or a small jurisdiction, AI models might not be effectively trained to address specific needs. Therefore, legal professionals should exercise caution when relying solely on AI-generated content for legal drafting. It is important to consider the limitations and potential inaccuracies of AI models and to ensure that human expertise and verification are incorporated into the process.

  • Bias Concerns:

Similar to their human counterparts, AI systems can exhibit biases. Biased training data or algorithm design can result in unfair treatment of certain individuals or groups. Using historical data that reflects past mistakes can lead AI systems to replicate these biases.

AI systems learn from the data they are trained on. This means that if an AI system is trained on biased data, its outputs will also be biased. For example, if an AI system is trained on a dataset of legal cases that predominantly favours men, it may be more likely to recommend lighter sentences for men than for women.

The bias in AI systems can damage public trust in the legal system. For example, if people believe that AI systems are biased against them, they may be less likely to report crimes or cooperate with law enforcement. Addressing this challenge requires mechanisms for detecting, measuring, and mitigating biases in AI systems, ultimately promoting fairness and equity in the legal field.

  • Transparency:

Understanding the decision-making process of AI systems is crucial in the legal field. Transparency builds trust among legal practitioners, clients, and the public. By providing clear explanations and justifications for AI-generated outcomes, we can ensure ethical and responsible use of AI. Achieving transparency involves explainable AI techniques, documentation of AI processes, and external audits of AI systems.

Despite these challenges, the use of AI in the legal field is expected to continue growing. AI has shown the potential to pave the way towards an “AI lawyer”, but it is not yet a replacement for human expertise and judgement. Even though there are many examples online of AI falling short, the reality is that it is well suited to handling legal tasks under the supervision of a knowledgeable legal expert.

While AI cannot replace human expertise and judgement, it serves as a valuable tool that enhances the capabilities and efficiency of legal professionals in the digital age.

 

*****

About the author:
This article was written by Lim Yong Lin, Trainee Lawyer – law firm in Kuala Lumpur, Malaysia.
 
The views expressed in this article are intended to provide a general guide to the subject matter and do not constitute professional legal advice. You are advised to seek proper legal advice for your specific situation.
