SEEKING LEGAL ADVICE: ARTIFICIAL INTELLIGENCE (“AI”) IS NOT A TRUSTWORTHY RESEARCH ENGINE IN THE LEGAL PROFESSION

15 January 2025

In the recent case of Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Other (“the Mavundla case”), the Applicant applied for leave to appeal against a judgment delivered on 16 August 2024 in the High Court of South Africa, KwaZulu-Natal Division, Pietermaritzburg, by filing a notice and a supplementary notice of leave to appeal. The presiding Judge discovered that the supplementary notice of appeal filed by the Applicant on 6 September 2024 contained nine references to case authority, of which only two could be found to exist, and even then the citation of one of those two was incorrect.


The candidate legal practitioner and the legal practitioner acting on behalf of the Applicant were granted an opportunity to furnish proof of the existence of the cited cases. The cases could ultimately not be produced. 

The presiding Judge in her judgment referred to the case of Chetty v Perumaul [2021] ZAKZPHC 66, where it was held that the legal practitioner’s duty to the court ‘requires that lawyers act with honesty, candour and competence . . . lawyers must not mislead the court and must be frank in their responses and disclosures to it’. The presiding Judge went on to reiterate that the same principle applies to candidate legal practitioners.

The presiding Judge further referred to an article by Associate Professor M. van Eck titled “Error 404 or an Error of Judgment? An Ethical Framework for the Use of ChatGPT in the Legal Profession”. The article provides a detailed discussion of the legal position on the use of AI technologies in legal research, particularly with regard to ChatGPT, in South Africa and various other jurisdictions.

Van Eck noted that although ChatGPT promises effectiveness and benefits in the legal sector, it is unreliable because “information produced in the response prompts has been shown to be fabricated or fake, especially when such prompts relate to legal information”.

Van Eck concluded the article by forewarning legal practitioners that they are responsible for the information they provide and must verify the information they have used, regardless of its source, as a failure to verify information may lead to a breach of the ethical and professional duties imposed on legal practitioners.

In conclusion, the presiding Judge noted that, in her view, relying on AI technologies when conducting legal research is irresponsible and downright unprofessional.

By Elizna Meyer (Director) | Litigation Department

Assisted by Dineo Makhoali (Candidate Attorney) and Naledi Nkadimeng (Candidate Attorney) | Litigation Department

Disclaimer:

All information and material published on this website is for the purpose of general information only and does not constitute nor must it be construed as legal advice. Specialist legal advice must be sought in respect of a specific legal matter. We accept no liability, damage, loss and/or responsibility, whether direct or consequential, for any actions taken or failure to take any actions, based on the content of any such publication.

© 2025 Klagsbrun Edelstein Bosman Du Plessis Inc. All rights reserved.