AI and the Courts in 2025
Where are we, and how did we get here?[1]
- The scope of this paper will be relatively limited, given the breadth of the subject matter of Artificial Intelligence (or AI) generally and the recent explosion in technological development. I will seek to give an overview of three areas: first, a general discussion of what is meant by AI in the context of legal practice; second, how the Courts are currently handling the issue; and third, some ethical aspects for lawyers and judges arising out of the use of AI.
The history of artificial intelligence
- While AI feels like a recent phenomenon, the concept of machines producing outcomes which replicate human cognitive processes has been with us for centuries. In fact, in Gulliver’s Travels,[3] published in 1726, there is a description of a “wonderful machine” which was described by its creator as enabling creativity:
Every one knew how laborious the usual method is of attaining to arts and sciences; whereas, by his contrivance, the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study.[4]
- Thomas Bayes’s framework for probabilistic reasoning was published in 1763, and George Boole’s The Laws of Thought,[5] a monograph on algebraic logic laying the groundwork for what later became known as Boolean algebra, was published in 1854.
- Eventually these theories started to find physical form - Nikola Tesla’s first radio-controlled ship with its “borrowed mind” was demonstrated in 1898, and in 1914 the first chess-playing machine was invented - El Ajedrecista, “the chess player”, which was “capable of independently playing an endgame with the king and rook against the king from any position without any human intervention”.[6]
- The term “artificial intelligence” was coined in a proposal for a study by researchers from academia (Harvard and Dartmouth College) and industry (IBM and Bell Telephone Laboratories).[7] The workshop took place in 1956, and what we now know as artificial intelligence is generally viewed as dating from then. Interestingly for the way in which AI has progressed, the proposal included a study on “Randomness and Creativity”, saying:
A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch include controlled randomness in otherwise orderly thinking.[8]
- Without going into too much detail, there then began a spirited academic debate about whether AI would in fact be able to replicate human cognition,[9] or whether it was inherently limited.[10] Philosophers and cognitive scientists weighed in.[11] That debate continues today.
What is AI, and what can it do?
- We are now at a time when the capabilities of AI are readily and easily accessible. A simple canter through one of the freely available generative AI tools - ChatGPT by OpenAI (https://openAI.com/index/chatgpt) - can demonstrate the wide array of information which can be retrieved through a conversational system of prompts and revisions.
- The NASA definition of AI is “the use of computer programs to undertake complex tasks usually done by humans – reasoning, decision making, creating”.[12] A good AI program can learn from experience and improve its performance when exposed to data sets. It is designed to think like a human, to act rationally, and to approximate a cognitive task. Generative AI can be defined as “software systems that create content such as text, images, music, audio and videos based on a user’s prompts”.[13]
- This talk is mainly focused on generative AI – but there are many other forms of AI, many of which are freely available or bundled with paid apps or programs. They include:
- AI Assistants (or Generative AI) – ChatGPT, Grok, Jasper (a writing assistant). These are often chatbots, popular and easily accessible iterations of generative AI.
- Video generation tools – Synthesia
- Project management – Motion
- Meeting note taker and assistant - Otter
- Image generation tools – Midjourney
- Workflow and productivity – Microsoft CoPilot (integrated with Microsoft 365 apps)
- Specialised legal research – LexisNexis, Westlaw, with new operators appearing all the time
- What does generative AI do? The general description I will use in this paper is as follows:
- Use of deep learning computer models that can generate high-quality text, images and language content based on the data on which they were trained.
- Recognises patterns and makes statistical predictions about the best result to a prompt (a toy illustration follows this list).
- Generally responds to prompts in a conversational way.
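- To make the idea of “statistical prediction” concrete, below is a deliberately toy sketch in Python (my own illustration, not any vendor’s actual system). It counts, in a tiny made-up training text, how often each word follows another, and then generates new text by repeatedly choosing a statistically likely next word. Real generative AI tools use neural networks trained on massive data sets rather than a simple word count, but the predictive principle is the same.

```python
# Toy "next word" predictor: a drastically simplified sketch of the
# statistical idea behind generative AI (illustrative only).
from collections import Counter, defaultdict
import random

# A tiny, invented training text (real systems train on billions of words).
training_text = (
    "the court held that the contract was valid . "
    "the court found that the contract was void . "
    "the tribunal held that the claim was valid ."
).split()

# Learn how often each word follows each other word (a "bigram" model).
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts[words[-1]]
        if not counts:  # no known continuation: stop
            break
        candidates, weights = zip(*counts.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the court held that the contract was void"
```

- The output reads fluently because each step is statistically plausible; as the cases discussed later in this paper show, fluency is no guarantee of accuracy.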
- There are many definitions of Artificial Intelligence available on Google or other search engines. Thinking again about the relevance of search results: “ai” is also a word for a three-toed sloth. We are very good at looking at that result and discarding it as irrelevant to the artificial intelligence world, because we have had decades of practice in sorting relevant from irrelevant text search results. With generative AI responses generally sounding authoritative and sensible, there is, I think, a willingness to accept them as accurate and correct. The three-toed sloth search result is, of course, provided with a link in case it is thought relevant; it is easy to check.
- Some technical terms are:
- Scraping – the process of gathering data and feeding it into a generative AI program to produce data sets (a toy sketch of scraping appears after this list). AI terms of service generally provide for the retention of anything that is fed into them. This includes artworks and (often) material subject to copyright.
- Large Language Model (LLM) – Justice Perry, in a paper “Emerging technologies and international frameworks”,[14] described Large Language Models as follows:
an LLM is a complex algorithm which responds to human prompts to generate new text representing the most likely words and word order based on training from massive datasets. Devoid of understanding or concepts of accuracy, the capacity of LLMs to hallucinate – that is, to “make up” information – and to convey it convincingly is well-documented.
- LLMs have their most accessible face in chatbots such as ChatGPT and Grok.
- Neural network – a method in AI that teaches computers to process data in a way inspired by the human brain.
- Deep learning – an advanced type of machine learning which can draw conclusions from unlabelled data without human intervention; for example, it can be taught to identify items it has not seen before in a photo.
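- As a concrete illustration of scraping, the following toy sketch (my own, using only Python’s standard library; the URL is a placeholder, purely for illustration) fetches a single public web page and reduces it to plain text of the kind that might then be folded into a training data set. Done at vast scale, and without permission, this is the process that gives rise to the copyright and confidentiality concerns noted above.

```python
# Toy web "scraper": fetch one page and keep only its text content.
# Illustrative only; real scraping pipelines crawl millions of pages and
# filter far more carefully (this naive version keeps script text too).
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text found between HTML tags."""
    def __init__(self) -> None:
        super().__init__()
        self.chunks: list[str] = []

    def handle_data(self, data: str) -> None:
        if data.strip():
            self.chunks.append(data.strip())

url = "https://example.com"  # placeholder page, purely for illustration
with urllib.request.urlopen(url) as response:
    html = response.read().decode("utf-8", errors="replace")

parser = TextExtractor()
parser.feed(html)
print(" ".join(parser.chunks))  # text that could be added to a data set
```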
- A useful list of terms is found in the glossary of the Victorian Law Reform Commission’s report Artificial Intelligence in Victoria’s Courts and Tribunals: Consultation Paper.[15]
- How does generative AI work in practice? I asked ChatGPT about the members of this Association. It replied:
The Commercial Law Association of Australia (CLA) is a professional association established in 1965 to facilitate the exchange of ideas and information among business professionals, accountants, and lawyers. While the CLA does not publicly disclose a comprehensive list of its individual members, it is known that members are drawn from a wide cross-section of people involved in Australian business, including barristers, solicitors, accountants, financiers, insurers, and others (Become a member of the Commercial Law Association of Australia)
- It then gave a list of “notable individuals associated with the CLA”, which comprised three people – the current CEO, the Vice President, and a barrister member. It did not include the President of the Association, which seems like an oversight. So, while the information reads well, it may not be as accurate as it seems: it does not provide the basis for the information which it blithely says “is known”, and it refers to no members other than two lawyers and the CEO. The answer may not be inaccurate, but without knowing more about its basis (or having references given for its statements) we cannot be assured that the information is, indeed, accurate. It gives only one link, which is to the membership page of the CLA. Each of the lawyers has a link to their professional page, but none of the other statements is verified.
- The PowerPoint included with this paper has a range of examples of more frivolous uses of ChatGPT.
The Courts and AI
Guidelines and practice notes for the profession
- The Appendix to this paper is a summary of the procedures (including Practice Notes and Guidelines) currently in force for the Federal Court of Australia, the NSW State Courts (Supreme Court, Land and Environment Court, and District Court), the Victorian Supreme and County Court, and the Queensland Supreme Court. Each of them takes a different approach to the uses of AI.
- The Federal Court has not yet reached a position on how it should regulate or deal with generative AI in proceedings before it (or for that matter by its judges). Given the extremely fast-paced technological advances in this field the Court is seeking to inform itself as to the “responsible use of emergent technologies in a way that fairly and efficiently contributes to the work of the Court”.[16]
- The NSW State Courts (the Supreme Court,[17] Land and Environment Court, and District Court, which have each adopted in effect the same practice note) have taken perhaps the strongest stand on the use of generative AI. The practice note proscribes the use of generative AI in the preparation of evidence and expert reports. Generative AI must not be used to generate the contents of affidavits, witness statements or character references, although preparatory steps are acceptable: see paragraph 10. Any such documents must contain a disclosure: paragraph 13. Leave may be sought in “exceptional circumstances” to use generative AI: paragraph 15.
- Generative AI must not be used to draft or prepare the contents of an expert report without the prior leave of the Court: paragraph 20. In professional negligence claims, any such leave must be sought at the first directions hearing: paragraph 23.
- Victoria,[18] on the other hand, has issued Guidelines rather than a Practice Note. The Guidelines urge only “particular caution” when using generative AI tools in the preparation of affidavits, witness statements, or “other documents created to represent the evidence or opinion of a witness”, and urge compliance by experts with the Expert Witness Code of Conduct.
Litigants in person and AI
- As a judge, I can say that this is where I have most often been able to identify the use of AI in litigation. Otherwise inarticulate unrepresented litigants will suddenly cite Briginshaw or random equitable principles. In those cases I have asked them (assuring them that there is “no wrong answer”) whether they are using generative AI. If the answer (as it usually is) is “yes”, I give them some warnings as to the risks, point them to the need to disclose that if asked, and request that they nominate their use of generative AI in any documents filed.
- Queensland has taken the route of issuing a Guideline for the “Responsible Use [of AI] by Non-Lawyers”,[19] which urges care in matters of confidentiality, plagiarism and hallucinations, and explains what generative AI chatbots cannot do (including “predict the chance of success or otherwise or the outcome of your case”).
Judges and generative AI
- The NSW Guideline to Judges[20] (which is set out in the Appendix in full) provides:
4. Judges in New South Wales should not use Gen AI in the formulation of reasons for judgment or the assessment or analysis of evidence preparatory to the delivery of reasons for judgment.
5. Gen AI should not be used for editing or proofing draft judgments, and no part of a draft judgment should be submitted to a Gen AI program.
(emphasis in original)
- The Victorian Guideline refers to the Australasian Institute of Judicial Administration (AIJA) Guide for Courts[21] and notes that currently Victorian judges do not use generative AI. The AIJA guide is very interesting: it includes concerns as to judicial accountability “where the tools are opaque” and picks up issues relating to effective appeals where there is a lack of clarity in “the way in which outputs [in a decision] were generated”.
- As the authors of AI-assisted judges? Practical and ethical risks and the need for court-authored guidelines[22] say, “… there is limited discussion in Australian literature regarding the potentially beneficial role that AI could play in judicial processes, decision-making or judicial administration.” They give two examples of positive reviews by judges of generative AI.
- In Snell v United Specialty Insurance Company (11th Cir, No 22-12581, 2024), Newsom J used, and disclosed his use of, two distinct generative AI tools to determine the “ordinary meaning” of the word “landscaping”, and opined that AI may be useful in the interpretation of legal texts. In the UK, Lord Justice Birss of the Court of Appeal, speaking extra-judicially,[23] said he found AI-generated summaries of areas of law with which he was familiar “jolly useful”. This summation would seem to fit within the caveat in the UK guidance that legal research should generally not be conducted by judges through AI, as “AI tools are a poor way of conducting research to find new information you cannot verify independently. They may be useful as a way to be reminded of material you would recognise as correct.”[24]
Ethics and AI – the problems so far
- There are a number of ethical issues with the way in which open datasets scrape information from a wide range of sources. These include plagiarism, copyright breach, and breach of confidentiality. A document sought to be summarised or analysed by an AI program can become part of the dataset for that AI tool, raising very real privacy and privilege concerns at the very least. ChatGPT’s terms of use (and, I am sure, others’) provide for ownership of any questions and documents fed into it. That does not sit well with legal professional obligations, and raises queries as to whether privilege may thereby be waived. The NSW Bar Association recommends that no legally sensitive information be input into an LLM.[25]
- That particular issue can be addressed by the use of closed data-set programs such as Lexis Plus AI and Westlaw Precision (to name the two that I have seen in action). These kinds of programs draw only on their own database (unlike ChatGPT, which of course draws on the entire web) and provide “vaults” into which documents can be uploaded and analysed as sought by the user, and then not kept on the system. Such closed-set systems are only as good as the quality of their underlying database, so care must be taken to ensure that any tool a practitioner uses is authoritative and relatively complete.
- Open database sets are, of course, becoming notorious for “hallucinations” – the making up of information which seems authoritative but is not. As the authors of Large Legal Fictions – Profiling Legal Hallucinations in Large Language Models[26] say,
LLMs are liable to generate language that is inconsistent with current legal doctrine and case law, and, in the legal field, where adherence to authorities is paramount, unfaithful or imprecise interpretations of the law can lead to nonsensical—or worse, harmful and inaccurate—legal advice or decisions.
- A number of cases involving both litigants in person, and practitioners, demonstrate the pitfalls of unthinking reliance on generative AI.
Litigants in person
- In Luck v Secretary, Services Australia [2025] FCAFC 26, Mrs Luck (now subject to a vexatious proceedings order) cited a false case in support of her application to have one of the Full Court judges recuse herself. At [13] the Full Court quotes the applicant’s submission that one of the judges had a history of failing to recuse herself in appropriate cases, said to be indicative of a pattern of “inappropriate judicial assignments reflecting broader failures in the administration of justice”. That submission was based on a case which did not exist.
- The Full Court took the path of redacting the name of the alleged case, saying at [14]:
“We apprehend that the reference may be a product of hallucination by a large language model. We have therefore redacted the case name and citation so that the false information is not propagated further by artificial intelligence systems having access to these reasons.”
- This approach of not giving credence to hallucinated cases is an excellent one, but it is not reflected in all instances. For example, in LJY v Occupational Therapy Board of Australia [2025] QCAT 96, the Deputy President noted that the applicant had cited a case which did not exist and whose medium neutral citation belonged to a different case, and stated the name and alleged citation of the case. Interestingly, ChatGPT made a second appearance in this case: the Tribunal, which had a power to inform itself in any way it considered appropriate, asked ChatGPT to give an overview of the case. It did so, gave a reference as to where it could be found, and opined that the hallucinated case supported the applicant’s case for a stay on the basis that any suspension would affect both the practitioner and the clients.
Expert evidence
- As noted, the NSW practice note contains a blanket proscription on generative AI being used (without leave) in the preparation of expert evidence.
- In Kohls v Ellison, No 24-cv-03754 (D Minn, 10 January 2025), the United States District Court for the District of Minnesota was concerned with a case about “deepfakes”. The parties relied on expert evidence about artificial intelligence. One of the experts relied upon by the defendant – the Attorney General of Minnesota – had used generative artificial intelligence to draft his report and had not perused the material that had been produced, which included citations of non-existent academic articles. At p 6 of the decision the Court says merely, “The irony.” The Court held that the defendant (perhaps on model litigant principles?) had a “personal, non-delegable responsibility” to validate the truth of the papers filed.
Use of hallucinated cases by legal practitioners
- In Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95, the legal representative for the applicant provided a number of cases which could not be verified by the Court, including quotes attributed to the then AAT which did not exist. The applicant’s legal representative stated that he had used AI to identify Australian cases, but that in doing so it had provided him with non-existent case law. The representative then sent an email to the Judge’s Associate, without the consent of, or copying in, the Minister’s lawyers, enclosing an amended submission without the false case citations.
- The legal representative (who was anonymised in the reasons) was asked to show cause why he should not be referred to the OLSC in relation to both the false inclusions in the submissions and the contact with chambers in contravention of the Conduct Rules. The explanation was that:[27]
due to time constraints and his health issues, he decided to use AI, which had been promoted by reputable legal services such as LEAP and Lexis Nexis as being of assistance in legal practices. He accessed the site known as ChatGPT, inserted some words and the site prepared a summary of cases for him. He said the summary read well, so he incorporated the authorities and references into his submissions without checking the details.
- The practitioner in Valu (No 2) was referred to the Legal Services Commissioner. Among the reasons for this were the increased cost, the adjournment caused, and the unnecessary additional work for the Court and for the Minister.
- Interestingly, the FCFCOA Judge referred the practitioner to the NSW Practice Note (which post-dated the submissions); the practitioner said that if he had seen it beforehand, he would not have incorporated the AI-generated material.
- Finally, there are two UK decisions in the matter of Ayinde v London Borough of Haringey, heard in the King’s Bench Division of the High Court of Justice. The case at first instance[28] was an application in relation to housing, in which the applicant was represented by a very junior barrister. She used a fake case to support her client’s case, but denied using generative AI, although the judge strongly suspected that she had. In the wasted costs application the court said:[29]
I do not accept that she photocopied a fake case, put it in a box, tabulated it and then put it into her submissions. The only other explanation that has been provided before me, by Mr Mold, was to point the finger at Ms Forey using Artificial Intelligence. I do not know whether that is true, and I cannot make a finding on it because Ms Forey was not sworn and was not cross examined. However, the finding which I can make and do make is that Ms Forey put a completely fake case in her submissions. That much was admitted. It is such a professional shame. The submission was a good one. The medical evidence was strong. The ground was potentially good. Why put a fake case in?
- The Court found that, on the balance of probabilities, it was negligent for the barrister to have used AI without checking it, with the result that a fake case was included in her pleading and submissions. A wasted costs order was made: £2,000 payable by the barrister and the same by her solicitors.
- The transcript of the judgment was sent to the Bar Standards Board and the Solicitors Regulation Authority. The second case[30] was an application for contempt of court against the barrister (although it involved other parties and another separate case involving AI hallucinations as well).
- Dame Victoria Sharp, President of the King’s Bench Division, said (at [8] and [9]):
This duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so. This is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister for example, or on information obtained from an internet search.
We would go further however. There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence. For the future, … the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.
- The Court referred to a case in which a member of the Bar of England and Wales was imprisoned for 12 months for perverting the course of justice by including a fake authority (this was in 2008, pre-AI). The judgment has a useful appendix of cases from around the world in which AI had been used irresponsibly (including Mata and Valu).
- The barrister in that case was referred to the regulator, not only in relation to the possible AI use but in relation to the truthfulness of her answers, and also the question – chillingly for those who supervise young lawyers – “whether those responsible for supervising Ms Forey’s pupillage in chambers complied with the relevant regulatory requirements in respect of her supervision, the way in which work was allocated to her, and her competence to undertake the level of work that she was doing”.[31]
Where to now?
- While I have a somewhat vested interest, I am of the view that the “watch and learn” approach of the Federal Court of Australia has much to commend it. The space is moving so fast that what seems appropriate now may be found, in the near future, to stifle the best approaches to litigation. However, it is clear that Courts need to determine the appropriate position to be taken by them – both as to litigants and legal practitioners, and as to judges – in relation to the principled use of generative AI and other AI tools in proceedings before them. Drs Ray and Roberts point to the familiar problem of judicial workloads and the productivity gains that can be made by summarising submissions for inclusion in judgments.
- The authors of the AIJA review have included in their report very useful lists of questions for Courts to consider when determining the question of AI use in a manner that might impact on the rights and interests of litigants and others.[32]
- Many more well-informed and more technologically competent minds than mine have been turning to the question of “where to now”. The use of AI in litigation is clearly a matter worthy of, and requiring, targeted regulation. I look forward to seeing the point at which the various courts arrive.
Appendix 1
A selection of AI Practice Notes and Guidelines in Australia
Topic | Federal Court of Australia[33] Notice to the Profession – 28 March 2025 | NSW[34] Supreme Court Practice Note SC Gen 23 – Use of Generative Artificial Intelligence (Gen AI); NSWLEC and NSWDC equivalents | Victoria[35] VSC Guidelines for Litigants – Responsible Use of AI in Litigation; County Court equivalent | Queensland[36] The Use of Generative Artificial Intelligence (AI): Guidelines for Responsible Use by Non-Lawyers
---|---|---|---|---
Risks and limitations | The Federal Court is keen to ensure that any Guideline or Practice Note appropriately balances the interests of the administration of justice with the responsible use of emergent technologies in a way that fairly and efficiently contributes to the work of the Court. | Hallucinations; misinformation; biased or inaccurate output; search requests may be automatically added to the database; confidentiality / privacy / LPP; copyright | Be aware of how the tools work, privacy and confidentiality may not be guaranteed / secure: par 1-2 Use of AI programs must not indirectly mislead other participants in the proceedings (incl the Court). Use of AI subject to obligation of candour to the Court and those of the CPA: par 4 Generative AI more likely to produce inaccurate information, it does not relieve the responsible legal practitioner of the need to exercise judgment and professional skill in reviewing the final product to be provided to the Court: par 8 Check that info is not out of date; incomplete; inaccurate or incorrect; inapplicable to the jurisdiction; biased: par 8 | Gen AI are not actually intelligent in the ordinary human sense and are unable to reliably answer questions that require a nuanced understanding of language content: par 1 Gen AI chatbots predict the most likely combination of words, not necessarily the most correct or accurate answer. Limited training on Australian law and currency Gen AI responses may contain incorrect, opinionated, misleading or biased statements presented as fact Confidentiality, suppression, privacy: par 2 Ethical issues, including biases in training data, copyright and plagiarism, acknowledgment of sources: par 4 |
Disclosure | In the meantime, the Court expects that if legal practitioners and litigants conducting their own proceedings make use of Generative Artificial Intelligence, they do so in a responsible way consistent with their existing obligations to the Court and to other parties, and that they make disclosure of such use if required to do so by a Judge or Registrar of the Court. | Mandatory for use in preparatory steps for evidence and expert reports | No mandatory disclosure, but parties should disclose the use of AI to each other and the Court if necessary (e.g. where it is necessary to enable a proper understanding of the provenance of a document or the weight that can be placed upon its contents): par 3 Self-represented litigants and witnesses are encouraged to identify the use of generative AI by including a statement in the document to be filed: par 5 | N/A |
Evidence and expert reports | Parties will continue to be responsible for material that is tendered to the Court | Must not be used to generate the contents of affidavits, witness statements, character references etc. Preparatory steps are OK: par 10. These documents must contain a disclosure: par 13. Leave may be sought in “exceptional circumstances”: par 15 Must not be used to draft or prepare contents of an expert report without prior leave of the Court: par 20 Leave must be sought to use Gen AI to prepare expert reports in professional negligence claims at the first directions: par 23 | Particular caution if using generative AI to prepare affidavits / evidence and expert reports; the witness / expert should ensure documents are finalised in a manner that reflects that person’s own knowledge and words: par 10 | N/A |
Confidentiality and LPP | Parties will continue to be responsible for material that is tendered to the Court | Information subject to NPP/suppression, Harman undertaking, subpoena material etc must not be entered into any Gen AI program unless satisfied that the information will remain within the controlled environment of the technological platform and is confidential, used only in connection with the proceeding, and not used to train any program: par 9A | Be aware of how the tools work, privacy and confidentiality may not be guaranteed / secure: par 1-2 | Do not enter any private, confidential, suppressed or legally privileged information into a Generative AI chatbot: par 2 |
Permitted uses | Parties will continue to be responsible for material that is tendered to the Court | Generate chronologies, indexes and witness lists; preparation of briefs or draft Crown Case Statements; summarise or review documents and transcripts; prepare written submissions or summaries of arguments: par 9B Where Gen AI has been used in written submissions or summaries, the author must verify all citations and authorities: par 16 | AI that can search and identify relevant matters in a closed category of information is helpful, e.g. Technology Assisted Review which uses machine learning for large scale doc review: par 6 Specialised legally focused AI tools more useful and reliable: par 7 | They may help you by identifying and explaining laws and legal principles that might be relevant to your situation; prepare basic legal documents, e.g. organise the facts into a clearer structure or suggest suitable headings; help with formatting and suggestions on grammar, tone, vocabulary and writing style |
Guidelines for New South Wales judges in respect of use of generative AI[37]
- These Guidelines apply to all courts in New South Wales and have been developed after a process of consultation with Heads of Jurisdiction and review of recently published guidelines of other common law courts.
- Generative AI (Gen AI) is a form of artificial intelligence that is capable of creating new content, including text, images or sounds, based on patterns and data acquired from a body of training material. That training material may include information obtained from “scraping” publicly and privately available text sources to produce large language models.
- Gen AI may take the form of generic large language model programs such as Chat-GPT, Claude, Grok, Llama, Google Bard, Copilot, AI Media or Read AI or more bespoke programs specifically directed to lawyers such as Lexis Advance AI, ChatGPT for Law, Westlaw Precision, AI Lawyer, Luminance and CoCounsel Core. Such programs may use “chatbots” and prompt requests and refined requests from the users of such programs.
- Judges in New South Wales should not use Gen AI in the formulation of reasons for judgment or the assessment or analysis of evidence preparatory to the delivery of reasons for judgment.
- Gen AI should not be used for editing or proofing draft judgments, and no part of a draft judgment should be submitted to a Gen AI program.
- If using Gen AI for secondary legal research purposes or any other purpose, judges should familiarise themselves with the limits and shortcomings of large language model Gen AI, including:
- the scope for “hallucinations”, that is, the generation of inaccurate, fictitious, false or non-existent citations and fabricated legislative, case or other secondary references;
- the dependence of large language model Gen AI programs on the quality and reach of underlying data sets, including the possibility that underlying database(s) may include misinformation or selective or incomplete data or data that is not up to date or relevant in New South Wales and Australia;
- the scope for biased or inaccurate output because of the nature or limitations of the underlying data sets;
- the fact that any search requests or interactions or prompts with a Gen AI chatbot may, unless disabled, be automatically added to the large language model database, remembered and used to respond to queries from other users;
- the potential inability or lack of adequate safeguards to preserve confidentiality or privacy of information or otherwise sensitive material submitted to a public AI chatbot;
- the fact that data contained in a data set upon which a Gen AI program draws may have been obtained in breach of copyright; and
- the risk of inadvertently providing, through requested “permissions”, access to information on a judge’s or judicial staff member’s devices such as smartphones, iPads or other tablets.
- The product of all Gen AI generated research, even if apparently polished and convincing, should be closely and carefully scrutinised and verified for accuracy, completeness, currency and suitability before making any use of it. Gen AI research should not be used as a substitute for personal research by traditional methods.
- Judges should require that their associates, tipstaves or researchers disclose to the judge if and when they are using Gen AI for research purposes or any other related purpose, and associates, tipstaves or researchers should be separately required to verify any such output for accuracy, completeness, currency and suitability.
- Judges may require litigants (including litigants in person) and legal representatives including counsel to disclose any use of Gen AI in respect of written submissions or other documents placed before the Court, and may also require an assurance that any such documents have been verified for accuracy, including an identification of the process of verification followed including, where applicable, for the purpose of ensuring compliance with Practice Note SC Gen 23.
- Judges should be astute to identify any undisclosed use of Gen AI in court documents by litigants, including litigants in person, and legal practitioners.
- ‘Red flags’ associated with content generated by Gen AI, and which may indicate the unsafe, inappropriate or improper use of Gen AI, and hence the need to make further inquiries with practitioners or litigants in person, include:
- inaccurate or non-existent case or legislative citations;
- incorrect, inaccurate, out of date or incomplete analysis and application of the law in relation to a legal proposition or set of facts;
- case law references that are inapplicable or unsuited to the jurisdiction, both in terms of substantive and procedural law;
- case law references that are out of date and do not take account of relevant developments in the law;
- submissions that diverge from your general understanding of the applicable law or which contain obvious substantive errors;
- the use of non-specific, repetitive language; and
- use of language, expressions or spelling more closely associated with other jurisdictions.
- Due to the rapidly evolving nature of Gen AI technology, these guidelines will be reviewed on a regular basis.
The Hon. A S Bell
Chief Justice of New South Wales
21 November 2024
[1] A paper delivered at the NSW State Library in the 2025 Judges Series hosted by the Commercial Law Association of Australia. My thanks (in alphabetical order) to my Associates, Joshua Herschderfer and Tina Wu, for their research and assistance with this paper.
[2] The Hon. Justice Jane Needham was appointed to the Federal Court of Australia on 5 July 2024 and prior to that had a career at the NSW Bar spanning some 35 years. She did a subject in History and Philosophy of Science taught by A/Prof Peter Slezak dealing with artificial intelligence in, fittingly, 1984 as part of her BA. She lives and works on Gadigal land.
[3] Jonathan Swift DD, Gulliver’s Travels into Several Remote Nations of the World, Symons, Dublin, 1726.
[4] Chapter V.
[5] George Boole, An Investigation of the Laws of Thought: on Which are Founded the Mathematical Theories of Logic and Probabilities, Walton & Maberley, 1854.
[6] “The man who replaced the mind with a machine. The story of Leonardo Torres”, Medium: https://medium.com/serverspace-cloud/the-man-who-replaced-the-mind-with-a-machine-the-story-of-leonardo-torres-506785ec661d
[7] A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (1955): https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
[8] Par 7
[9] Herbert Simon, a US academic at Carnegie Mellon University and founder of the Carnegie Mellon School of Computer Science, said it would be able to do so by the 1980s: The New Science of Management Decision, Harper & Row, 1960.
[10] Hubert Dreyfus, Alchemy and Artificial Intelligence, RAND Corporation, 1965, said that the mind was not like a computer and there were limits beyond which AI could not progress.
[11] Peter Slezak, “Artificial Intelligence, Gödelian Arguments Against”, Encyclopedia of Cognitive Science, 2006.
[13] Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) https://apo.org.au/node/327400
[14] Justice Melissa Perry, paper presented at the Australian Law Librarians’ Conference, 9 August 2024: https://www.fedcourt.gov.au/digital-law-library/judges-speeches/justice-perry/perry-j-20240809
[15] Victorian Law Reform Commission, Artificial Intelligence in Victoria’s Courts and Tribunals: Consultation Paper (October 2024), Glossary: https://www.lawreform.vic.gov.au/publication/artificial-intelligence-in-victorias-courts-and-tribunals-consultation-paper/glossary/
[16] Mortimer CJ, Notice to the Profession, 29 April 2025
[17] I will use Practice Note SC Gen 23 as the example in this paper.
[18] Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation: https://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation – the County Court has the same Guideline
[19] Queensland Courts, Using Generative AI: https://www.courts.qld.gov.au/going-to-court/using-generative-ai
[21] Australasian Institute of Judicial Administration, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators: https://aija.org.au/publications/ai-decision-making-and-the-courts-a-guide-for-judges-tribunal-members-and-court-administrators/
[22] Andrew Ray, Visiting Fellow, ANU Law School, and Associate Professor Heather Roberts, ANU Law School, Research Paper; accepted for publication in the Australian Law Journal.
[23] Remarks to the Law Society of the UK conference, cited by Ray and Roberts
[24] Artificial Intelligence (AI) – Guidance for Judicial Office Holders, Courts and Tribunals Judiciary, 14 April 2025
[25] NSW Bar Association – GPT and AI Language Models – Guidelines (21 March 2024).
[26] Dahl, Magesh, Suzgun and Ho, Journal of Legal Analysis, 2024, 16, 64-93.
[27] Valu (No 2) at [22]
[28] [2025] EWHC 1040 (Admin) 3 April 2025
[29] At [58]
[30] [2025] EWHC 1383 (Admin)
[31] Ayinde (No 2) at [70]
[32] At [33]