BEIJING, Oct. 31 -- In a recent commercial dispute hearing, Judge Zheng Jizhe of the Beijing Tongzhou District People's Court came across two suspicious case citations submitted by the plaintiff's lawyer.
At first glance, the references appeared legitimate -- one allegedly from the Supreme People's Court, and another labeled as (2022) Hu 01 Min Zhong No. 12345 from the Shanghai No. 1 Intermediate People's Court. Both seemed highly relevant, with facts, legal arguments, and reasoning closely mirroring the case at hand.
However, sensing something was amiss, Zheng verified the citations and found that the cases bore no resemblance to the lawyer's descriptions.
When questioned in court, the lawyer admitted that the "reference cases" were generated by an AI model. He had input key details from the current case into a large language model, received AI-generated sample cases, and -- without verification -- copied them directly into his written submission.
The court ultimately dismissed the AI-generated references and issued a formal warning in its written judgment.
Judge Zheng's ruling explicitly criticized the lawyer's behavior and urged legal practitioners to verify the authenticity and accuracy of any cases or legal provisions submitted to the court.
The judgment stressed that fabricated information produced by AI must not be allowed to disrupt judicial order.
Chen Hangping, a law professor at Tsinghua University, described the fabricated reference cases as typical examples of AI "hallucination" -- instances in which AI models produce false or misleading information packaged in a convincing, seemingly correct format and context.
He said that while AI has brought real convenience to legal work, legal professionals should remain aware of the boundaries of its use.
"Sometimes we use AI tools to search for judicial cases, but the process can be tricky," said Zhou Junwu, a senior partner at Beijing Jincheng Tongda & Neal Law Firm.
He said some of the cases may look authentic, but if the case number is a simple sequence like "1234" or contains an obvious pattern like "000," the case is probably fabricated.
"That's why you can't completely trust the results produced by AI," Zhou said.
According to a Southern Metropolis Daily report, a judge at a local court in east China also recently came across a lawsuit that appeared to have been written by AI, complete with fabricated white papers and incorrect case numbers.
These cases highlight a growing concern: AI-generated content is flooding the internet. According to U.S. media reports, a study by AI startup Graphite analyzing about 65,000 URLs posted online between 2020 and 2025 found that AI-generated news articles surged dramatically after the launch of ChatGPT in late 2022 and, at one point, briefly outnumbered those written by humans.
The AI-generated content has also found its way into courtrooms. "Globally, AI-hallucinated court filings by lawyers are not uncommon," said Chen, noting that such incidents have prompted countries including the United States and Singapore to adopt new rules ensuring the authenticity of AI-assisted materials submitted to courts.
Earlier this week, China's top legislature approved an amendment to the Cybersecurity Law, with one of the key changes focusing on improving oversight, ethical standards, and risk assessment for artificial intelligence.
Chen urged authorities to further strengthen administrative regulations and judicial interpretations to prevent the misuse of AI from undermining judicial order.
Despite the risks of AI-generated false information, the judicial sector continues to explore ways to use AI to assist in its work.
According to Zhou, who has recently attended several AI forums, some other countries have already experimented with AI-assisted criminal sentencing.
AI technology has undoubtedly made legal work more efficient, Chen said, adding that legal professionals should also improve their ability to use such tools wisely.