
INNOVATE Magazine

INNOVATE is the online magazine by and for AIPLA members from IP law students all the way through retired practitioners. Designed as an online publication, INNOVATE features magazine-like articles on a wide variety of topics in IP law.

The views and opinions expressed in these articles are those of the authors and do not necessarily reflect the views or positions of AIPLA.


Is AI “Hallucination” a Proper Term in Patents?

Thinh Nguyen

 

Abstract

The term “hallucination” in the generative AI context carries two contradictory meanings: the first refers to unintended and unfavorable results, the second to intended and favorable results. When used in a patent claim, this contradictory term fails the “reasonable certainty” standard for definiteness. A new term is needed to replace “hallucination” when referring to the unintended and unfavorable results caused by improper training, prompting, or other errors in a machine learning architecture.


Patent law is about accuracy, exactness, and clarity. This basic tenet is captured in two subsections of 35 U.S.C. 112: subsections (a) and (b). Section 112(a) states, “The specification shall contain a written description of the invention . . . in such full, clear, concise, and exact terms . . .” Section 112(b) requires that the claims “particularly point out and distinctly define the metes and bounds of the subject matter to be protected by the patent grant.”

The standard for judging whether terms in patent claims are indefinite, vague, or ambiguous in violation of section 112(b) is well established. In Nautilus, Inc. v. Biosig Instruments, Inc., 572 U.S. 898 (2014), the Supreme Court established the “reasonable certainty” standard, which requires consideration of a totality of factors, including the specification, the prosecution history, the perspective of the person having ordinary skill in the art (PHOSITA), and other intrinsic evidence. Following Nautilus, several Federal Circuit cases provide further guidance on applying the reasonable certainty standard to terms that, like “hallucination,” lack clarity because they carry multiple meanings.

In Interval Licensing LLC v. AOL, Inc., 766 F.3d 1364 (Fed. Cir. 2014), the court found that a claim limitation in a display system failed the reasonable certainty standard because the phrase “unobtrusive manner” could have a temporal dimension (the screen saver embodiment) as well as a spatial dimension (the wallpaper embodiment), and neither the specification nor the prosecution history provided the necessary clarity. In Teva Pharma. USA v. Sandoz, Inc., 789 F.3d 1335 (Fed. Cir. 2015)[1], the claim at issue recited a “molecular weight of about 5 to 9 kilodaltons.” There were three different ways of measuring “molecular weight,” and each could yield a different result, potentially outside the range of “about 5 to 9 kilodaltons.” The claims and the specification did not provide an explicit definition, and the prosecution history contained inconsistent statements. Applying the reasonable certainty standard, the court ruled that the claims were indefinite. Following Teva, in Dow Chemical Co. v. Nova Chemicals Corp., 803 F.3d 620 (Fed. Cir. 2015), the court found that the claim limitation “slope of strain hardening coefficient greater than or equal to 1.3” rendered the claims indefinite because four different measurement methods could produce different results, not all of which would satisfy the “greater than or equal to 1.3” limitation, and the specification did not provide guidance on which method to use.

These Federal Circuit cases provide further clarity to the “reasonable certainty” standard from Nautilus. In essence, the “reasonable certainty” standard excludes inconsistencies or contradictions in claim terms. Although the standard can be met with careful drafting by clearly and unambiguously defining a term and using it consistently,[2] it is sometimes difficult to satisfy because the terminology recited in the claims carries inherent inconsistencies or contradictions, or has no universally agreed meaning in the scientific community. This is the case with the term “hallucination” in artificial intelligence (AI).

AI hallucinations have recently received increasing attention in various fields, including health care, media, finance, scientific research, law enforcement, and law. In the legal field, AI hallucinations have become increasingly prevalent in the form of fabricated quotations or non-existent citations in court filings.[3] The term “hallucination” comes from mental health and psychological studies. In psychosis, hallucinations “involve experiencing sensations that have no external source.”[4] For example, auditory hallucinations involve hearing voices that nobody else can hear. “Hallucinations are sensory experiences – meaning they feel very real to the person experiencing them, even if no one else can perceive them.”[5] Accordingly, hallucinations are characterized by two attributes: (1) an incorrect perception and (2) the absence of a sensing receptor related to the underlying perception.[6] Based on this ordinary meaning of hallucination, engineers and scientists in fields other than psychology or mental health have borrowed the term to describe their scientific or engineering techniques or designs. Unfortunately, this borrowing, especially in the field of AI, results in a misnomer that may cause confusion or misunderstanding and may lead to incorrect interpretation or uncertainty. Currently, the term “hallucination” has two contradictory meanings. The first meaning is the one familiar to the public; the second is known mainly to a small group of scientists, engineers, and researchers. Because members of this small group are also members of the public, a PHOSITA confronts two contradictory meanings.

For the first meaning, various definitions exist. OpenAI, the developer of ChatGPT, defines hallucinations as “plausible but false statements generated by language models.”[7] Other definitions include: (1) patterns or objects in a large language model (LLM) “that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate,”[8] (2) “an erroneous output produced by an LLM,”[9] and (3) “the generated content that is nonsensical or unfaithful to the provided source content.”[10] There are proposals to use terms other than hallucination to describe these types of behaviors or phenomena in the AI context. These proposed terms include confabulation,[11] fabrication, and delusion. Sun et al. reviewed classifications of distorted information, including various types of hallucinations and errors.[12] In particular, Sun et al. distinguish AI hallucinations, disinformation, and misinformation.[13]

Despite these varied definitions, the hallucinations they describe share common features: they are undesirable responses caused by errors, whether incomplete training data or improper post-training processing such as aggressive prompting. In these instances, hallucinations occur sporadically and are unexpected or unintended. Accordingly, the first meaning of the term “hallucination” captures two basic attributes of the responses or results: they are unintended and undesirable.

In its second meaning, the term “hallucination” describes the opposite, even contradictory, behavior in a machine learning context. These so-called hallucinations are beneficial, positive, and desirable. They are the outcomes of carefully designed structures or algorithms, which produce a mixture of results that includes the beneficial hallucinations. The second meaning of the term “hallucination” is therefore the opposite of the first.

In the English language, a term with two opposite meanings is called a contronym (or contranym); examples include “sanction” and “oversight.” The use of contronyms in published materials or conversations typically presents no problem because the meaning is clarified by context. When the context is unclear or unavailable, however, contronyms often cause confusion or misunderstanding. In patent law, contradictory meanings can lead to disastrous results because the terms in patent claims may dictate the outcome of litigation or of challenges to validity or infringement.

One early use of the term “hallucination” in AI or computer science appears in the PhD dissertation that John Irving Tait submitted to the University of Cambridge in 1982.[14] In his dissertation, Tait describes Scrabble, a computer program that summarizes English texts. Hallucination, according to Tait, is the “misassignment of a text to a script and the subsequent misanalysis of the text on the basis of that assignment.”[15] This observation is based on his analysis of an earlier text-processing program called FRUMP. Tait explains, “I have called this problem the hallucination of matches because in effect what FRUMP has done is to see in the incoming text a text which fits its expectations, regardless of what the input text actually says.”[16] In other words, the initial misassignment of a text dictates the subsequent analysis of the text, forming a global, preconceived interpretation that ignores anything unusual in the text. This parallels part of what characterizes hallucination in psychosis: the disregard of facts or reality.
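
To make the misassignment concrete, here is a minimal, hypothetical Python sketch; it is not Tait’s Scrabble or FRUMP code, and the SCRIPTS table, keyword scoring, and example sentence are invented for illustration. A naive matcher assigns every text to its best-scoring script, however weak the fit, and then “summarizes” the text through that script while discarding whatever does not fit.

```python
# Hypothetical sketch (not Scrabble's or FRUMP's actual code): a naive
# script matcher that always picks the best-scoring script, even when the
# fit is poor, and then "reads" the text through that script.

SCRIPTS = {
    "earthquake": {"earthquake", "quake", "magnitude", "tremor", "damage"},
    "election":   {"election", "vote", "ballot", "candidate", "poll"},
}

def assign_script(text: str) -> str:
    words = set(text.lower().split())
    # Score each script by keyword overlap; no minimum threshold is applied,
    # so even a weak overlap "wins" the assignment.
    scores = {name: len(words & keys) for name, keys in SCRIPTS.items()}
    return max(scores, key=scores.get)

def summarize(text: str) -> str:
    script = assign_script(text)
    # Everything that does not fit the chosen script is simply ignored,
    # mirroring the "misanalysis of the text on the basis of that assignment."
    kept = [w for w in text.split() if w.lower() in SCRIPTS[script]]
    return f"[{script}] " + " ".join(kept)

# A text about a delayed vote count is misassigned to the earthquake script
# because "damage" and "tremor" outscore "vote"; the summary then reflects
# the script's expectations rather than what the text actually says.
print(summarize("Storm damage and tremor reports delayed the vote count"))
```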

In 1999, Simon Baker and Takeo Kanade published the results of their work, “Hallucinating Faces.”[17] Their goal was to enhance the resolution of images of human faces. The technique was based on learning a resolution-enhancement function from a training set of high-resolution images. Baker and Kanade briefly stated that the additional pixels generated to increase the image resolution were hallucinated.[18] By their own admission, “the learning algorithm at the heart of our approach is just a simple nearest neighbor algorithm.”[19] In other words, their algorithm created new pixels through a sophisticated form of interpolation within a local region, following a carefully designed procedure built from a training set.
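
The flavor of that procedure can be sketched in a few lines of Python and NumPy. The arrays below are random stand-ins (not the Baker and Kanade training data or their actual algorithm), assumed to represent aligned pairs of low-resolution and high-resolution face patches; a nearest-neighbor lookup then “hallucinates” the extra pixels by copying detail from the closest training example.

```python
# Minimal nearest-neighbor "hallucination" sketch in the spirit of the
# approach described above; all data here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set (random stand-ins): low-res 4x4 patches paired
# with corresponding high-res 8x8 patches.
train_lo = rng.random((500, 4, 4))
train_hi = rng.random((500, 8, 8))

def hallucinate_patch(lo_patch: np.ndarray) -> np.ndarray:
    """Return the high-res patch whose low-res counterpart is nearest."""
    dists = np.linalg.norm(
        train_lo.reshape(len(train_lo), -1) - lo_patch.ravel(), axis=1
    )
    return train_hi[np.argmin(dists)]

# The "new" pixels are not observed in the input; they are borrowed from
# whichever training example best matches locally, i.e., an intended,
# favorable output rather than an error.
query = rng.random((4, 4))
print(hallucinate_patch(query).shape)  # (8, 8)
```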

In 2016, Fawzi et al. used a maximization algorithm to reconstruct missing pixels based on the prior knowledge of a neural network classifier pretrained on training images.[20] Hallucination was the result of “filling the missing part” using the pretrained neural network. Fawzi et al.’s use of the term “hallucination” is similar to Baker and Kanade’s: the intentional creation of new information to achieve a favorable result by exploiting a neural network’s learned ability to generate that information.
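
The following toy Python sketch shows the general shape of such an approach, not Fawzi et al.’s actual method: a fixed linear scorer (an invented stand-in for a pretrained classifier) scores a flattened image, and the missing pixels are filled in by gradient ascent on that score, so the reconstructed region becomes whatever the scorer most expects to see.

```python
# Toy "inpainting by hallucination" sketch: optimize only the missing
# pixels to maximize a fixed scorer. The linear model w is a placeholder
# for a pretrained classifier; everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
D = 64                               # flattened 8x8 "image"
w = rng.standard_normal(D)           # stand-in for a pretrained classifier
image = rng.random(D)
missing = np.zeros(D, dtype=bool)
missing[20:30] = True                # region to reconstruct

x = image.copy()
x[missing] = 0.5                     # neutral fill as the starting point
x0 = x.copy()
for _ in range(200):
    # For a linear score w @ x, the gradient with respect to x is w, so
    # ascend only on the missing entries and keep pixel values valid.
    x[missing] += 0.05 * w[missing]
    x = np.clip(x, 0.0, 1.0)

print("score with neutral fill:", float(w @ x0))
print("score after ascent     :", float(w @ x))
```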

Using a similar approach in a different application area, several researchers, including the 2024 Nobel laureate David Baker, developed techniques for protein structure prediction. Inspired by Google’s DeepDream, in which networks trained to recognize faces generate images that “do not represent any actual face, but what the neural network views as an ideal face,” they used a network trained to predict structures from sequences to generate “brand-new ‘ideal’ protein sequences and structures.”[21] They used the term “hallucinations” when referring to Google’s DeepDream article[22] by Mordvintsev et al., even though Mordvintsev et al. did not use the term “hallucinations” in that article; instead, they named their technique “inceptionism.” Inceptionism prepares a network to generate new data based on what it learned from training images. Subsequently, Suzuki et al. developed what they called a “Hallucination Machine”[23] based in part on deep convolutional neural networks (DCNNs) and the DeepDream visualization algorithm. Suzuki et al. explicitly explained their reason for using the term “hallucination”: “Here, we address this challenge by combining virtual reality and machine learning to isolate and simulate one specific aspect of psychedelic phenomenology: visual hallucinations.”[24] Accordingly, although Baker’s team did not borrow the term “hallucination” directly from the DeepDream article, the term as used in protein structure prediction is an outgrowth of the DeepDream technique: it exploits a neural network’s ability to generate responses based on a set of training data, arriving at a stable protein structure after an iterative procedure with appropriate adjustments.
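
As a schematic illustration only, the Python sketch below “hallucinates” a sequence by iterative search: random mutations are proposed and kept whenever a fixed scoring function rates the result as more “ideal.” The scoring function and acceptance rule here are invented placeholders, not the network or procedure used by Baker’s team or by DeepDream.

```python
# Schematic "hallucination by iterative search": mutate a random sequence
# and keep changes that a fixed scorer prefers. The scorer is a hand-written
# placeholder for a pretrained structure-prediction network's confidence.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def ideality_score(seq: str) -> float:
    # Invented criterion: reward alternating hydrophobic/polar residues.
    hydrophobic = set("AVILMFWY")
    return sum(
        (seq[i] in hydrophobic) != (seq[i + 1] in hydrophobic)
        for i in range(len(seq) - 1)
    ) / (len(seq) - 1)

random.seed(0)
seq = "".join(random.choice(AMINO_ACIDS) for _ in range(40))
for _ in range(2000):
    i = random.randrange(len(seq))
    proposal = seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]
    # Greedy acceptance: keep the mutation only if the scorer prefers it,
    # so the sequence drifts toward what the scorer "views as ideal."
    if ideality_score(proposal) >= ideality_score(seq):
        seq = proposal

print(seq, round(ideality_score(seq), 2))
```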

In the cases above, the term “hallucination” has evolved from Tait’s early usage to acquire a different meaning: hallucinations are favorable results obtained from a carefully designed procedure that imposes certain evaluation criteria. This approach is similar to many other scientific approaches that impose constraints on an evolving process to obtain desirable results, without resorting to the “hallucination” label.

In summary, there are currently at least two opposite meanings for the term “hallucination” in the context of AI, or in the narrower subfield of machine learning. The first meaning refers to unintended and incorrect or undesirable results caused by improper training (whether in the training data or the training algorithm) or by improper prompting. The second refers to intended and correct or desirable results obtained through a carefully designed training and post-training process.

In patent law, where clarity and exactness are basic requirements, it is unacceptable to use a term with such a double meaning. Under the “reasonable certainty” standard, the term “hallucination” cannot be used in claims for AI inventions. It should not be used in the specification either, because the specification must conform to the requirement of 35 U.S.C. 112(a) that the invention be described in “full, clear, concise, and exact terms.”

If “hallucination” is improper, what term should be used?

For the first meaning, we can resort to an acronym to create a new term that conveys the complete meaning. Acronyms used as common nouns, such as scuba, laser, radar, taser, and snafu, are popular, and we can apply the same approach to replace the misnamed hallucination. I propose the acronym UMBIT (Unintended Misinformation By Improper Training) or UMBITOP (Unintended Misinformation By Improper Training Or Prompting). Under this meaning, “misinformation” covers all types of incorrect information, whether intentional or not. UMBIT or UMBITOP may not fully describe the first meaning, but it conveys the essence of the term, is easy to pronounce, and has not yet appeared in dictionaries.[25] Like misinformation, it is an uncountable noun that takes a singular verb. It can also be used as a verb (e.g., “the system umbits/umbitops . . .”). Variations of UMBIT or UMBITOP are possible.

As for the second meaning, a single new term may not be appropriate because of the specific nature of each procedure, algorithm, or technique; new terms may be developed case by case. Naming a technique or phenomenon in one field by borrowing a term from another field (e.g., computer virus) helps explain the technique or phenomenon and may act as a metaphor that creates a strong impression in people’s minds. However, one should not pursue terminological embellishment without keeping the analogy accurate. Patent applications may deal with any of the types of “hallucination” described above, but regardless of the type, a patent application should use terms that accurately describe the invention.

 


[1] On remand from the U.S. Supreme Court, Teva Pharma. USA v. Sandoz, Inc., 574 U.S. 318 (2015).

[2] See Ecolab, Inc. v. FMC Corp., 569 F.3d 1335 (Fed. Cir. 2009) (“An inventor may act as his own lexicographer to define a patent term.”).

[3] See, for example, Zach Warren, GenAI Hallucinations Are Still Pervasive in Legal Filings, but Better Lawyering is the Cure, Thomson Reuters, (Aug. 18, 2025), https://www.thomsonreuters.com/en-us/posts/technology/genai-hallucinations/.

[4] Leigh Shane, Delusions vs Hallucinations in Psychosis: Examples & Differences, AMFM, (Apr. 1, 2025), https://amfmtreatment.com/blog/delusions-vs-hallucinations-in-psychosis-examples-differences/#:~:text=Delusions%20are%20false%20beliefs%20that,things%20that%20aren't%20there.

[5] Id.

[6] Hearing strange voices despite having perfect auditory ability may not be called hallucination; it may be caused by something else.

[7] OpenAI, Why Language Models Hallucinate, (Sep. 5, 2025), https://openai.com/index/why-language-models-hallucinate/

[8] IBM, What Are AI Hallucinations? (Sep. 1, 2023), https://www.ibm.com/think/topics/ai-hallucinations.

[9] Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli, Hallucination is Inevitable: An Innate Limitation of Large Language Models, (2025), https://doi.org/10.48550/arXiv.2401.11817.

[10] Ziwei Ji et al., Survey of Hallucination in Natural Language Generation, ACM Computing Surveys (2023): 248-3, https://doi.org/10.1145/3571730.

[11] Benj Edwards, Why ChatGPT and Bing Chat are so Good at Making Things up, (Apr. 6, 2023), https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/.

[12] Sun, Y. et al., AI Hallucination: Towards a Comprehensive Classification of Distorted Information in Artificial Intelligence-Generated Content, Humanit Soc Sci Commun 11, 1278 (2024). https://doi.org/10.1057/s41599-024-03811-x

[13] Id. at 3.

[14] John Irving Tait, Automatic Summarizing of English Texts, (PhD Thesis, University of Cambridge, 1982), https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-47.pdf.

[15] Id. at 29.

[16] Id.

[17] Simon Baker and Takeo Kanade, Hallucinating Faces (Tech. Report CMU-RI-TR-99-3, Robotics Institute, Carnegie Mellon University, 1999), https://www.ri.cmu.edu/pub_files/pub2/baker_simon_1999_1/baker_simon_1999_1.pdf.

[18] Id.

[19] Id. at 45.

[20] A. Fawzi et al., Image Inpainting Through Neural Networks Hallucinations, 2016 IEEE 12th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), Bordeaux, France, (2016), pp. 1-5, doi: 10.1109/IVMSPW.2016.7528221.

[21] Anishchenko et al., De novo Protein Design by Deep Network Hallucination, bioRxiv, 2, (Jul. 23, 2020), https://www.biorxiv.org/content/10.1101/2020.07.22.211482v1.full.pdf.

[22] Alexander Mordvintsev et al., “Inceptionism: Going Deeper into Neural Networks,” Google Research, (Jun. 18, 2015), https://research.google/blog/inceptionism-going-deeper-into-neural-networks/.

[23] Suzuki, K. et al., “A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology,” Sci Rep 7, 15982 (2017). https://doi.org/10.1038/s41598-017-16316-2.

[24] Id.

[25] The closest spelling is umbite (umbit in German) which refers to a potassium zirconosilicate mineral found in northern Russia.


Thinh Nguyen has thirty years of experience in patent prosecution as a registered patent attorney. He holds a JD degree and a PhD degree in electrical engineering. His current practice focuses on patent prosecution involving a wide range of technologies. 

 

 
