
As artificial intelligence systems become more sophisticated, applicants face increasing scrutiny over how AI inventions are claimed. A central challenge in U.S. practice is determining how to balance functional claim language with the level of structural or algorithmic detail required to satisfy disclosure and definiteness requirements. The issue is not new, but AI model development highlights the tension: many innovations are best described by what the model achieves rather than by how it is constructed. Examiners, however, must evaluate whether the application provides enough concrete information for a skilled person to understand, implement, and distinguish the claimed invention.
Functional claim language describes an invention by its results or capabilities. For AI models, this often appears in formulations such as “a model configured to classify,” “a neural network trained to detect,” or “a system for predicting.” These formulations are appealing because they track how practitioners talk about machine-learning systems. They also allow applicants to avoid locking themselves into a specific implementation that may change as models evolve.
Under USPTO practice, functional language is permissible, but it triggers familiar constraints. Examiners may issue rejections when the claims appear to cover all ways of achieving a functional result rather than the specific approaches described in the specification. Questions arise around whether the specification sufficiently supports the breadth of the functional expression and whether undue experimentation would be required to implement it. When the claim invokes a “means for” construction (sometimes without even using the specific “means for” language), the analysis becomes more rigid, and the structure disclosed in the specification must be clearly linked to the claimed function.
For AI inventions, this raises a practical issue: the “structure” of a machine-learning model is often defined not only by its architecture, but by its training data, hyperparameters, optimisation processes, loss functions, and iterative adjustments. These elements may not be readily captured in functional language alone, which increases the risk that the claim will be judged overly broad or indefinite.
The question then becomes what level of structural or algorithmic detail applicants must disclose. Traditional software cases require an algorithm or step-by-step procedure when functional language is used. With AI, the equivalent may include the model architecture (layers, nodes, connections, modules); training procedures and optimisation strategies; the nature and role of the datasets used; and parameter-selection or update rules, together with the technical rationale for achieving the claimed effect.
Not every application needs to disclose all of these elements, but the more the claim relies on functional outcomes, the more an examiner may require details showing how the model actually achieves those outcomes.
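To make the categories above concrete, the toy sketch below shows, in code, the kinds of detail a specification might disclose for a trained model: an explicit architecture, a named loss function, stated hyperparameters, and a parameter-update rule. It is purely illustrative; the model (a single-unit logistic classifier) and all names and values are hypothetical examples chosen for brevity, not a representation of any real claimed invention.

```python
import math

# Illustrative only: a toy "model" making explicit the disclosure elements
# the text lists. All names and values here are hypothetical.

# (1) Architecture: a single linear unit followed by a sigmoid activation.
def predict(weights, bias, x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# (2) Loss function: binary cross-entropy.
def loss(p, y):
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# (3) Training procedure with explicit hyperparameters (learning rate,
#     epoch count) and an explicit parameter-update rule (gradient step).
def train(data, n_features, lr=0.5, epochs=200):
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = predict(weights, bias, x)
            grad = p - y  # d(loss)/dz for sigmoid + cross-entropy
            weights = [w - lr * grad * xi for w, xi in zip(weights, x)]
            bias -= lr * grad
    return weights, bias

# (4) Nature and role of the dataset: a toy logical-AND dataset.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data, n_features=2)
```

A claim reciting only the functional outcome ("a model trained to classify") would cover this implementation and countless others; disclosure at the level of items (1)–(4) is what lets an examiner assess whether the claimed breadth is supported.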
Enablement, written description, and definiteness all converge in this analysis. An application that merely states that a model “learns to perform task X” without any indication of how it is trained or what characteristics allow it to perform X may be deemed insufficient. Conversely, a detailed description of the training approach and model behaviour can support more ambitious claim language.
AI presents unique challenges that complicate this balance: model behaviour often emerges from training rather than from explicitly designed steps, the relevant “structure” is distributed across data, hyperparameters, and optimisation choices, and implementations evolve rapidly over a product’s life.
As a result, applicants are increasingly experimenting with hybrid claiming: defining certain architectural elements structurally, while capturing performance aspects functionally, supported by detailed descriptions of training regimes or optimisation processes.
While the USPTO and the EPO share the need for sufficient disclosure, the EPO places particularly strong emphasis on technical character. Functional language is permitted, but the claims must clearly tie the stated function to a technical effect achieved through identifiable technical features. The EPO often expects explicit structural or methodological details showing how the model achieves the technical contribution. Merely stating that a neural network is “trained to” perform a task may be insufficient unless the technical nature of the training and the technical purpose of the output are clearly demonstrated.
In practice, the EPO is less tolerant of claims that define AI models purely by functional outcomes. Applicants often need to disclose concrete elements such as training data categories, specific input transformations, or structural features that directly contribute to the technical effect. This can lead to narrower, more implementation-focused claims than those pursued in the U.S., where broader functional formulations may still be viable if supported by sufficient disclosure.
When preparing claims for AI inventions in the United States, applicants benefit from thinking carefully about how the invention is presented and supported. A key question is whether the technology can be meaningfully described in structural terms without locking the applicant into an implementation that may evolve. At the same time, any functional language used in the claims must rest on concrete technical support in the specification, which often requires thoughtful disclosure of model architecture, training methodology, or optimisation behaviour. Applicants also need to consider how much of this information they are willing to make public, given that some aspects of AI development may hold significant competitive value as trade secrets. Another practical factor is the growing complexity of the prior-art landscape: broad function-oriented claims may attract scrutiny on both novelty and enablement grounds. Taken together, these considerations call for a deliberate drafting strategy that balances flexibility with specificity, and long-term enforceability with the realities of rapidly changing AI development cycles.
For cross-Atlantic portfolios, early coordination is useful: claims drafted with both U.S. and EPO standards in mind tend to avoid late-stage narrowing and minimise inconsistencies in prosecution strategy.
Claiming AI inventions requires a careful equilibrium between the flexibility of functional language and the rigour of structural disclosure. The USPTO allows functional formulations but expects meaningful technical support in the specification. As AI technologies mature, applicants should anticipate closer scrutiny of how claimed functions are tied to specific structures or training approaches. A thoughtful drafting strategy – one that articulates the invention’s technical mechanisms while preserving room for technological evolution – remains essential for securing durable, enforceable protection.