October 7, 2021
by: Robert W. Downs, Ph.D.
My career includes work in artificial intelligence research, patent examination at the U.S. Patent and Trademark Office, and patent application preparation and prosecution, all over a period of about 40 years. Over that period, I have completed three master's degree programs in computer-related fields. As a researcher, I have experienced the difficult challenge of not just writing computer programs in languages such as Fortran and Lisp, but getting the programs to work and actually produce the desired results. As a patent examiner, I have experienced the difficult challenge of searching for prior art based on a high-level description of an invention, as well as judging whether a computer-related invention is patentable under Section 101. As a patent agent, I have experienced the difficult challenge of preparing patent applications for computer-related inventions while keeping in mind a broad range of potential prior art and the high likelihood of receiving a rejection under Section 101.
Each experience brings a different perspective on the issue of patentability of computer-related inventions. As a researcher, the importance of publication and documentation was stressed, but copyright and patenting were necessary for protection from a legal standpoint. The question of patenting is especially important in group and/or corporate projects, where it is important to define ownership. Ownership can even be a concern for individuals in cases where they use software developed by others. At the same time, computer programming typically involves principles of software reuse and abstraction away from details. Accepting that much software development is performed at a high level, often using known libraries, a final product may still require protection as an incentive for the extensive programming effort. As a concrete example, there is a need for protection from one or more programmers taking all or substantial portions of the program code and claiming the final product as their own. As a patent examiner, I was faced with deciding whether a patent application described an invention that one of ordinary skill could make and use without undue experimentation, or whether the description was merely a broad concept, an abstract idea. My perspective as a patent examiner was based on my own experience as a researcher in the computer art. As a patent agent, I am now faced with preparing patent applications for computer-related inventions, where in some cases the invention is so complex that a detailed description containing too much information only muddies the water as to what the inventive concept might be. In most cases, the invention is multidisciplinary. The inventors may be experts in a domain, but not themselves computer programmers. The computer programmers serve mainly to implement the invention that the inventor has developed, and they may use existing libraries to save programming time and cost.
The challenges in patent examination and patent preparation are highlighted by the unfriendly nature of the patent laws as they relate to computer-related inventions. The statutory categories of invention long predate the 1952 Patent Act, going back to the time of Thomas Jefferson, and I would say that Thomas Jefferson did not consider computer-related inventions to be a category of invention. The Patent Act of 1790 defined patentable subject matter as “any useful art, manufacture, engine, machine, or device.” The 1952 Patent Act did not substantially deviate from those original subject matter categories, specifying “process, machine, manufacture, or composition of matter.”
While working in research at the Department of the Navy, we were told by a patent attorney at the Naval Research Lab that we could not obtain a patent on an expert system shell for fault diagnosis because it was computer software. This reaction from the patent attorney was based on the patent law that existed in the late 1980’s. During a consultation with the patent attorney, we asked why other research groups, such as those working on radar, could obtain patents for their work, whereas the work of the Artificial Intelligence group could not be patented. The response was simply that the Supreme Court had held that computer software, i.e., computer algorithms, was not patent eligible. I suspect that this viewpoint reflected the general understanding of most patent attorneys at that time.
Also, during my research at the Department of the Navy, most of the work in Artificial Intelligence was in symbolic reasoning. Symbolic reasoning is an approach that models ways that humans think. A common representation used in symbolic reasoning was if-then rules, and reasoning could be performed by chaining the rules together. Often the if-then rules were based on the knowledge of an expert in a certain domain; hence, work was often directed to developing expert systems. In contrast, research in artificial neural networks, at least in the 1980’s, was considered primarily of interest to those modeling the human brain. In my work on projects that would actually be used on naval ships, artificial neural networks were considered too risky to be of practical use. For example, not enough was known about neural networks to determine whether they would actually work in a real-world application. At least in the case of expert systems, the rules were transparent; it was unclear to us whether we would be able to show a program manager what an artificial neural network knew.
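To make the idea of chaining if-then rules concrete, here is a minimal sketch of forward chaining in Python (not the Lisp we used at the time); the fault-diagnosis rules and fact names are purely hypothetical.

```python
# Minimal forward-chaining sketch: a rule fires when all of its "if" facts
# are in working memory, adding its "then" fact; repeat until nothing new fires.
RULES = [
    ({"fan_fails", "power_ok"}, "motor_fault"),          # hypothetical diagnostic rules
    ({"motor_fault", "spare_available"}, "replace_motor"),
]

def forward_chain(initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:                        # keep cycling until no rule adds a new fact
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # the rule "fires"
                changed = True
    return facts

print(forward_chain({"fan_fails", "power_ok", "spare_available"}))
# the inferred facts include 'motor_fault' and 'replace_motor'
```

The transparency mentioned above comes from exactly this structure: every conclusion can be traced back to the specific rules and facts that produced it.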
In any case, the experience in artificial intelligence research that I gained in the 1980’s has left me with two issues that have stayed with me throughout my career. One, an officer in charge of one of my projects asked, “Why should I fund a project to create a machine that will make mistakes just like people do?” Two, as I mentioned above, I was told by a patent attorney at the Naval Research Lab that, because the project was computer software, we would not be able to obtain a patent. The result of the latter issue was that once funding for the project was cut, one of the researchers left the Navy and took the project’s software with him. That researcher changed the basic algorithm to a different form of statistical model and subsequently sold the revised computer program back to the Navy. The researcher avoided copyright infringement and knew that we would not be able to obtain a patent.
While I was working as a patent examiner in the new art unit for Artificial Intelligence in the 1990’s, major changes were occurring in the art of software patents. The number of patent applications being filed and patents being granted in class 364, subclasses 200 and 900, for computer-based inventions was rapidly increasing, and class 395 was created to handle the growth. When I started work in that art unit, the patents in Artificial Intelligence and Robotics filled just a few columns of shoes (the boxes that held paper copies of patents for searching). Much of my prior art searching was in non-patent literature, as there was far more research being published in this area than there were patents. That quickly changed within a couple of years. As a primary examiner in the Artificial Intelligence art unit, I was assigned to a reclassification project to create a new class 706 for artificial intelligence based on the patents in computer class 395.
Several people at the USPTO, including Gerry Goldberg (the Director over the art units that examined in class 395), Stephen Kunin, my own supervisor Michael Fleming, and others, decided to get a handle on computer-related inventions. As a junior examiner, I was asked by my supervisor to brief some of the case law on the issue of patent eligibility under 35 U.S.C. §101. Early U.S. Supreme Court decisions that addressed the issue of patentable subject matter [1] [2] painted a picture that computer-related inventions, particularly inventions related to computer software algorithms, were not patent eligible. However, a closer analysis of this early case law revealed that there were situations where computer-related inventions could be patented. In particular, the U.S. Supreme Court decided that the execution of a process by a computer would not, by itself, preclude patentability of the invention as a whole. [3] Also, several decisions by the U.S. Court of Customs and Patent Appeals shed some light on how the earlier Supreme Court decisions applied to the patentability of software-related inventions. [4] Subsequently, in the Artificial Intelligence art unit we developed arguments for raising the issue of patentability under 35 U.S.C. §101.
Our rejections under 35 U.S.C. §101 were primarily based on our judgment that a patent application was mainly a description of mathematical formulas, or some other theoretical description intended to broadly cover an aspect of artificial intelligence that might be accomplished sometime in the future. We were generally faced with examining patent applications for conceptual inventions that would require much work to put into practice (e.g., extensive expert knowledge), might be difficult to implement as a computer program, or were just general concepts. These were tough decisions, but we believed that the issue of patentability under 35 U.S.C. §101 at least had to be raised for the record.
Partly because of the increasing number of computer-related patent applications in light of the three Supreme Court decisions, the USPTO developed a first set of guidelines for the patentability of computer-related inventions. [5] Examiners were trained on these initial guidelines and used them in determining whether the claims in patent applications for computer-related inventions were directed to patentable subject matter. Our main understanding at that time was that claims directed to mathematical formulas were not patent eligible. The guidelines included a step that asked whether the claims were directed to a practical application. Consequently, in cases where the patent application disclosed the invention in terms of theorems and mathematical formulas, the claims were generally rejected under 35 U.S.C. §101. In most cases, the patent applications related to artificial intelligence were ultimately issued as patents, typically by working with the inventor to limit the claims to a practical application.
Most patent applications that I examined in the early 1990’s were in knowledge representation and reasoning, under the category of knowledge processing systems (see class 706, subclass 45), and were geared toward reasoning algorithms. Some commercial success in knowledge representation and reasoning occurred in the area of fuzzy logic. Fuzzy logic was considered a technique for firing an if-then rule based on a degree of membership in a membership function. A degree of membership can be in a range of 0 to 1, for example, thereby allowing for inexact rules. Fuzzy logic was being used successfully in controllers, both in hardware implementations and in software. Consequently, a number of companies were filing patent applications in fuzzy logic, particularly for control (see class 706, subclass 1).
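As an illustration of the idea, and not a description of any particular patented controller, the sketch below shows a triangular membership function mapping a crisp input to a degree of membership between 0 and 1, and a hypothetical rule fired to that degree; the temperature values and the output scaling are arbitrary.

```python
# Triangular membership function: membership rises from 0 at `low`,
# peaks at 1 at `mid`, and falls back to 0 at `high`.
def triangular(x, low, mid, high):
    if x <= low or x >= high:
        return 0.0
    if x <= mid:
        return (x - low) / (mid - low)
    return (high - x) / (high - mid)

# Hypothetical rule: IF temperature is "warm" THEN fan_speed is "medium",
# fired only to the degree that the input belongs to "warm".
temperature = 72.0
degree_warm = triangular(temperature, 60.0, 75.0, 90.0)   # 0.8 for this input
fan_speed = degree_warm * 1200.0                          # scale the firing strength to an output (rpm)
print(degree_warm, fan_speed)
```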
In most other areas, because patent applications in artificial intelligence generally related to reasoning algorithms, I found that applicants were often reluctant to limit their inventions to a specific application of the algorithm. The applicants believed that their algorithms had a wide range of uses. For example, an improved reasoning algorithm for backward chaining might be applicable to any area in which backward chaining may be applied. As an examiner, I would typically reject such a broad algorithm under Section 101 as being directed to a mathematical algorithm, in order to force the applicant to limit the claimed invention to a specific application.
In a similar manner, patent applications related to artificial neural networks were often described in the form of mathematical formulas. In particular, the improvement in an artificial neural network invention was often in the training algorithm, and the invention was described in terms of a mathematical formula. Again, I would typically reject this type of artificial neural network patent application under Section 101 as being directed to a mathematical algorithm.
As a patent examiner with previous experience working with artificial neural networks, I found that patent applications for artificial neural networks in the early 1990’s often involved learning algorithms that could be performed by a computer, with the expectation that the computer hardware necessary for practical application of the algorithm would exist sometime in the future. Consequently, I was often faced with the issue of whether the formulas described in the patent application were merely a mathematical algorithm, non-patentable under Section 101, or could be implemented as a practical application. The formulas described in the patent application often appeared to be theoretical concepts and did not describe a specific neural network architecture or any particular practical application of the learning algorithm. Thus, patent applications for artificial neural networks were often rejected under Section 101 for lacking a description of a practical application of a mathematical algorithm.
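For context on what such a learning algorithm looks like when it is actually reduced to practice, the following is a hypothetical sketch of gradient-descent training for a single sigmoid neuron; it is not taken from any application I examined, and the training data, learning rate, and number of epochs are arbitrary.

```python
import math
import random

# A single sigmoid neuron trained by gradient descent on squared error.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=2000, lr=0.5):
    w = [random.uniform(-0.5, 0.5) for _ in samples[0][0]]   # one weight per input
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            grad = (y - target) * y * (1.0 - y)   # derivative of squared error w.r.t. the pre-activation
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

# Hypothetical training data: the logical OR of two inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
for x, target in data:
    print(x, target, round(sigmoid(w[0] * x[0] + w[1] * x[1] + b), 2))
```

The difficulty at the time was that the claims typically stopped at formulas like these, without tying them to a particular network architecture or application.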
Fast forward to more recently: a form of artificial neural network referred to as deep learning has become popular, partly because of well-publicized successes in applications such as object recognition (machine vision), speech-to-text recognition, and game playing at or near the level of a human. Convolutional neural networks, modeled after the human visual cortex, have been successfully applied to object recognition. A deep learning method referred to as long short-term memory has demonstrated success in speech recognition. In addition, reinforcement learning has been successfully used in playing computer games. The success of deep learning has been partly due to improved algorithms that make it practical to train a deep neural network with several hidden layers.
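To give a sense of what a network with several hidden layers looks like in code, here is a minimal sketch of a small convolutional network written with PyTorch; the layer sizes and the assumption of 28x28 single-channel images are arbitrary choices for illustration, not any particular published or patented architecture.

```python
import torch
import torch.nn as nn

# A small convolutional network: two convolution/pooling stages that learn
# local visual features, followed by fully connected classification layers.
class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCNN()
logits = model(torch.randn(8, 1, 28, 28))   # a batch of 8 random "images"
print(logits.shape)                         # torch.Size([8, 10])
```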
In 2013, the USPTO moved to a new classification system, the Cooperative Patent Classification (CPC) system. Artificial intelligence patents can generally be found in CPC class G06N. I reviewed randomly chosen patents in this class and found that many patents that use machine learning primarily describe the practical application being patented and describe the machine learning aspect generically as any of several known machine learning algorithms. I found only a few patents related to machine learning that are directed to an improvement in a particular machine learning algorithm.
The current legal test for patent eligibility has moved from whether claims are directed to mathematical principles to whether claims are directed to abstract ideas.[6]
The trend toward focusing a patent application on a practical application may be because success in obtaining a machine learning patent depends mainly on the practical application of the machine learning. In particular, the continued importance of claiming a practical application is reflected in the most recent guidance from the USPTO [7], which indicates that “a claim reciting a judicial exception is now eligible at revised Step 2A unless that exception is not integrated into a practical application of the exception.”
A well-known test for artificial intelligence is the “Turing Test.” The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test.
Consequently, the inventions described in patent applications in the area of artificial intelligence may approach human intelligence in a manner that is indistinguishable from a human. Applications of artificial intelligence, including control of autonomous vehicles, robotics, and conversational artificial intelligence, may appear to be patent applications on human action and interaction. The availability of high-level tools for developing machine learning applications, such as scikit-learn, PyTorch, TensorFlow, and Keras, has made it easy for domain experts to adopt the technology (see the brief example after this paragraph). Still, these emerging areas will continue to challenge existing algorithms and will continue to be of great commercial interest. Autonomous vehicle control and conversational AI involve systems of several deep learning networks that interact with each other to accomplish a complex task involving continuity, e.g., a continuous conversation or continuous control of a vehicle. These areas bring into play concepts and issues that were dealt with in earlier work in knowledge representation and reasoning, as well as system aspects such as session management and coordination. Even as artificial intelligence evolves, it has aspects that are based on operations performed in a machine. From the perspective of a patent examiner, engineering solutions to problems related to a practical application should be considered. From the perspective of a patent practitioner, both the domain expert and, if possible, the software development team should be encouraged to describe the technological/engineering features of their invention so that the practical application is the result of engineering.
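To illustrate how high level these tools are, the sketch below trains and evaluates an off-the-shelf classifier with scikit-learn on one of its bundled toy datasets; the choice of model and dataset is arbitrary and is only meant to show how little algorithmic detail the developer supplies.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a bundled toy dataset of 8x8 digit images and hold out a test set.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# An off-the-shelf learning algorithm: the domain expert supplies the data
# and the problem framing, not the algorithm itself.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In a patent application, everything distinctive in such a system usually lies outside these few lines: in the data, the domain-specific engineering, and the way the model's output is put to practical use.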
We will explore the challenges involved in developing machine learning applications in future posts.

[1] Gottschalk v. Benson, 409 U.S. 63 (1972)
[2] Parker v. Flook, 437 U.S. 584 (1978)
[3] Diamond v. Diehr, 450 U.S. 175 (1981)
[4] In re Freeman, 573 F.2d 1237 (C.C.P.A. 1978); In re Walter, 618 F.2d 758 (C.C.P.A. 1980); In re Abele, 684 F.2d 902 (C.C.P.A. 1982)
[5] “Examination Guidelines for Computer-Related Inventions.” 1184 OG 87 (March 26, 1996); https://www.uspto.gov/web/offices/com/sol/og/1996/week13/og199613.htm
[6] Bilski v. Kappos, 561 U.S. 593 (2010); Mayo v. Prometheus, 566 U.S. 66 (2012); Alice v. CLS Bank, 573 U.S. 208 (2014)
[7] October 2019 Update: Subject Matter Eligibility, issued October 17, 2019; https://www.uspto.gov/sites/default/files/documents/peg_oct_2019_update.pdf