Yesterday in my patent prosecution course, students turned to AI tools to help them draft patent claims. None of the AI-proposed claims were ready for prime time, but they served as a useful starting point as the students organized their thoughts. More and more attorneys are turning to these same AI tools to help them be more productive and efficient while delivering a higher-quality work product. It is tough, for instance, to read all the prior art. AI tools can help mine the references for potential obviousness problems — and provide a pin cite to the key language in the art.
USPTO Director Vidal recently released a new memorandum concerning the use of artificial intelligence (AI) in patent office proceedings (directorguidance-aiuse-legalproceedings). The memo recognizes that AI tools can be powerful both for applicants and for USPTO examiners, but AI tools cannot be used to avoid ethical duties. The memo thus provides firm guidance that existing ethics rules on candor and misconduct apply even when AI tools are used to generate legal filings and evidence. This comes on the heels of several high-profile cases of “AI hallucination” outside the PTO context, in which language models like ChatGPT produced false information that lawyers presented as fact or law.
For example, submissions to the USPTO generally require a signature, and by affixing a signature, the signatory (who must be a person) certifies, among other things, that “All statements made therein of the party’s own knowledge are true,” that “all statements made therein on information and belief are believed to be true,” that “after an inquiry reasonable under the circumstances” any “legal contentions are warranted by existing law” or “by a nonfrivolous argument for the extension … or reversal of existing law,” and that “factual contentions have evidentiary support” or likely will have evidentiary support after a reasonable opportunity for discovery.
Quoting from USPTO Rule 11.18. This rule is based directly on Federal Rule of Civil Procedure 11, which has been applied in the AI context. Director Vidal goes on to highlight a few particular circumstances:
Simply assuming the accuracy of an AI tool is not a reasonable inquiry.
A submission (including an AI-generated or AI-assisted submission) that misstates facts or law could also be construed as a paper presented for an improper purpose because it could “cause unnecessary delay or needless increase in the cost of any proceeding before the Office.”
Etc.
Be careful out there, everyone!