US-20260127691-A1 - MANAGEMENT SYSTEM
Abstract
The disclosed computer-implemented method streamlines the creation of non-provisional patent applications and the response to patent office actions. For generating patent applications, the method automates the extraction and labeling of images from scientific texts, creating comprehensive patent specifications through a language model that integrates claims, descriptions, and drawings. The system offers editing interfaces, supports patentability assessments, and formats the final output to meet various requirements, resulting in ready-to-file patent applications. For office action responses, it involves an automatic analysis of the content, comparison with historical data from similar cases, and crafting arguments using insights from the examiner's record and past successful strategies.
Inventors
- Bao Tran
Assignees
- Bao Tran
Dates
- Publication Date
- 2026-05-07
- Application Date
- 2024-11-02
Claims (20)
- 1 . A method, comprising: receiving by a computer a user query with a document; generating one or more search terms or claims from the document; automatically extracting one or more figures from the document and analyzing the one or more figures and associated figure text using a visual multi-modal large language model (LLM) identification including encoders that embed images and words into vector representations, wherein the visual multimodal LLM is trained on embedded images and words to recognize the one or more figures in the document and annotations in the one or more figures to identify a part list from the one or more figures using the LLM and generating a set of references for at least one figure; performing a search to locate data responsive to the query, the document, the one or more figures and the set of references; chaining the located search data to augment the user query with the document, the one or more figures, and the set of references; and applying the LLM to generate a response based on the one or more search terms or claims for context and describing one or more references in the one or more figures.
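The chaining step of claim 1 — augmenting the user query with the document, the extracted figures, their part lists, and the retrieved search data before the LLM is applied — can be sketched as simple prompt assembly. This is a minimal illustration, not the claimed implementation; the `FigureContext` schema and `chain_prompt` helper are hypothetical names.

```python
from dataclasses import dataclass, field

@dataclass
class FigureContext:
    """One extracted figure plus the part list recovered from its annotations."""
    figure_id: str
    caption: str
    # reference-sign -> part name, e.g. {"102": "sensor module"}
    part_list: dict = field(default_factory=dict)

def chain_prompt(query: str, document_excerpt: str,
                 figures: list, search_snippets: list) -> str:
    """Chain the located search data, figures, and references onto the user
    query so the combined context can be passed to the LLM."""
    sections = [f"USER QUERY:\n{query}", f"DOCUMENT:\n{document_excerpt}"]
    for fig in figures:
        parts = "; ".join(f"{sign} = {name}"
                          for sign, name in sorted(fig.part_list.items()))
        sections.append(f"{fig.figure_id}: {fig.caption}\nReference signs: {parts}")
    if search_snippets:
        sections.append("RETRIEVED CONTEXT:\n" + "\n".join(search_snippets))
    return "\n\n".join(sections)

fig1 = FigureContext("FIG. 1", "Block diagram of the processing system",
                     {"102": "sensor module", "104": "controller"})
prompt = chain_prompt("Draft a brief description of the drawings.",
                      "A system for monitoring...", [fig1],
                      ["Prior art snippet about sensor controllers."])
```

In practice each section would be truncated to fit the model's context window; the sketch omits that bookkeeping.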
- 2 . The method of claim 1 , comprising identifying image objects embedded within the document and saving the objects as separate image files and differentiating the figure caption or text from other content as a brief description of the figure using the LLM.
- 3 . The method of claim 1 , comprising applying a visual language model to the extracted image to identify and label parts within technical drawings and diagrams; generating a brief description by extracting figure captions or nearby text associated with the image; and generating a detailed description section incorporating the extracted images, assigned brief descriptions, and labeled image parts.
- 4 . The method of claim 1 , comprising interacting with the LLM in a browser-based user interface.
- 5 . A method of processing a document, comprising: receiving a query with the document and responding to the query by identifying with a large language model (LLM) one or more decisions or claims in the document, wherein the LLM includes encoders that embed images and words into vector representations; searching and retrieving one or more references applied to the one or more decisions or claims; analyzing one or more references and applying one or more rules to traverse the at least one decision or one claim; and generating a response by chaining the references to the query and providing one or more LLM machine reasoned arguments to traverse the decision or claim, wherein the LLM is a reasoning LLM that generates one or more artificial intelligence context-sensitive texts using a transformer with a decoder that produces a text expansion to provide the context-sensitive text based on first and second texts by applying generative artificial intelligence with normalization and tokenization with zero-shot, one-shot or some-shot generation of the one or more artificial intelligence context-sensitive texts from the first and second texts, wherein the transformer receives a stream of tokens with an attention mask at one or more self-attention layers, and a causal mask is used for the text tokens.
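Claim 5's transformer receives a stream of tokens with an attention mask at the self-attention layers, combined with a causal mask for the text tokens. The interaction of those two masks can be shown concretely: position *i* may attend to position *j* only if *j* ≤ *i* (causal) and token *j* is not padding (attention mask). A minimal NumPy sketch, with hypothetical token values:

```python
import numpy as np

def build_masks(token_ids, pad_id=0):
    """Combine a padding attention mask with a causal mask as used in
    decoder self-attention: entry [i, j] is True iff position i may
    attend to position j."""
    token_ids = np.asarray(token_ids)
    n = token_ids.shape[0]
    attention_mask = token_ids != pad_id            # True where a real token sits
    causal = np.tril(np.ones((n, n), dtype=bool))   # lower-triangular: no attending ahead
    return causal & attention_mask[None, :]         # also mask out padded key positions

mask = build_masks([7, 3, 9, 0, 0])                 # two trailing pad tokens
```

Production decoders apply this mask as an additive −∞ bias on attention logits rather than a boolean gate, but the admissible positions are the same.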
- 6 . The method of claim 5 , wherein the reasoned argument is generated by a combination of a machine language model coupled to one or more machine reasoning models.
- 7 . The method of claim 5 , comprising querying a database to retrieve history data for a decision maker and analyzing the history data in generating the one or more machine reasoned arguments; and searching a database to identify a rejection and applying an argument from the rejection.
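Claim 7's history analysis — querying stored prosecution data for the same decision maker, identifying a matching rejection, and applying the argument that overcame it — reduces to a filtered lookup. The record schema below is a hypothetical illustration, not the claimed database design:

```python
def find_successful_arguments(history, examiner_id, cited_reference):
    """Return arguments from past rejections by the same examiner citing the
    same reference that were later withdrawn (i.e., the argument succeeded)."""
    return [rec["argument"] for rec in history
            if rec["examiner_id"] == examiner_id
            and rec["cited_reference"] == cited_reference
            and rec["outcome"] == "withdrawn"]

history = [
    {"examiner_id": "E123", "cited_reference": "US 9,000,001",
     "outcome": "withdrawn", "argument": "Reference lacks the claimed encoder."},
    {"examiner_id": "E123", "cited_reference": "US 9,000,001",
     "outcome": "maintained", "argument": "Argued teaching away."},
]
hits = find_successful_arguments(history, "E123", "US 9,000,001")
```

The retrieved arguments would then be passed to the reasoning LLM as context for drafting the new response.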
- 8 . The method of claim 5 , wherein generating the response comprises: adapting language from a prior application; and modifying an argument from the prior application to address the rejection.
- 9 . The method of claim 10 , comprising using the LLM to optimize licensing of the application with claim charts and a Customer Relationship Management system, further comprising: analyzing with the LLM a set of claims to identify technical features; generating charts based on the claims; implementing a dynamic Mixture of Experts (MoE) system with a FAISS-based gating mechanism to route specific analysis tasks to specialized AI models; decomposing, by the MoE system, the analysis task into sub-tasks and dynamically routing each sub-task to a selected expert model based on semantic similarity; conducting, by the specialized models, an AI-driven analysis by comparing the technical features to products or services in the market; generating using the LLM trained with prior campaign data personalized outreach materials for each prospect; initiating contact with the prospect through integration with email marketing platforms and social media sites; and continuously updating the AI performance by retraining the specialized models using captured data.
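The gating step of claim 9's Mixture of Experts — routing each sub-task to the expert model nearest in semantic similarity — can be sketched with cosine similarity over embeddings. Brute-force search stands in here for the FAISS index named in the claim; the 2-dimensional vectors are toy stand-ins for real embeddings:

```python
import numpy as np

def route_subtasks(subtask_vecs, expert_vecs):
    """MoE gating: assign each sub-task embedding to the expert whose
    centroid has the highest cosine similarity."""
    def normalize(x):
        x = np.asarray(x, dtype=float)
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    sims = normalize(subtask_vecs) @ normalize(expert_vecs).T
    return sims.argmax(axis=1)          # index of the selected expert per sub-task

experts = [[1.0, 0.0], [0.0, 1.0]]      # e.g. "claim analysis" vs. "market search"
subtasks = [[0.9, 0.1], [0.2, 0.8]]
assignments = route_subtasks(subtasks, experts)
```

A FAISS `IndexFlatIP` over normalized vectors computes the same nearest-expert assignment at scale.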
- 10 . A method, comprising: receiving a query with a document and extracting one or more uses using a large language model (LLM) including encoders that embed images and words into vector representations; identifying prospects based on a search and chaining the one or more uses and the document to the query as input to the LLM; and crafting tailored outreach materials using the LLM providing one or more artificial intelligence context-sensitive texts using a transformer with a decoder that produces a text expansion to provide the context-sensitive text based on first and second texts by applying generative artificial intelligence with normalization and tokenization with zero-shot, one-shot or some-shot generation of the one or more artificial intelligence context-sensitive texts from the first and second texts, wherein the transformer receives a stream of tokens with an attention mask at one or more self-attention layers, and a causal mask is used for the text tokens.
- 11 . The method of claim 10 , further comprising creating a canvas for the application and the prospects.
- 12 . The method of claim 10 , further comprising identifying a contact list to communicate with the prospects using chat, email, or social media channel.
- 13 . The method of claim 10 , further comprising providing a customer relationship management (CRM) system to track prospects.
- 14 . The method of claim 10 , comprising: receiving, by a computer system, a set of patent claims from a family of patents; analyzing, using natural language processing techniques implemented by the LLM within the computer system, the patent claims to identify key technical features; generating, by the LLM, structured claim charts based on the analyzed patent claims; conducting, by the specialized components of the LLM, an AI-driven infringement analysis by comparing the key technical features to products or services in the market; generating, using the LLM fine-tuned on successful licensing campaigns, personalized outreach materials through integration with marketing platforms and social media sites.
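The structured claim charts of claim 14 pair each identified claim element with the product or service evidence said to meet it. A minimal sketch of that data structure, with hypothetical field names:

```python
def build_claim_chart(claim_number, elements, evidence):
    """Assemble a structured claim chart: one row per claim element, with the
    mapped product evidence; elements lacking evidence are flagged unmapped."""
    rows = [{"element": el,
             "evidence": evidence.get(el, ""),
             "mapped": el in evidence}
            for el in elements]
    return {"claim": claim_number, "rows": rows}

chart = build_claim_chart(
    1,
    ["a visual encoder", "a causal mask"],
    {"a visual encoder": "Product datasheet, p. 12: image embedding module"},
)
```

In the claimed system the evidence mapping would come from the AI-driven comparison of technical features to market products rather than a hand-built dictionary.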
- 15 . A method, comprising: receiving a query and a publication with one or more images; generating one or more search terms or claims from the publication; automatically extracting the one or more images from the publication using a visual large language model (LLM) including encoders that embed images and words into vector representations; automatically assigning brief descriptions to the extracted one or more images based on the one or more images; applying the visual LLM to the extracted one or more images to identify and label reference signs for one or more images; generating claims based on the publication; and applying the language model to generate a specification based on the claims, one or more images, brief descriptions.
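Claim 15's step of assigning brief descriptions to extracted images draws on text associated with each figure. A simple caption regex stands in here for the visual LLM; the pattern and helper name are illustrative only:

```python
import re

def assign_brief_descriptions(publication_text):
    """Pair each figure number found in the publication text with the caption
    text that follows it, as a brief description of that drawing."""
    pattern = re.compile(r"FIG\.?\s*(\d+)[.:]?\s*([^\n]+)")
    return {int(num): caption.strip().rstrip(".")
            for num, caption in pattern.findall(publication_text)}

text = "FIG. 1: Overview of the pipeline.\nFIG. 2 shows the encoder stack."
briefs = assign_brief_descriptions(text)
```

The claimed visual LLM would additionally read reference signs out of the drawing itself, which no text regex can do.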
- 16 . The method of claim 1 , comprising generating a plan to respond to the query; iteratively searching and analyzing multiple sources related to the query; reasoning about the gathered information to refine the plan; and rendering a response.
- 17 . The method of claim 1 , comprising analyzing the content of the document using a transformer; extracting information from the document; performing a web search to supplement the extracted information; and synthesizing a response that combines data from the document and web search results.
- 18 . The method of claim 1 , comprising analyzing the query using a transformer to determine context and intent; performing a search based on the analyzed query; retrieving information from one or more references; and synthesizing the retrieved information using the LLM to generate a response with citations to the one or more references in the response.
- 19 . The method of claim 1 , comprising providing one or more artificial intelligence context-sensitive texts using a transformer with a decoder that produces a text expansion to provide the context-sensitive text based on first and second texts by applying generative artificial intelligence with normalization and tokenization with zero-shot, one-shot or some-shot generation of the one or more artificial intelligence context-sensitive texts from the first and second texts, wherein the transformer receives a stream of tokens with an attention mask at one or more self-attention layers, and a causal mask is used for the text tokens.
- 20 . The method of claim 1 , comprising receiving, via an application programming interface (API), the query and autonomously analyzing, using the LLM, a plurality of sources to answer the query and generating, based on the analysis, a response across multiple domains.
Description
BACKGROUND OF THE INVENTION
This application is related to US20230252224A1 and to Serial ______, the contents of which are incorporated by reference. According to the AUTM Licensing Survey, technologies developed by universities and research institutions led to the launch of 714 new commercial products in 2023. Total research expenditures grew to $104 billion in 2023, the highest in the survey's 30-year history, marking a 13% increase from 2022. Federal funding rose more than 10%, while industry support grew 4% over the prior year. License income was reported at $3.6 billion, slightly down from the $3.8 billion peak in 2022. Over 25,000 invention disclosures were reported to institutions in 2023, contributing to a cumulative total of more than half a million disclosures. Nearly 3,000 patent licenses, over 3,200 copyright licenses, and more than 1,600 other types of licenses were recorded in 2023.
The patenting of intellectual property is a nuanced process often fraught with procedural intricacies and specialized discourse. Within this space, university researchers and licensing practitioners navigate complex interactions with patent filings, where translating scientific discoveries into patent applications requires both a deep understanding of the underlying technology and the ability to express highly technical concepts within the confines of a structured legal document. After filing, inventors encounter office actions that require informed responses. Further, inventors face difficulty in licensing their inventions.
SUMMARY OF THE INVENTION
In one aspect, the method involves a computer-based process that receives a scientific publication as input and generates a non-provisional patent application.
The process extracts images from the scientific publication and assigns brief descriptions to these images using associated text from the publication. A multimodal language model is used to identify and label components or reference signs within the images. The system creates patent claims based on the publication's contents, which are then shown to a user for validation. Once the user has edited and approved the draft claims, a large language model drafts the patent application, guided by the claims and the reference signs in each drawing. The method acts as a copilot under the complete control of the user and creates the application based on the user-approved or user-edited claims, the images provided by the user in the publication, the brief description of the drawings, and the labeled parts or reference signs provided by the user in the publication. The result is a non-provisional patent application including a background section, a summary based on the claims, drawings with descriptions, a detailed description of one or more implementations, and the set of claims, all fully human controlled.
In yet another aspect, the method involves a computerized approach to handling a patent office action: receiving the action, analyzing it to identify rejections and cited references, querying a database for the patent examiner's history, finding similar past rejections that involve the same references, reviewing successful arguments that overcame those rejections, formulating a response incorporating these arguments, and then outputting the crafted response.
Advantages of one implementation may include one or more of the following:
- Automation Efficiency: One implementation dramatically increases the efficiency of patent application preparation and office action responses. By automating the initial draft and response processes, it reduces the time patent practitioners spend on repetitive and time-consuming tasks, allowing them to focus on higher-level strategic aspects of patent prosecution.
- Consistency and Accuracy: By employing historical data on successful arguments and leveraging pre-existing successful response templates, one implementation ensures consistency and improves the accuracy of office action responses. This can potentially lead to higher acceptance rates for patent applications.
- Reduction of Human Error: The method minimizes the risk of human error that can occur with manual patent drafting and office action analysis. By utilizing natural language processing and machine learning algorithms, one implementation can identify details and generate legal and technical language.
- Cost-Effectiveness: Automation of the patent drafting and office action response processes may reduce the cost associated with these activities. This reduction in labor time and effort can translate into lower fees for clients seeking patent protection.
- Enhanced Patent Quality: The systematic approach of one implementation in drafting patent applications can result in higher-quality patents. By ensuring thorough consideration of patent claims and adherence to legal requirements, the technology can mitigate p
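The first step of the office-action workflow described above — analyzing the action to identify rejections and cited references — can be sketched with pattern matching. The regexes below approximate what a production parser would do and are illustrative, not the claimed implementation:

```python
import re

def analyze_office_action(text):
    """Identify the statutory basis of each rejection (35 U.S.C. 102/103)
    and the US patent references cited against the claims."""
    rejections = re.findall(r"35 U\.S\.C\.\s*\S?\s*(10[23])", text)
    references = re.findall(r"US\s?\d{1,2},\d{3},\d{3}", text)
    return {"rejections": sorted(set(rejections)),
            "references": sorted(set(references))}

action = ("Claims 1-5 are rejected under 35 U.S.C. 103 as obvious over "
          "US 9,123,456 in view of US 8,765,432.")
summary = analyze_office_action(action)
```

The extracted rejection types and references would then drive the examiner-history query and the search for similar past rejections.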