
US-12625865-B2 - Method and system for AI-based interactive searches


Abstract

A system for interactive searches based on user queries data and a plurality of Large Language Models (LLMs), including a processor of a Human-Machine Interface (HMI) server node configured to host a network of LLMs and at least one machine learning module (ML) and connected to at least one user-entity node over a network, and a memory on which are stored machine-readable instructions that, when executed by the processor, cause the processor to: receive a search request input data from the at least one user-entity node; evaluate, by a first dedicated LLM, relevance of the search request input data by discerning between primary and secondary information; responsive to evaluation by the first dedicated LLM, derive classifying features from the primary and secondary information and generate a feature vector based on the classifying features; ingest the feature vector into the ML module configured to extract additional search parameters from a predictive search model based on historical search data associated with the at least one user-entity node; dissect, by a second dedicated LLM, the search request input data and the additional search parameters to separate the data into qualitative and quantitative criteria elements based on the primary and the secondary information; transform, by a third dedicated LLM, the quantitative criteria elements into structured queries for a database; process, by a fourth dedicated LLM, the qualitative criteria elements by searching through a semi-structured data repository; and synthesize, by a fifth LLM, processed search findings into a succinct human-language summary.
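The staged pipeline described above lends itself to a short orchestration sketch. The Python below is an illustrative approximation only, not the patented implementation: every function name is hypothetical, and trivial rule-based stand-ins take the place of the five dedicated LLMs and the ML module.

```python
# Illustrative sketch of the claimed multi-LLM search pipeline.
# Each llm_* function is a hypothetical stand-in for a dedicated LLM;
# a real deployment would call hosted models instead of these rules.

def llm_evaluate_relevance(request: str) -> dict:
    """First LLM: discern primary from secondary information."""
    parts = request.split(",")
    return {"primary": parts[0].strip(), "secondary": [p.strip() for p in parts[1:]]}

def derive_feature_vector(info: dict) -> list:
    """Derive classifying features and encode them as a simple feature vector."""
    return [len(info["primary"])] + [len(s) for s in info["secondary"]]

def ml_additional_parameters(vector: list, history: list) -> list:
    """ML module: predict extra search parameters from historical search data."""
    return history[-1:] if history else []

def llm_dissect(request: str, extra: list):
    """Second LLM: separate criteria into quantitative and qualitative elements."""
    tokens = request.split(",") + extra
    quantitative = [t for t in tokens if any(c.isdigit() for c in t)]
    qualitative = [t for t in tokens if not any(c.isdigit() for c in t)]
    return quantitative, qualitative

def llm_to_sql(quantitative: list) -> str:
    """Third LLM: transform quantitative criteria into a structured query."""
    where = " AND ".join(q.strip().replace(" ", "_") for q in quantitative)
    return f"SELECT * FROM listings WHERE {where}" if where else "SELECT * FROM listings"

def llm_semi_structured_search(qualitative: list, repository: list) -> list:
    """Fourth LLM: match qualitative criteria against a semi-structured repository."""
    return [doc for doc in repository
            if any(q.strip().lower() in doc["tags"] for q in qualitative)]

def llm_synthesize(findings: list) -> str:
    """Fifth LLM: condense processed findings into a succinct summary."""
    return f"Found {len(findings)} matching result(s)."

def interactive_search(request: str, history: list, repository: list):
    info = llm_evaluate_relevance(request)
    vector = derive_feature_vector(info)
    extra = ml_additional_parameters(vector, history)
    quantitative, qualitative = llm_dissect(request, extra)
    sql = llm_to_sql(quantitative)          # would be sent to the database
    findings = llm_semi_structured_search(qualitative, repository)
    return llm_synthesize(findings), sql
```

For example, a request such as "modern farmhouse, 4 bedrooms" splits into a qualitative criterion ("modern farmhouse") routed to the semi-structured search and a quantitative criterion ("4 bedrooms") routed to the structured query builder.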

Inventors

  • Dan Ingold
  • William George Lawrence Ingold
  • Charles DiFatta, III

Assignees

  • Dan Ingold
  • William George Lawrence Ingold
  • Charles DiFatta, III

Dates

Publication Date
2026-05-12
Application Date
2024-12-10

Claims (20)

  1. A system for execution of interactive searches based on processing of user search queries data by Large Language Models (LLMs), comprising: a processor of a Human-Machine Interface (HMI) server node configured to host a network of LLMs and at least one machine learning module (ML) and connected to at least one user-entity node over a network; and a memory on which are stored machine-readable instructions that, when executed by the processor, cause the processor to: receive a search request input data from the at least one user-entity node; evaluate, by a first dedicated LLM, relevance of the search request input data by discerning between primary and secondary information; responsive to evaluation by the first dedicated LLM, derive classifying features from the primary and secondary information and generate a feature vector based on the classifying features; ingest the feature vector into the ML module configured to extract additional search parameters from a predictive search model based on historical search data associated with the at least one user-entity node; dissect, by a second dedicated LLM, the search request input data and the additional search parameters to separate the data into qualitative and quantitative criteria elements based on the primary and the secondary information; transform, by a third dedicated LLM, the quantitative criteria elements into structured queries for a database; process, by a fourth dedicated LLM, the qualitative criteria elements by searching through a semi-structured data repository; and synthesize, by a fifth LLM, processed search findings into a succinct human-language summary.
  2. The system of claim 1, wherein the machine-readable instructions, when executed by the processor, cause the processor to prepare follow-up questions and additional information by a sixth dedicated LLM.
  3. The system of claim 2, wherein the machine-readable instructions, when executed by the processor, cause the processor to communicate the succinct human-language summary, the follow-up questions, and the additional information to the at least one user-entity node to instantiate an interactive dialog with a user associated with the at least one user-entity node.
  4. The system of claim 3, wherein the machine-readable instructions, when executed by the processor, cause the processor to dynamically refine the search request input data through ongoing feedback and specific requests using the additional search parameters.
  5. The system of claim 1, wherein the machine-readable instructions, when executed by the processor, cause the processor to evaluate the search request input data comprising visual inputs by a dedicated visual input processing LLM.
  6. The system of claim 5, wherein the visual inputs are any of: digital image data; and video data.
  7. The system of claim 5, wherein the machine-readable instructions, when executed by the processor, cause the processor to infer design attributes from the visual inputs by the dedicated visual input processing LLM.
  8. The system of claim 5, wherein the machine-readable instructions, when executed by the processor, cause the processor to supplement a search criterion by the design attributes.
  9. The system of claim 8, wherein the machine-readable instructions, when executed by the processor, cause the processor to fill in potential gaps in textual descriptions by the design attributes.
  10. The system of claim 1, wherein the machine-readable instructions, when executed by the processor, cause the processor to continuously monitor the search request input data to determine if at least one value of search parameters deviates from a previous value of a corresponding search parameter by a margin exceeding a pre-set threshold value.
  11. The system of claim 10, wherein the machine-readable instructions, when executed by the processor, cause the processor to, responsive to the at least one value of the search parameters deviating from the previous value of the corresponding search parameter by the margin exceeding the pre-set threshold value, generate an updated feature vector for generation of additional search parameters to produce a succinct human-language summary by the fifth dedicated LLM.
  12. The system of claim 1, wherein the machine-readable instructions, when executed by the processor, further cause the processor to record the human-language summary synthesized by the fifth dedicated LLM, along with the search request input data, on a permissioned blockchain ledger.
  13. The system of claim 1, wherein the machine-readable instructions, when executed by the processor, further cause the processor to run the plurality of LLMs configured to perform fuzzy semantic matching for capturing a broader range of intentions of a user associated with the at least one user-entity node beyond explicit search terms derived from the search request input data.
  14. A method for execution of interactive searches based on processing of user search queries data by Large Language Models (LLMs), comprising: receiving, by a Human-Machine Interface (HMI) server node configured to host dedicated LLMs, a search request input data from at least one user-entity node; evaluating, by a first dedicated LLM, relevance of the search request input data by discerning between primary and secondary information; responsive to evaluation by the first dedicated LLM, deriving classifying features from the primary and secondary information and generating, by the HMI server node, a feature vector based on the classifying features; ingesting, by the HMI server node, the feature vector into a machine learning (ML) module configured to extract additional search parameters from a predictive search model based on historical search data associated with the at least one user-entity node; dissecting, by a second dedicated LLM, the search request input data and the additional search parameters to separate the data into qualitative and quantitative criteria elements based on the primary and the secondary information; transforming, by a third dedicated LLM, the quantitative criteria elements into structured queries for a database; processing, by a fourth dedicated LLM, the qualitative criteria elements by searching through a semi-structured data repository; and synthesizing, by a fifth LLM, processed search findings into a succinct human-language summary.
  15. The method of claim 14, further comprising preparing follow-up questions and additional information by a sixth dedicated LLM.
  16. The method of claim 15, further comprising continuously monitoring the search request input data to determine if at least one value of search parameters deviates from a previous value of a corresponding search parameter by a margin exceeding a pre-set threshold value.
  17. The method of claim 16, further comprising, responsive to the at least one value of the search parameters deviating from the previous value of the corresponding search parameter by the margin exceeding the pre-set threshold value, generating an updated feature vector for generation of additional search parameters to produce a succinct human-language summary by the fifth dedicated LLM.
  18. The method of claim 14, further comprising dynamically refining the search request input data through ongoing feedback and specific requests using the additional search parameters.
  19. The method of claim 14, further comprising inferring design attributes from visual inputs by a dedicated visual input processing LLM.
  20. A non-transitory computer-readable medium comprising instructions that, when read by a processor, cause the processor to perform: receiving, by a Human-Machine Interface (HMI) server node configured to host dedicated LLMs, a search request input data from at least one user-entity node; evaluating, by a first dedicated LLM, relevance of the search request input data by discerning between primary and secondary information; responsive to evaluation by the first dedicated LLM, deriving classifying features from the primary and secondary information and generating, by the HMI server node, a feature vector based on the classifying features; ingesting, by the HMI server node, the feature vector into a machine learning (ML) module configured to extract additional search parameters from a predictive search model based on historical search data associated with the at least one user-entity node; dissecting, by a second dedicated LLM, the search request input data and the additional search parameters to separate the data into qualitative and quantitative criteria elements based on the primary and the secondary information; transforming, by a third dedicated LLM, the quantitative criteria elements into structured queries for a database; processing, by a fourth dedicated LLM, the qualitative criteria elements by searching through a semi-structured data repository; and synthesizing, by a fifth dedicated LLM, processed search findings into a succinct human-language summary.
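The deviation monitoring of claims 10-11 (mirrored by method claims 16-17) can be illustrated with a short sketch: if any search-parameter value drifts from its previous value by more than a pre-set threshold, an updated feature vector is generated. This is a minimal sketch under two assumptions not fixed by the claims: parameters are numeric, and the margin is an absolute difference.

```python
# Illustrative sketch of the deviation monitoring in claims 10-11:
# when a parameter's value deviates from its previous value by a margin
# exceeding a pre-set threshold, an updated feature vector is produced.

def exceeds_threshold(current: dict, previous: dict, threshold: float) -> bool:
    """Return True if any parameter deviates from its prior value by more
    than the pre-set threshold (absolute margin assumed)."""
    return any(abs(current[k] - previous.get(k, current[k])) > threshold
               for k in current)

def monitor(current: dict, previous: dict, threshold: float = 0.5):
    """If the deviation margin is exceeded, return an updated feature
    vector for generation of additional search parameters; else None."""
    if exceeds_threshold(current, previous, threshold):
        return [current[k] for k in sorted(current)]  # updated feature vector
    return None  # no significant change; keep the existing vector
```

In a full system the returned vector would be re-ingested into the ML module so the fifth dedicated LLM can produce a refreshed summary; here it is simply returned to the caller.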

Description

CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to Provisional Patent Application No. 63/606,818, entitled “Interactive Multi-Mode Human-Machine Interface for Enhanced Product Searches Using Large Language Models,” filed on Dec. 6, 2023, and incorporated herein in its entirety.

FIELD OF DISCLOSURE

The present disclosure generally relates to product search applications, and more particularly, to an AI-based automated system and method for real-time searches based on analytics of search inputs and predictive analytics of user-related data.

BACKGROUND

In web-based product searches, a common challenge buyers face is the difficulty of identifying qualified suppliers and consolidating product offerings, particularly in markets characterized by numerous smaller suppliers. Often, these smaller suppliers lack advanced search features on their websites, resulting in subpar search capabilities. On the other hand, large-scale search engines that index these websites tend to yield overwhelming results, making it challenging for users to effectively narrow down their options. While aggregating products from smaller suppliers, even dedicated product search platforms often fall short in supporting qualitative and quantitative search parameters. A prime example of this challenge can be observed in the search for semi-custom-built homes. Local builders who frequently offer such homes typically maintain basic websites without sophisticated search tools. Major real estate search aggregators like the Multiple Listing Service (MLS) or Zillow only list already built or planned homes on specific lots, omitting designs for unbuilt homes. This situation further complicates the search for non-local buyers trying to identify and evaluate builders and home designs.
Notably, the search for homes, especially custom- and semi-custom-built ones, requires consideration of a blend of hard criteria (quantitative factors such as the number of bedrooms), soft criteria (qualitative aspects like design features), and visual elements (e.g., roofline style). Current systems, including large-scale real estate search engines, lack these comprehensive capabilities. Accordingly, a system and method for AI-based automated real-time execution of interactive searches based on processing of user search queries data by Large Language Models (LLMs) are desired.

BRIEF OVERVIEW

This brief overview is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This brief overview is not intended to identify key features or essential features of the claimed subject matter. Nor is this brief overview intended to be used to limit the claimed subject matter's scope.

One embodiment of the present disclosure provides a system for interactive searches based on user queries data and a plurality of Large Language Models (LLMs), including a processor of a Human-Machine Interface (HMI) server node configured to host a network of LLMs and at least one machine learning module (ML) and connected to at least one user-entity node over a network, and a memory on which are stored machine-readable instructions that, when executed by the processor, cause the processor to: receive a search request input data from the at least one user-entity node; evaluate, by a first dedicated LLM, relevance of the search request input data by discerning between primary and secondary information; responsive to evaluation by the first dedicated LLM, derive classifying features from the primary and secondary information and generate a feature vector based on the classifying features; ingest the feature vector into the ML module configured to extract additional search parameters from a predictive search model based on historical search data associated with the at least one user-entity node; dissect, by a second dedicated LLM, the search request input data and the additional search parameters to separate the data into qualitative and quantitative criteria elements based on the primary and the secondary information; transform, by a third dedicated LLM, the quantitative criteria elements into structured queries for a database; process, by a fourth dedicated LLM, the qualitative criteria elements by searching through a semi-structured data repository; and synthesize, by a fifth LLM, processed search findings into a succinct human-language summary.

Another embodiment of the present disclosure provides a method that includes one or more of: receiving a search request input data from the at least one user-entity node; evaluating, by a first dedicated LLM, relevance of the search request input data by discerning between primary and secondary information; responsive to evaluation by the first dedicated LLM, deriving classifying features from the primary and secondary information and generating a feature vector based on the classifying features; ingesting the feature vector into the ML module configured to extract ad