US-12620322-B1 - Apparatus and method for generating a learning environment comprising an interactive, multi-window graphical user interface

US 12620322 B1

Abstract

An apparatus and method for generating a learning environment comprising an interactive, multi-window graphical user interface. The apparatus includes at least a processor and a memory communicatively connected to the at least a processor. The memory instructs the processor to generate a graphical user interface, wherein the graphical user interface comprises a first window comprising an interactive workspace and a second window communicatively connected to the first window, display the graphical user interface using a downstream device, receive a first query associated with user input, wherein the first query comprises multimodal data, generate return data as a function of processed multimodal data, modify, using a natural language processor, the return data as a function of an attribute of the processed multimodal data to generate a user specific output, and display the user specific output.
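The pipeline the abstract describes — receive a multimodal query, generate return data with a model, then modify that return data with a natural language processor based on an attribute of the processed input — can be sketched as follows. This is a minimal illustration, not the patent's implementation; every name (`process_query`, `estimate_reading_level`, the average-word-length heuristic) is a hypothetical stand-in.

```python
# Hypothetical sketch of the abstract's query-processing pipeline.
# All function names and heuristics are illustrative assumptions.

def estimate_reading_level(text: str) -> str:
    """Crude complexity proxy: classify text by average word length."""
    words = text.split()
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    return "advanced" if avg_len > 6 else "basic"

def generate_return_data(query: dict) -> str:
    """Stand-in for the machine learning model's raw answer."""
    return f"Explanation for: {query['text']}"

def tailor_output(return_data: str, attribute: str) -> str:
    """Stand-in for the natural language processor that adapts the
    raw answer to an attribute of the processed multimodal data."""
    if attribute == "basic":
        return return_data + " (simplified wording)"
    return return_data + " (detailed wording)"

def process_query(query: dict) -> str:
    level = estimate_reading_level(query["text"])
    raw = generate_return_data(query)
    return tailor_output(raw, level)

print(process_query({"text": "what is photosynthesis", "image": None}))
```

In a real system the second window of the interface would render the returned string, and the "attribute" could be any property derived from the processed multimodal data, not only text complexity.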

Inventors

  • Michael Everest

Assignees

  • edYou Technologies Inc.

Dates

Publication Date
2026-05-05
Application Date
2025-02-03

Claims (20)

  1. An apparatus for generating a learning environment comprising an interactive, multi-window graphical user interface, wherein the apparatus comprises: at least a computing device, wherein the computing device comprises: a memory; and at least a processor communicatively connected to the memory, wherein the memory contains instructions configuring the at least a processor to: generate a graphical user interface, wherein the graphical user interface comprises: a first window comprising an interactive workspace, wherein the interactive workspace is configured to receive user input comprising one or more of textual data and image data, wherein the first window is configured to provide a drawing input portion, wherein the drawing input portion is configured to receive a drawing input comprising at least user-generated visual elements from a user; and a second window communicatively connected to the first window, wherein the second window comprises an output element that is configured to interface with a machine learning model; display the graphical user interface using a downstream device; receive, through the graphical user interface of the downstream device, a first query associated with user input, wherein the first query comprises multimodal data; generate, using the machine learning model, return data as a function of processed multimodal data, which comprises analyzing the complexity of the user's text input to determine the user's educational level; modify, using a natural language processor, the return data as a function of an attribute of the processed multimodal data to generate a user specific output; and display, using the second window of the graphical user interface, the user specific output, wherein displaying the user specific output comprises providing at least a tailored learning support program based on the user's determined educational level and a hint, wherein providing the at least a hint comprises highlighting an error made by the user.
  2. The apparatus of claim 1, wherein the multimodal data comprises at least an event corresponding to an event handler, wherein the at least an event comprises one or more of: uploading the image data of the multimodal data; and submitting the textual data of the multimodal data.
  3. The apparatus of claim 1, further comprising an application programming interface compatible with the interactive workspace, wherein the application programming interface is configured to: receive and transmit the multimodal data between the downstream device and the apparatus; and provide dynamic updates of the user specific output within the interactive workspace.
  4. The apparatus of claim 1, wherein the machine learning model comprises a large language model, wherein the large language model is configured to: receive a second query; adjust the user specific output as a function of the second query; and generate an alternative user specific output.
  5. The apparatus of claim 1, wherein the first window is configured to provide a text input field, wherein the text input field is configured to receive the textual data from a user.
  6. The apparatus of claim 5, wherein the second window is arranged adjacent to the first window, wherein the second window is configured to: display the user specific output; and dynamically provide updates to the user specific output based on real-time user interactions.
  7. The apparatus of claim 1, further comprising a chatbot, wherein the chatbot is configured to: receive a plurality of user queries; respond to the plurality of user queries, wherein responding to the plurality of user queries comprises: retrieving a plurality of user specific data of the user input; analyzing sentiment data of the plurality of user specific data; and generating, using the natural language processor, custom responses in the second window as a function of the user specific data and the sentiment data.
  8. The apparatus of claim 1, wherein the machine learning model is iteratively trained using machine learning model training data, wherein the machine learning model training data comprises historical return data corresponding to historical processed multimodal data.
  9. The apparatus of claim 1, further configured to generate, using an image processor, the processed multimodal data by identifying, using edge detection techniques, features of the multimodal data.
  10. The apparatus of claim 1, wherein the processor is further configured to: receive user feedback; and refine the machine learning model by: identifying patterns between a target output and the user specific output; calculating a score for the user specific output based on the user feedback and the patterns; and updating a parameter of the machine learning model as a function of the score.
  11. A method for generating a learning environment comprising an interactive, multi-window graphical user interface, wherein the method comprises: generating a graphical user interface using at least a processor, wherein the graphical user interface comprises: a first window comprising an interactive workspace, wherein the interactive workspace is configured to receive user input comprising one or more of textual data and image data, wherein the first window is configured to provide a drawing input portion, wherein the drawing input portion is configured to receive a drawing input comprising at least user-generated visual elements from a user; and a second window communicatively connected to the first window, wherein the second window comprises an output element that is configured to interface with a machine learning model; displaying the graphical user interface using a downstream device; receiving, through the graphical user interface of the downstream device, a first query associated with user input, wherein the first query comprises multimodal data; generating, using the machine learning model, return data as a function of processed multimodal data, which comprises analyzing the complexity of the user's text input to determine the user's educational level; modifying, using a natural language processor, the return data as a function of an attribute of the processed multimodal data to generate a user specific output; and displaying, using the second window of the graphical user interface, the user specific output, wherein displaying the user specific output comprises providing at least a tailored learning support program based on the user's determined educational level and a hint, wherein providing the at least a hint comprises highlighting an error made by the user.
  12. The method of claim 11, wherein the multimodal data comprises at least an event corresponding to an event handler, wherein the at least an event comprises one or more of: uploading the image data of the multimodal data; and submitting the textual data of the multimodal data.
  13. The method of claim 11, further comprising an application programming interface compatible with the interactive workspace, wherein the application programming interface is configured to: receive and transmit the multimodal data to the downstream device; and provide dynamic updates of the user specific output within the interactive workspace.
  14. The method of claim 11, wherein the machine learning model comprises a large language model, wherein the large language model is configured to: receive a second query; adjust the user specific output as a function of the second query; and generate an alternative user specific output.
  15. The method of claim 11, wherein the first window is configured to provide a text input field, wherein the text input field is configured to receive the textual data from a user.
  16. The method of claim 15, wherein the second window is arranged adjacent to the first window, wherein the second window is configured to: display the user specific output; and dynamically provide updates to the user specific output based on real-time user interactions.
  17. The method of claim 11, further comprising a chatbot, wherein the chatbot is configured to: receive a plurality of user queries; respond to the plurality of user queries, wherein responding to the plurality of user queries comprises: retrieving a plurality of user specific data of the user input; analyzing sentiment data of the plurality of user specific data; and generating, using the natural language processor, custom responses in the second window as a function of the user specific data and the sentiment data.
  18. The method of claim 11, wherein the machine learning model is iteratively trained using machine learning model training data, wherein the machine learning model training data comprises historical return data corresponding to historical processed multimodal data.
  19. The method of claim 11, further comprising generating, using an image processor, the processed multimodal data by identifying, using edge detection techniques, features of the multimodal data.
  20. The method of claim 11, further comprising: receiving, using the at least a processor, user feedback; and refining, using the at least a processor, the machine learning model by: identifying patterns between a target output and the user specific output; calculating a score for the user specific output based on the user feedback and the patterns; and updating a parameter of the machine learning model as a function of the score.
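Claims 1 and 11 recite providing a hint by highlighting an error made by the user. The claims do not specify how errors are located, so the sketch below is one plausible approach under stated assumptions: it diffs the user's answer against an expected answer with Python's standard `difflib` and marks the first mismatching token. The `>>...<<` markers and the function name `highlight_error` are hypothetical, standing in for whatever highlighting the GUI would render.

```python
# Illustrative sketch of "highlighting an error made by the user"
# (claims 1 and 11). Token-level diffing is an assumption; the
# patent does not specify an error-detection technique.
import difflib

def highlight_error(user_answer: str, expected: str) -> str:
    """Wrap the first token that differs from the expected answer
    in >> << markers so a GUI could render it highlighted."""
    user_tokens = user_answer.split()
    expected_tokens = expected.split()
    matcher = difflib.SequenceMatcher(None, user_tokens, expected_tokens)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal" and i1 < len(user_tokens):
            user_tokens[i1] = f">>{user_tokens[i1]}<<"
            break
    return " ".join(user_tokens)

print(highlight_error("2 + 2 = 5", "2 + 2 = 4"))  # → 2 + 2 = >>5<<
```

A tailored learning support program could pair such a highlight with remedial material chosen for the user's determined educational level.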
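Claims 2 and 12 describe multimodal data carrying events, such as uploading image data or submitting textual data, that correspond to an event handler. A conventional way to realize this is a dispatch table mapping event types to handler functions; the registry decorator and event-type names below are illustrative assumptions, not the patent's design.

```python
# Hypothetical event-handler dispatch for the events named in
# claims 2 and 12 (image upload, text submission). The registry
# pattern and all names are illustrative.

handlers = {}

def on(event_type):
    """Decorator that registers a handler for one event type."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("image_upload")
def handle_image(payload):
    # The raw image bytes would be passed to the image processor.
    return ("image", len(payload))

@on("text_submit")
def handle_text(payload):
    # The textual data would be forwarded to the first query.
    return ("text", payload.strip())

def dispatch(event):
    """Route an event dict to its registered handler."""
    return handlers[event["type"]](event["payload"])

print(dispatch({"type": "text_submit", "payload": "  hello  "}))
```

New event types (for example, a drawing input from the first window's drawing portion) could be supported by registering one more handler, without touching `dispatch`.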
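Claims 7 and 17 have the chatbot analyze sentiment data from user specific data and generate custom responses as a function of it. As a rough illustration only, the sketch below uses a tiny word-list scorer; a production system would use a trained sentiment model, and the word lists and response strings here are invented for the example.

```python
# Illustrative stand-in for the sentiment-aware chatbot responses of
# claims 7 and 17. The lexicon approach and all strings are assumptions.

NEGATIVE = {"confused", "stuck", "frustrated", "hard"}
POSITIVE = {"great", "easy", "fun", "clear"}

def sentiment(text: str) -> str:
    """Score text by counting matches against tiny word lists."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score >= 0 else "negative"

def custom_response(query: str) -> str:
    """Shape the reply shown in the second window by sentiment."""
    if sentiment(query) == "negative":
        return "Let's slow down and go step by step."
    return "Great, let's keep going!"

print(custom_response("I am stuck and confused"))
```

The claim additionally conditions responses on retrieved user specific data; here that retrieval is elided to keep the sentiment branch visible.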
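Claims 9 and 19 recite identifying features of the multimodal data using edge detection techniques. The claims do not name a specific technique; a basic Sobel operator, shown below in pure Python on a 2-D list of pixel values, is one common choice and serves only as an example.

```python
# One example edge-detection technique (Sobel gradients) for the
# image-processing step of claims 9 and 19. The patent does not
# specify which edge detector is used.

def sobel_magnitude(img):
    """Approximate gradient magnitude (|gx| + |gy|) for a 2-D list
    of grayscale pixel values; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical edge between dark (0) and bright (9) columns:
img = [[0, 0, 9, 9]] * 4
edges = sobel_magnitude(img)
print(edges[1])  # strong response where the columns change
```

The resulting gradient map marks feature locations (here, the boundary between the dark and bright columns) that downstream processing could treat as features of the multimodal data.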
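Claims 10 and 20 describe a refinement loop: user feedback plus patterns between a target output and the user specific output yield a score, and a model parameter is updated as a function of that score. The scalar "model", the fixed blend weights, and the learning rate below are all illustrative assumptions used to make the loop concrete.

```python
# Hypothetical sketch of the feedback-driven refinement loop of
# claims 10 and 20. Scalar outputs, blend weights, and the learning
# rate are stand-ins for whatever the real model would use.

def score_output(target: float, produced: float, feedback: float) -> float:
    """Blend explicit user feedback (0..1) with closeness of the
    produced output to the target output (the 'patterns' signal)."""
    closeness = 1.0 / (1.0 + abs(target - produced))
    return 0.5 * feedback + 0.5 * closeness

def update_parameter(param: float, score: float, lr: float = 0.1) -> float:
    """Nudge the parameter in proportion to how far the score
    falls short of a perfect score of 1.0."""
    return param + lr * (1.0 - score)

s = score_output(target=10.0, produced=10.0, feedback=1.0)
print(s)                         # a perfect score: 1.0
print(update_parameter(0.5, s))  # unchanged at a perfect score: 0.5
```

In a real system the "parameter" would be the weights of the machine learning model of claim 1, and the update would typically be a gradient step rather than this scalar nudge.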

Description

FIELD OF THE INVENTION

The present invention generally relates to the field of interactive graphical user interfaces. In particular, the present invention is directed to an apparatus and a method for generating a learning environment comprising an interactive, multi-window graphical user interface.

BACKGROUND

Current learning systems often lack dynamic, interactive environments that effectively integrate multimodal inputs and provide real-time, user-specific outputs tailored to diverse learning needs. Additionally, existing systems fail to offer seamless, multi-window graphical user interfaces that allow users to simultaneously input, process, and visualize educational content, limiting engagement and personalization.

SUMMARY OF THE DISCLOSURE

In an aspect, an apparatus for generating a learning environment comprising an interactive, multi-window graphical user interface includes at least a processor and a memory communicatively connected to the at least a processor. The memory contains instructions configuring the processor to generate a graphical user interface, wherein the graphical user interface comprises a first window comprising an interactive workspace, wherein the interactive workspace is configured to receive user input comprising one or more of textual data and image data, and a second window communicatively connected to the first window, wherein the second window comprises an output element that is configured to interface with a machine learning model; display the graphical user interface using a downstream device; receive, through the graphical user interface of the downstream device, a first query associated with user input, wherein the first query comprises multimodal data; generate, using the machine learning model, return data as a function of processed multimodal data; modify, using a natural language processor, the return data as a function of an attribute of the processed multimodal data to generate a user specific output; and display, using the second window of the graphical user interface, the user specific output.

In another aspect, a method for generating a learning environment comprising an interactive, multi-window graphical user interface includes generating a graphical user interface, wherein the graphical user interface comprises a first window comprising an interactive workspace, wherein the interactive workspace is configured to receive user input comprising one or more of textual data and image data, and a second window communicatively connected to the first window, wherein the second window comprises an output element that is configured to interface with a machine learning model; displaying the graphical user interface using a downstream device; receiving, through the graphical user interface of the downstream device, a first query associated with user input, wherein the first query comprises multimodal data; generating, using the machine learning model, return data as a function of processed multimodal data; modifying, using a natural language processor, the return data as a function of an attribute of the processed multimodal data to generate a user specific output; and displaying, using the second window of the graphical user interface, the user specific output.

These and other aspects and features of non-limiting embodiments of the present invention will become apparent to those skilled in the art upon review of the following description of specific non-limiting embodiments of the invention in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

For the purpose of illustrating the invention, the drawings show aspects of one or more embodiments of the invention. However, it should be understood that the present invention is not limited to the precise arrangements and instrumentalities shown in the drawings, wherein:

FIG. 1 is a block diagram of an apparatus for generating a learning environment comprising an interactive, multi-window graphical user interface;
FIG. 2 is an exemplary illustration of a graphical user interface which includes a first window and a second window;
FIG. 3 is a block diagram of an exemplary machine-learning process;
FIG. 4 is a diagram of an exemplary embodiment of a neural network;
FIG. 5 is a diagram of an exemplary embodiment of a node of a neural network;
FIG. 6 is an exemplary diagram of a cryptographic accumulator;
FIG. 7 is a diagram of an exemplary embodiment of a chatbot;
FIG. 8 is a block diagram of an exemplary method for generating a learning environment comprising an interactive, multi-window graphical user interface;
FIG. 9 is a block diagram of a computing system that can be used to implement any one or more of the methodologies disclosed herein and any one or more portions thereof.

The drawings are not necessarily to scale and may be illustrated by phantom lines, diagrammatic representations and fragmentary views. In certain instances, details that are not necessary for an understanding of the embodiments or