US-12626437-B1 - Method and system for generating a digital experience

US12626437B1

Abstract

Disclosed is a method for generating a digital experience (DE) using a system. The method includes: receiving, in a dialogue field (DF) of a user interface (UI) rendered on a first user device, a first input signal (IS) for generating a first structured output (FSO); rendering a first canvas element (FCE) including information related to the FSO at a first location in a canvas workspace (CW) of the UI; receiving a second IS in the DF for generating a second structured output (SSO); rendering a second canvas element (SCE) including information related to the SSO at a second location in the CW; analyzing, by a spatial grammar interpreter of a server system (SS), the information in the FCE and the SCE together with their relative spatial arrangement, for generating a semantic representation (SR) of user intent; constructing, by an intent analysis pipeline of the SS, an intent graph (IG) based on the SR; and generating, by a code generation service of the SS, the DE based on the IG and rendering a preview of the DE on the UI.
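The abstract and claim 8 characterize the intent graph as a set of intent nodes and intent edges that carry confidence values and priority levels. The patent defines no code-level schema; the following is a minimal, hypothetical sketch of such a structure, in which every class name, field, and relation label is an illustrative assumption rather than anything recited in the specification:

```python
from dataclasses import dataclass, field

@dataclass
class IntentNode:
    """A concept, pattern, or requirement derived from one canvas element (illustrative)."""
    node_id: str
    label: str
    confidence: float  # e.g. a spatial-coherence / content-relevance score in [0, 1]
    priority: int

@dataclass
class IntentEdge:
    """An explicit, implicit, or derived relationship between two intent nodes (illustrative)."""
    source: str
    target: str
    relation: str      # hypothetical labels, e.g. "contains", "precedes"
    confidence: float

@dataclass
class IntentGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: IntentNode) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, edge: IntentEdge) -> None:
        # Only link nodes that already exist in the graph; silently drop dangling edges.
        if edge.source in self.nodes and edge.target in self.nodes:
            self.edges.append(edge)
```

A downstream code generation service, as described in claim 9, could then traverse such a graph to assemble a structured generation prompt; that traversal is not sketched here.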

Inventors

  • Benjamin Dickenson
  • Anton Gauffin

Assignees

  • OÜ BeyondOS

Dates

Publication Date
2026-05-12
Application Date
2025-10-14

Claims (15)

  1. A method for generating a digital experience using a system, the method comprising: receiving, in a dialogue field of a user interface rendered on a first user device, a first input signal and generating a first structured output based on the first input signal; rendering, in a canvas workspace of the user interface, a first canvas element including information related to the first structured output at a first location within the canvas workspace; receiving, in the dialogue field, a second input signal and generating a second structured output based on the second input signal; rendering, in the canvas workspace, a second canvas element including information related to the second structured output at a second location within the canvas workspace; analyzing, by a spatial grammar interpreter of a server system, the information contained in the first and second canvas elements together with their relative spatial arrangement, for generating a semantic representation of user intent, wherein the analyzing comprises: detecting proximity between the first and second canvas elements; determining hierarchy based on relative vertical or horizontal positioning of the first and second canvas elements; determining containment when one of the first and second canvas elements is spatially enclosed within a boundary of another; constructing, by an intent analysis pipeline of the server system, an intent graph based on the semantic representation; and generating, by a code generation service of the server system, the digital experience based on the intent graph and rendering a preview of the digital experience on the user interface.
  2. The method according to claim 1, wherein analyzing by the spatial grammar interpreter further comprises: generating multi-dimensional intent vectors representing spatial relationships between the first and second canvas elements; and applying confidence scoring based on spatial coherence, wherein the semantic representation of user intent is generated based on the detected proximity, the hierarchy, the containment, the intent vectors, and the confidence scoring.
  3. The method according to claim 1, wherein the canvas workspace further comprises at least one relational indicator defining a relationship between the first and second canvas elements, and wherein the analyzing by the spatial grammar interpreter further comprises interpreting the relational indicator as indicating whether: the first canvas element is a master element and the second canvas element is a dependent element; the second canvas element is a master element and the first canvas element is a dependent element; or the first and second canvas elements have equal weighting in the semantic representation of user intent.
  4. The method according to claim 1, wherein a second user device is connected to the server system through a communication network, and wherein the method further comprises: rendering the user interface including the canvas workspace on the second user device; receiving, from the second user device, at least one additional input signal and generating a further structured output based on the at least one additional input signal; rendering, in the canvas workspace, a further canvas element including information related to the further structured output, the further canvas element being positionable relative to the first and second canvas elements; analyzing, by the spatial grammar interpreter of the server system, the further canvas element together with the first and second canvas elements to update the semantic representation of user intent; and updating, by the code generation service of the server system, the digital experience rendered on the user interface, using the updated semantic representation.
  5. The method according to claim 4, wherein either the first user device or the second user device is used to: move any of the first, second, or further canvas elements within the canvas workspace to a new spatial location; and/or edit textual information contained in any of the first, second, or further canvas elements, for updating the canvas workspace, wherein the movement and/or the text editing is communicated to the server system through the communication network, and the spatial grammar interpreter reanalyzes the updated canvas workspace to regenerate or modify the semantic representation of user intent.
  6. The method according to claim 5, wherein the second user device is operated by an artificial intelligence agent or the artificial intelligence agent is embodied within the canvas workspace as a contributing entity, and wherein the artificial intelligence agent is configured to perform at least one of: create one or more additional canvas elements in the canvas workspace; move any of the canvas elements or the additional canvas elements to new spatial locations; or edit information contained in any of the canvas elements, wherein each contribution made by the artificial intelligence agent is communicated to the server system and incorporated into the analysis by the spatial grammar interpreter for updating the semantic representation of user intent.
  7. The method according to claim 6, wherein the artificial intelligence agent operates according to a predefined specialization defining a contribution domain within the canvas workspace, the predefined specialization being selected from at least one of: user interface and user experience design; data logic and integration; content generation; game design or level design; visual asset creation; and user interaction behavior, wherein the artificial intelligence agent contributes canvas elements, modifications, or annotations corresponding to its predefined specialization, and the contributions are analyzed together with at least the first and second canvas elements, for updating the semantic representation of user intent.
  8. The method according to claim 1, wherein constructing the intent graph based on the semantic representation comprises: generating, for each canvas element, a plurality of intent nodes representing elements, concepts, patterns, or requirements derived from the semantic representation; defining, between the plurality of intent nodes, a plurality of intent edges representing explicit, implicit, or derived relationships corresponding to a spatial grammar interpretation; assigning to each intent node and each intent edge a confidence value and a priority level based on spatial coherence and content relevance; and detecting recurring patterns within the intent graph to refine or expand the semantic representation of user intent.
  9. The method according to claim 1, wherein generating the digital experience based on the intent graph comprises: transforming the intent graph into a structured generation prompt defining functional, visual, and logical requirements; selecting a generation strategy corresponding to a type of digital experience to be produced; generating an application structure, component definitions, behavioral logic, and associated data models based on the structured generation prompt; assembling the generated components into an executable representation of the digital experience; and rendering, in the user interface, the executable representation as the preview of the generated digital experience for user inspection and further modification.
  10. A system for generating a digital experience, the system comprising: a first user device; and a server system connected to the first user device through a communication network, the server system including a language model and a software development environment, wherein a user interface is rendered on the first user device, the user interface comprising a dialogue field configured to receive input signals and to generate structured output based on the input signals, and a canvas workspace configured to render a first canvas element including information related to a first structured output at a first location within the canvas workspace, wherein the first structured output is generated based on a first input signal received in the dialogue field, and to render a second canvas element including information related to a second structured output at a second location within the canvas workspace, wherein the second structured output is generated based on a second input signal received in the dialogue field; a spatial grammar interpreter of the server system configured to analyze the information contained in the first and second canvas elements together with their relative spatial arrangement to generate a semantic representation of user intent by: detecting proximity between the first and second canvas elements; determining hierarchy based on relative vertical or horizontal positioning of the first and second canvas elements; determining containment when one of the first and second canvas elements is spatially enclosed within a boundary of another; an intent analysis pipeline of the server system configured to construct an intent graph based on the semantic representation; and a code generation service of the server system configured to generate the digital experience based on the intent graph and to render a preview of the digital experience on the user interface.
  11. The system according to claim 10, wherein the spatial grammar interpreter is further configured to: generate multi-dimensional intent vectors representing spatial relationships between the first and second canvas elements; and apply confidence scoring based on spatial coherence.
  12. The system according to claim 10, wherein the canvas workspace further comprises at least one relational indicator defining a relationship between the first and second canvas elements, and wherein the spatial grammar interpreter is further configured to interpret the relational indicator as indicating whether: the first canvas element is a master element and the second canvas element is a dependent element; the second canvas element is a master element and the first canvas element is a dependent element; or the first and second canvas elements have equal weighting in the semantic representation of user intent.
  13. The system according to claim 10, further comprising a second user device connected to the server system through the communication network, wherein the user interface including the canvas workspace is rendered on the second user device, and wherein: the second user device is configured to provide at least one additional input signal to generate a further structured output; the canvas workspace is configured to render a further canvas element including information related to the further structured output, the further canvas element being positionable relative to the first and second canvas elements; the spatial grammar interpreter of the server system is configured to analyze the further canvas element together with the first and second canvas elements to update the semantic representation of user intent; and the code generation service of the server system is configured to update the digital experience rendered on the user interface, using the updated semantic representation.
  14. The system according to claim 13, wherein the second user device is operated by an artificial intelligence agent or the artificial intelligence agent is embodied within the canvas workspace as a contributing entity, and wherein the artificial intelligence agent is configured to perform at least one of: create one or more additional canvas elements in the canvas workspace; move any of the canvas elements or the additional canvas elements to new spatial locations; or edit information contained in any of the canvas elements, wherein each contribution made by the artificial intelligence agent is communicated to the server system and incorporated into the analysis by the spatial grammar interpreter to update the semantic representation of user intent.
  15. A computer program product comprising program code stored on a non-transitory computer-readable medium, the program code being executable by at least one processor of a server system and/or a user device to generate a digital experience using a system by: receiving, in a dialogue field of a user interface rendered on a first user device, a first input signal and generating a first structured output based on the first input signal; rendering, in a canvas workspace of the user interface, a first canvas element including information related to the first structured output at a first location within the canvas workspace; receiving, in the dialogue field, a second input signal and generating a second structured output based on the second input signal; rendering, in the canvas workspace, a second canvas element including information related to the second structured output at a second location within the canvas workspace; analyzing, by a spatial grammar interpreter of a server system, the information contained in the first and second canvas elements together with their relative spatial arrangement, for generating a semantic representation of user intent, wherein the analyzing comprises: detecting proximity between the first and second canvas elements; determining hierarchy based on relative vertical or horizontal positioning of the first and second canvas elements; determining containment when one of the first and second canvas elements is spatially enclosed within a boundary of another; constructing, by an intent analysis pipeline of the server system, an intent graph based on the semantic representation; and generating, by a code generation service of the server system, the digital experience based on the intent graph and rendering a preview of the digital experience on the user interface.
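The three spatial checks recited in claim 1 (proximity, hierarchy, containment) can be illustrated with a minimal sketch over axis-aligned bounding boxes. The patent publishes no reference code, so every name, the coordinate model, and the distance threshold below are assumptions introduced for illustration only; an actual spatial grammar interpreter as claimed would combine such checks with the multi-dimensional intent vectors and confidence scoring of claim 2:

```python
from dataclasses import dataclass

@dataclass
class CanvasElement:
    """Hypothetical axis-aligned bounding box of a canvas element (x, y = top-left corner)."""
    x: float
    y: float
    width: float
    height: float

def detect_proximity(a: CanvasElement, b: CanvasElement, threshold: float = 50.0) -> bool:
    """Elements are 'proximate' when the gap between their boxes is within a threshold."""
    gap_x = max(0.0, max(a.x, b.x) - min(a.x + a.width, b.x + b.width))
    gap_y = max(0.0, max(a.y, b.y) - min(a.y + a.height, b.y + b.height))
    return (gap_x ** 2 + gap_y ** 2) ** 0.5 <= threshold

def detect_hierarchy(a: CanvasElement, b: CanvasElement) -> str:
    """Derive an ordering hint from relative vertical positioning (one possible rule)."""
    if a.y + a.height <= b.y:
        return "a-above-b"
    if b.y + b.height <= a.y:
        return "b-above-a"
    return "peer"

def detect_containment(outer: CanvasElement, inner: CanvasElement) -> bool:
    """True when `inner` is fully enclosed within `outer`'s boundary."""
    return (outer.x <= inner.x and
            outer.y <= inner.y and
            inner.x + inner.width <= outer.x + outer.width and
            inner.y + inner.height <= outer.y + outer.height)
```

For example, a header box at the top of the workspace and a body box 20 pixels below it would register as proximate, with the header ranked above the body, while a button box drawn inside a larger panel box would register as contained.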

Description

TECHNICAL FIELD

The present disclosure relates to a method for generating a digital experience using a system. The present disclosure further relates to a system for generating a digital experience based on user input. The present disclosure also relates to a computer program product for generating a digital experience based on user input.

BACKGROUND

The field of digital experience generation has evolved toward increasingly automated and intelligent systems that enable users to design, prototype, and deploy software applications with minimal technical intervention. Conventional user interface builders and design tools often rely on manual drag-and-drop workflows that require precise positioning and predefined templates. However, such approaches are time-consuming, error-prone, and poorly suited for adaptive or large-scale generation of user interfaces. Existing conversational assistants have introduced natural language interactions for application design, yet these conversational assistants typically lack spatial awareness and cannot interpret a layout or relationships among visual elements presented on a design canvas. As a result, semantic inconsistencies frequently arise between a user's verbal descriptions and the spatial representations used in the user interface.

In modern Artificial Intelligence (AI)-assisted design environments, attempts have been made to couple large language models with visual prototyping tools. Typically, these implementations interpret text commands to generate interface components but fail to integrate the spatial representations or to maintain a persistent semantic model of the design context. Other conventional systems focus on generating code directly from textual input without providing an interactive visual medium that allows users to refine or visualize the evolving digital experience. This creates a disconnect between human conceptualization and machine interpretation, leading to inefficient iteration cycles and ambiguous outcomes.

Furthermore, many current solutions depend heavily on predefined component libraries and rule-based logic that limit their adaptability to novel design intents. Current tools seldom maintain a consistent representation of user intent across multiple modalities: verbal, textual, and spatial. Consequently, users must repeatedly clarify their goals through trial-and-error interactions, resulting in fragmented workflows and limited creativity. Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks.

SUMMARY

The aim of the present disclosure is to provide a method, a system, and a computer program product that enable the generation of a digital experience through intelligent interpretation of user input and spatial context, thereby addressing the limitations of existing systems that lack semantic integration between user intent and executable application design. The aim of the disclosure is achieved by the method, the system, and the computer program product as defined in the appended independent claims to which reference is made. Advantageous features are set out in the appended dependent claims. The embodiments of the present disclosure substantially improve the accuracy, coherence, and responsiveness of digital experience generation by bridging natural language understanding with spatial reasoning. Additional aspects, advantages, features, and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative embodiments construed in conjunction with the appended claims that follow.

Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of the words, for example "comprising" and "comprises", mean "including but not limited to", and do not exclude other components, integers or steps. Moreover, the singular encompasses the plural unless the context otherwise requires: in particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams, wherein:

FIG. 1 is a schematic illustration of a system, in accordance with an embodiment of the present disclosure;

FIG. 2 shows an illustration of an exemplary user interface rendered on a first user device or a second user device of FIG. 1, in accordance with an embodiment of the present disclosure; and

FIGS. 3A and 3B collectively illustrate steps of a method for generating a digital experience using a system, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS

The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. In a first aspect, the present disclosure provides