CN-121986326-A - Custom interpreters for executing computer code generated by large language models
Abstract
The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating context engine output by utilizing an interpreter specially constructed for executing code generated by a large language model. For example, the disclosed system generates executable computer code for responding to queries by utilizing a large language model. Furthermore, the disclosed system executes the model-generated computer code using an interpreter that is integrated with a context engine and that includes interchangeable logic that is interchangeable across multiple executors. Further, the disclosed system may generate a first context engine output by implementing the interpreter at a first executor as part of executing the computer code. Additionally, the disclosed system may generate a second context engine output by implementing the interpreter at a second executor.
Inventors
- J.D. JOHNSON
Assignees
- 卓普网盘股份有限公司
Dates
- Publication Date: 2026-05-05
- Application Date: 2024-08-23
- Priority Date: 2023-10-06
Claims (20)
- 1. A computer-implemented method comprising: in response to a query received from a client device, interacting with a large language model utilizing a context engine to generate executable computer code for responding to the query; executing the computer code with an interpreter integrated with the context engine and comprising interchangeable logic that is interchangeable across multiple executors; generating a first context engine output by implementing the interpreter to interpret the computer code from the context engine to a first executor as part of executing the computer code with the interpreter; and generating a second context engine output by implementing the interpreter to interpret the computer code from the context engine to a second executor as part of executing the computer code with the interpreter.
- 2. The computer-implemented method of claim 1, wherein the interpreter comprises explicitly defined types that define properties and functions of code expressions.
- 3. The computer-implemented method of claim 1, further comprising preventing the computer code from causing the interpreter to perform file system operations that read, write, or delete data stored for a user account.
- 4. The computer-implemented method of claim 1, wherein executing the computer code with the interpreter further comprises dynamically exposing a type defined within the interpreter corresponding to the computer code.
- 5. The computer-implemented method of claim 1, wherein the interchangeable logic of the interpreter facilitates: executing the computer code at the first executor with the interpreter for testing; and executing the computer code at the second executor with the interpreter for verification, without reconstructing the interpreter.
- 6. The computer-implemented method of claim 1, further comprising: generating, with the interpreter, an intermediate context engine output in response to a first interaction from the client device; receiving a second interaction from the client device; and generating, based on the second interaction, a modified context engine output from the intermediate context engine output utilizing the large language model.
- 7. The computer-implemented method of claim 1, further comprising persisting a state of the interpreter by: during execution of the computer code, storing, in a database, memory registry data associated with executing a function via the interpreter; storing, in the database, a call stack comprising data defining a computing environment of an executor of the multiple executors; and in response to a state persistence event, storing, in the database, an instruction pointer indicating a segment of the computer code executed by the interpreter.
- 8. The computer-implemented method of claim 7, further comprising resuming execution of the computer code after persisting the state by: obtaining the memory registry data, the call stack, and the instruction pointer from the database; and resuming execution of the computer code at the segment indicated by the instruction pointer.
- 9. The computer-implemented method of claim 1, further comprising: generating, with the interpreter, a first output in response to a first interaction from the client device; persisting a state of the interpreter corresponding to the first output; and in response to a second interaction from the client device, generating a second output based on the persisted state of the interpreter corresponding to the first output.
- 10. The computer-implemented method of claim 1, further comprising performing in-function garbage collection as part of executing the computer code.
- 11. A system comprising: at least one processor; and a non-transitory computer-readable medium storing instructions that, when executed by the at least one processor, cause the system to: in response to a query received from a client device, interact with a large language model utilizing a context engine to generate executable computer code for responding to the query; execute the computer code with an interpreter integrated with the context engine and comprising interchangeable logic that is interchangeable across multiple executors; generate a first context engine output by implementing the interpreter to interpret the computer code from the context engine to a first executor for testing as part of executing the computer code with the interpreter; and generate a second context engine output by implementing the interpreter to interpret the computer code from the context engine to a second executor for verification as part of executing the computer code with the interpreter.
- 12. The system of claim 11, wherein the interpreter is unable to process code expressions other than those of explicitly defined types that define attributes and functions of the code expressions.
- 13. The system of claim 11, further storing instructions that, when executed by the at least one processor, cause the system to utilize the interpreter to prevent a downstream model from performing one or more of: performing a network operation, determining contextual environment data for the first executor and the second executor, or creating a new sub-process.
- 14. The system of claim 11, further storing instructions that, when executed by the at least one processor, cause the system to dynamically expose a type defined within the interpreter by: exposing called data corresponding to the type as part of executing a function of the computer code that references the called data; and refraining from exposing non-called data corresponding to the type as part of executing the function of the computer code.
- 15. The system of claim 11, further storing instructions that, when executed by the at least one processor, cause the system to persist a state of the interpreter by: during execution of the computer code, storing, in a database, memory registry data associated with executing a function via the interpreter; storing, in the database, a call stack comprising data defining a computing environment of an executor of the multiple executors; and in response to a state persistence event, storing, in the database, an instruction pointer indicating a segment of the computer code executed by the interpreter.
- 16. A non-transitory computer-readable medium storing executable instructions that, when executed by at least one processor, cause the at least one processor to: in response to a query received from a client device, interact with a large language model utilizing a context engine to generate executable computer code for responding to the query; execute the computer code with an interpreter integrated with the context engine and comprising explicitly defined types that define attributes and functions of code expressions and interchangeable logic that is interchangeable across multiple executors; generate a first context engine output by implementing the interpreter to interpret the computer code from the context engine to a first executor as part of executing the computer code with the interpreter; and generate a second context engine output by implementing the interpreter to interpret the computer code from the context engine to a second executor as part of executing the computer code with the interpreter.
- 17. The non-transitory computer-readable medium of claim 16, further storing instructions that, when executed by the at least one processor, cause the at least one processor to: perform a first type check process prior to runtime for executing the computer code to prevent the interpreter from accessing data other than the explicitly defined types; and perform a second type check process at runtime for executing the computer code to prevent the interpreter from accessing data other than the explicitly defined types.
- 18. The non-transitory computer-readable medium of claim 16, further storing instructions that, when executed by the at least one processor, cause the at least one processor to: during execution of the computer code, store, in a database, memory registry data associated with executing a function via the interpreter; store, in the database, a call stack comprising data defining a computing environment of an executor of the multiple executors; and in response to a state persistence event, store, in the database, an instruction pointer indicating a segment of the computer code executed by the interpreter.
- 19. The non-transitory computer-readable medium of claim 16, further storing instructions that, when executed by the at least one processor, cause the at least one processor to perform in-function garbage collection as part of executing the computer code.
- 20. The non-transitory computer-readable medium of claim 16, further storing instructions that, when executed by the at least one processor, cause the at least one processor to: generate, with the interpreter, a first output in response to a first interaction from the client device; persist a state of the interpreter corresponding to the first output; and generate, utilizing the large language model, a second output based on the first output indicated by the persisted state of the interpreter.
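The state-persistence and resumption flow recited in claims 7, 8, 15, 18, and 20 can be sketched minimally as follows. This is an illustrative Python sketch only, not the patented implementation; the dict-backed store standing in for the database and all class and attribute names are assumptions.

```python
import json

class InterpreterState:
    """Sketch of persisting and resuming interpreter state (claims 7-8).

    A plain dict stands in for the database; all names are illustrative."""

    def __init__(self, db):
        self.db = db                  # key-value store standing in for the database
        self.registers = {}           # memory registry data for executing functions
        self.call_stack = []          # frames describing the executor's environment
        self.instruction_pointer = 0  # index of the code segment being executed

    def persist(self, session_id):
        # On a state persistence event, write the memory registry data,
        # call stack, and instruction pointer to the database.
        self.db[session_id] = json.dumps({
            "registers": self.registers,
            "call_stack": self.call_stack,
            "ip": self.instruction_pointer,
        })

    def resume(self, session_id):
        # Obtain the persisted state from the database and return the
        # instruction pointer so execution can continue at that segment.
        state = json.loads(self.db[session_id])
        self.registers = state["registers"]
        self.call_stack = state["call_stack"]
        self.instruction_pointer = state["ip"]
        return self.instruction_pointer
```

In this sketch, a later interaction from the client device would construct a fresh `InterpreterState` against the same store and call `resume` to pick up execution at the segment indicated by the stored instruction pointer.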
Description
Custom interpreters for executing computer code generated by large language models

Cross Reference to Related Applications
The present application claims priority to and the benefit of U.S. application Ser. No. 18/482,715, filed on Oct. 6, 2023, which is incorporated herein by reference in its entirety.

Background
Advances in computing devices and networking technology have spawned various innovations in machine learning and computer architecture. For example, local and web-based computing systems have been developed that utilize or implement computer code interpreters to execute computer code and produce results from programs written in various computer languages. Some existing interpreters can perform a variety of functions and adapt to a variety of programs written in their respective compatible computer languages. Despite these advances, however, existing interpreters suffer from a number of drawbacks, particularly in terms of security, efficiency, and flexibility.

As just mentioned, some existing computer code interpreters are not secure. More specifically, some interpreters predefine types when executing computer programs, such that large amounts of data (including object definitions and function definitions) may be exposed before any operation that calls for the data during execution. Because types are predefined in this way, some interpreters disclose type data during initialization and prior to execution, risking the disclosure of sensitive data and/or the performance of unsafe operations. Such unsafe operations include the impermissible reading, writing, and deleting of content items, as well as various network operations and the creation of new sub-processes performed by a downstream model or system.
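To make the contrast concrete, the following Python sketch illustrates the opposite of the predefined-type approach criticized above: a type's definition is exposed only at the moment executing code calls for it, and unsafe operations are refused outright. All names (`RestrictedInterpreter`, `FORBIDDEN`, the registry layout) are illustrative assumptions, not the patented implementation.

```python
class RestrictedInterpreter:
    """Sketch of on-call (lazy) type exposure with a denylist of unsafe
    operations. Names and structure are illustrative only."""

    # File, network, and sub-process entry points refused outright.
    FORBIDDEN = {"open", "os", "subprocess", "socket"}

    def __init__(self, type_registry):
        self.type_registry = type_registry  # explicitly defined types
        self.exposed = {}                   # types exposed so far during execution

    def resolve(self, name):
        if name in self.FORBIDDEN:
            raise PermissionError(f"operation '{name}' is not permitted")
        if name not in self.exposed:
            if name not in self.type_registry:
                raise TypeError(f"'{name}' is not an explicitly defined type")
            # Expose the definition only when execution calls for it,
            # rather than disclosing every type at initialization.
            self.exposed[name] = self.type_registry[name]
        return self.exposed[name]
```

Because `self.exposed` grows only with the types actually referenced, no object or function definition is disclosed before an operation calls for it.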
Not only does this predefined-type configuration lead to security problems, but performing type checks only before execution also introduces stability problems: if a computer program calls a function or object that is not present in the predefined types (but still passes the initial check), crashes and other stability problems may result.

In addition to these security issues, existing computer code interpreters are also inefficient. More specifically, due to the conventional paradigm of fully predefined types, some existing interpreters consume a large amount of computing resources (e.g., processing power and memory) that could be conserved by a more efficient interpreter. For example, existing interpreters typically need to compile and recompile code for each new update or change (to a type or other code portion) because, with types fully characterized upon initialization, the interpreter has no way of knowing which changes actually affect the functions or objects invoked in a particular code segment of a computer program.

Existing interpreters further suffer from insufficient flexibility, at least because of their fully predefined nature. Indeed, as suggested, some existing interpreters are rigidly fixed to predefined types, thus allocating resources for at least some defined types that are never even invoked by computer code during execution. This inflexible nature prevents many existing interpreters from adapting types as needed, further exacerbating their inefficient use of computing resources.

Disclosure of Invention
The present disclosure describes embodiments of one or more systems, methods, and non-transitory computer-readable storage media that provide benefits and/or address one or more of the foregoing and other technical problems. For example, the disclosed system executes computer code using a custom interpreter that is built specifically for running code generated by a neural network (in particular, a large language model).
In some cases, the disclosed system utilizes a context engine in conjunction with a large language model to generate computer code. For example, the disclosed system utilizes the context engine and the large language model to generate computer code for performing particular operations or programs executed by an interpreter. In particular, the disclosed system defines and implements custom interpreters tailored for code generated by large language models by integrating specific features, including pre-runtime type checking and runtime type checking, interchangeable logic, and state persistence.

Drawings
The disclosure will describe one or more exemplary embodiments with additional specificity and detail through reference to the accompanying drawings. The following paragraphs briefly describe those drawings:
FIG. 1 illustrates a schematic diagram of an example environment of a custom interpreter system in accordance with one or more embodiments;
FIG. 2 illustrates an example overview of a custom interpreter system that generates context engine output with an interpreter in accordance with one or more embodiments;
FIG. 3 illustrates an example diagram of a custom interpreter system performing static type checking and dynamic type checking in accordance with one or more embodiments
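Two of the features just named — the paired pre-runtime/runtime type checks and the interchangeable logic across executors — can be sketched together in a few lines of Python. This is a hedged illustration under stated assumptions: `ALLOWED_NAMES`, `execute`, and the executor dictionaries are invented for the sketch and do not appear in the disclosure.

```python
import ast

# Explicitly defined names the interpreter may resolve (illustrative).
ALLOWED_NAMES = {"search"}

def static_type_check(code):
    """Pre-runtime check: reject any name outside the explicitly defined set."""
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Name) and node.id not in ALLOWED_NAMES:
            raise TypeError(f"'{node.id}' is not an explicitly defined name")

def execute(code, executor_fns):
    """Run model-generated code against one executor's function bindings.

    Swapping executor_fns (e.g. a test executor vs. a verification
    executor) reuses the same interpreter logic without rebuilding it."""
    static_type_check(code)
    # Runtime check: the empty __builtins__ mapping means any name that
    # slipped past the static check still cannot resolve at run time.
    namespace = {"__builtins__": {}, **executor_fns}
    return eval(code, namespace)

# Interchangeable logic: the same code runs at either executor.
test_executor = {"search": lambda q: f"[test] results for {q}"}
verify_executor = {"search": lambda q: f"[verified] results for {q}"}
```

For example, `execute("search('weather')", test_executor)` runs the generated code at the testing backend, and passing `verify_executor` instead re-runs it for verification with no change to the interpreter itself.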