EP-4738798-A1 - MACHINE LEARNING PROXY SYSTEM
Abstract
A system is provided that includes a machine learning (ML) application client configured to monitor data streams communicated with a connected device on a first network for application data associated with the ML application client, and to capture that application data. An ML proxy client is configured to: receive an application request from the ML application client, where the application request includes the application data captured by the ML application client; send a proxy request to an ML proxy server, where the proxy request includes the application data and an indicated inference operation to be performed on the application data; receive a proxy response from the ML proxy server, where the proxy response includes a result of the inference operation performed on the application data; and send a report based on the proxy response to a controller outside the first network.
Inventors
- LI, GORDON YONG
- CHEN, XUEMIN
Assignees
- Avago Technologies International Sales Pte. Limited
Dates
- Publication Date: 2026-05-06
- Application Date: 2025-10-17
Claims (15)
- An electronic device, comprising: a first network interface configured to send and receive communications on a first network; a second network interface configured to send and receive communications on a second network different from the first network; computer-readable storage media storing one or more sequences of instructions; and processing circuitry configured to execute the one or more sequences of instructions to perform operations comprising: receiving an application request from a machine learning (ML) application client device on the first network, wherein the application request comprises application data captured by the ML application client device from one or more data streams communicated via the first and second networks with a connected device on the first network; sending a proxy request to an ML proxy server device on the first network, wherein the proxy request comprises the application data and an indicated inference operation to be performed on the application data; receiving a proxy response from the ML proxy server device, wherein the proxy response comprises a result of the inference operation performed on the application data by the ML proxy server device; and sending a report based on the proxy response to a controller outside the first network via the second network interface.
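The operations recited in the independent device claim form a simple request/response relay. The following is a minimal sketch of that flow; the message types, field names, and callback signatures are hypothetical, since the claim specifies no wire format or API:

```python
from dataclasses import dataclass

# Hypothetical message shapes; the claim does not define a wire format.
@dataclass
class ApplicationRequest:
    application_data: bytes   # captured from the monitored data streams
    inference_op: str         # the indicated inference operation

@dataclass
class ProxyRequest:
    application_data: bytes
    inference_op: str

@dataclass
class ProxyResponse:
    result: dict              # result of the inference operation

def handle_application_request(req, send_to_proxy_server, send_to_controller):
    """One pass through the claimed operations of the electronic device."""
    # 1. Build a proxy request from the received application request.
    proxy_req = ProxyRequest(req.application_data, req.inference_op)
    # 2. Send it to the ML proxy server on the first network and receive
    #    the proxy response carrying the inference result.
    proxy_resp = send_to_proxy_server(proxy_req)
    # 3. Send a report based on the proxy response to the controller
    #    outside the first network (via the second network interface).
    report = {"op": req.inference_op, "result": proxy_resp.result}
    send_to_controller(report)
    return report
```

In this sketch the two network interfaces are abstracted as the `send_to_proxy_server` and `send_to_controller` callables, so the same logic applies whether the proxy client runs on a gateway, router, or set top box.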
- The electronic device of claim 1, wherein the operations further comprise: pre-processing the application data based on parameters associated with the indicated inference operation, wherein the proxy request sent to the ML proxy server device comprises the pre-processed application data.
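Claim 2 leaves the pre-processing itself open, requiring only that it be driven by parameters associated with the indicated inference operation. A hypothetical illustration, assuming numeric samples and a parameter dictionary with a windowing option (neither is specified by the claim):

```python
def preprocess(application_data, params):
    """Pre-process captured samples according to parameters associated
    with the indicated inference operation. Hypothetical example:
    truncate to a fixed window, then min-max normalize to [0, 1]."""
    window = application_data[: params.get("window", len(application_data))]
    lo, hi = min(window), max(window)
    span = (hi - lo) or 1.0        # avoid division by zero on constant data
    return [(x - lo) / span for x in window]
```

The proxy request would then carry the returned list in place of the raw captured samples.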
- The electronic device of claim 1 or 2, wherein the operations further comprise: processing the results of the inference operation received in the proxy response, wherein the report sent to the controller comprises the processed results, in particular, wherein processing the results of the inference operation comprises applying a differential privacy algorithm to the inference results.
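The differential-privacy processing named in claim 3 could take many forms; one standard choice is the Laplace mechanism, which adds noise scaled to the query's sensitivity before the report leaves the first network. A sketch under the assumption that the inference results are a dictionary of counts with L1 sensitivity 1 (the claim does not fix the result shape or the algorithm):

```python
import math
import random

def add_laplace_noise(value, sensitivity, epsilon, rng=random):
    """Laplace mechanism: add noise drawn from Laplace(0, b) with
    b = sensitivity / epsilon, giving epsilon-differential privacy
    for a query with the given L1 sensitivity."""
    b = sensitivity / epsilon
    u = rng.random() - 0.5                 # uniform on [-0.5, 0.5)
    # Inverse-CDF sample of Laplace(0, b)
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise

def privatize_results(results, epsilon=1.0, rng=random):
    """Apply the mechanism to each numeric field of an inference result
    (hypothetical result shape: a dict of counts, sensitivity 1)."""
    return {k: add_laplace_noise(v, sensitivity=1.0, epsilon=epsilon, rng=rng)
            for k, v in results.items()}
```

Smaller `epsilon` values give stronger privacy at the cost of noisier reports to the controller.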
- The electronic device of one of the previous claims, wherein the ML proxy server device is configured to: send a user notification based on the results of the inference operation for audio or visual presentation to a user via one or more peripheral devices in communication with the ML proxy server device; and receive a user response to the presented user notification, wherein the user response is captured via a user interface, in particular, wherein the user interface comprises a microphone and a natural language processor executed using a neural processing unit on the network device.
- The electronic device of one of the previous claims, wherein the operations further comprise: performing a mutual authentication operation with the ML proxy server device.
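Claim 5 does not specify the mutual-authentication mechanism. One illustrative possibility, assuming both devices hold a pre-provisioned shared key, is an HMAC challenge-response round in each direction; in practice mutual TLS between the proxy client and proxy server would be a more typical realization:

```python
import hashlib
import hmac
import os

SHARED_KEY = b"pre-provisioned-secret"   # assumption: both devices hold this key

def respond(challenge, key=SHARED_KEY):
    """Prove possession of the shared key by MACing the peer's challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_authenticate(send, recv):
    """One hypothetical round of mutual authentication over an abstract
    send/recv channel: challenge the peer, answer the peer's challenge,
    then verify the peer's proof over our own challenge."""
    my_challenge = os.urandom(16)
    send(my_challenge)                    # challenge the peer
    peer_challenge = recv()               # receive the peer's challenge
    send(respond(peer_challenge))         # answer it
    peer_proof = recv()                   # receive the peer's answer
    # Constant-time comparison to resist timing attacks.
    return hmac.compare_digest(peer_proof, respond(my_challenge))
```

Both sides run the same procedure, so authentication succeeds only if each party can compute a valid MAC over the other's fresh challenge.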
- The electronic device of one of the previous claims, wherein the ML application client device and the electronic device are a single device, and wherein the operations further comprise: monitoring the one or more data streams communicated with the connected device on the first network for application data; and capturing the application data and/or wherein the first network is a local area network operating at a first location, and the second network is a service provider data network providing communications for a plurality of local area networks operating at a plurality of different locations, respectively.
- The electronic device of one of the previous claims, wherein the first network is a virtualized local area network.
- A method, comprising: receiving an application request from a machine learning (ML) application client device on a first network, wherein the application request comprises application data captured by the ML application client device from one or more data streams communicated with a connected device on the first network and an indicated inference operation to be performed on the application data; pre-processing the application data based on parameters associated with the indicated inference operation; sending a proxy request to an ML proxy server device on the first network, wherein the proxy request comprises the pre-processed application data and the indicated inference operation to be performed on the pre-processed application data; receiving a proxy response from the ML proxy server device, wherein the proxy response comprises a result of the inference operation performed on the application data by the ML proxy server device; and sending a report based on the proxy response to a controller on a second network outside the first network.
- The method of claim 8, further comprising: processing the results of the inference operation received in the proxy response, wherein the report sent to the controller comprises the processed results, in particular, wherein processing the results of the inference operation comprises applying a differential privacy algorithm to the inference results.
- The method of claim 8 or 9, further comprising: performing a mutual authentication operation with the ML proxy server device.
- The method of one of claims 8 to 10, further comprising: monitoring the one or more data streams communicated with the connected device on the first network for application data; and capturing the application data.
- A system, comprising: a machine learning (ML) application client configured to be executed on a first device on a first network to: monitor one or more data streams communicated with a connected device on the first network for application data associated with the ML application client; and capture the application data associated with the ML application client; an ML proxy server configured to be executed on a second device on the first network to: initiate and perform an ML inference operation; and an ML proxy client configured to be executed on a third device on the first network to: receive an application request from the ML application client, wherein the application request comprises application data captured by the ML application client from the one or more data streams communicated with the connected device; send a proxy request to the ML proxy server, wherein the proxy request comprises the application data and an indicated inference operation to be performed on the application data; receive a proxy response from the ML proxy server, wherein the proxy response comprises a result of the inference operation performed on the application data by the ML proxy server; and send a report based on the proxy response to a controller outside the first network.
- The system of claim 12, wherein the ML proxy client is further configured to: pre-process the application data based on parameters associated with the indicated inference operation, wherein the proxy request sent to the ML proxy server comprises the pre-processed application data.
- The system of claim 12 or 13, wherein the ML proxy client is further configured to: process the results of the inference operation received in the proxy response, wherein the report sent to the controller comprises the processed results.
- The system of one of claims 12 to 14, wherein the first device is the connected device and/or wherein the first device and the second device are a gateway device configured to communicatively couple the first network and a second network different from the first network.
Description
TECHNICAL FIELD

The present description relates in general to service provider data networks including, for example, the use of machine learning applications across service provider data networks.

BACKGROUND

Service provider data networks may be used for delivering services such as video content, Internet access, telephony, gaming, etc. to subscribers. Service providers are increasingly using applications that incorporate machine learning operations for tasks such as network monitoring and performance evaluation, troubleshooting issues, and improving customer experience. Additionally, machine learning operations are being used to support and enhance the capabilities of Internet of Things (IoT) devices. In a cloud-based approach, data may be collected by devices at various locations within a service provider data network and sent to the cloud for processing, including machine learning operations. However, this cloud-based approach may be inefficient in utilizing the bandwidth and storage capabilities of the network, may increase latency in the applications using the machine learning operations, and may expose sensitive information in the data to parties outside of the service provider data network.

BRIEF DESCRIPTION OF THE DRAWINGS

Certain features of the subject technology are set forth in the appended claims. However, for purposes of explanation, several aspects of the subject technology are set forth in the following figures.

FIG. 1 illustrates an example of a network environment in which aspects of the subject technology may be implemented.
FIG. 2 illustrates an example of a routing device according to aspects of the subject technology.
FIG. 3 illustrates an example of a gateway device according to aspects of the subject technology.
FIG. 4 illustrates an example of a multi-function appliance such as a set top box according to aspects of the subject technology.
FIG. 5 is a flowchart illustrating operations of a machine learning proxy system according to aspects of the subject technology.
FIG. 6 is a block diagram illustrating data flows associated with the operation of a machine learning application client according to aspects of the subject technology.
FIG. 7 is a block diagram illustrating data flows associated with the operation of a machine learning proxy client according to aspects of the subject technology.
FIG. 8 is a block diagram illustrating data flows associated with the operation of a machine learning proxy server according to aspects of the subject technology.
FIG. 9 is a block diagram illustrating data flows associated with the operation of a machine learning application client and a machine learning proxy client according to aspects of the subject technology.
FIG. 10 is a block diagram illustrating data flows associated with the operation of a machine learning proxy client and a machine learning proxy server according to aspects of the subject technology.
FIG. 11 is a block diagram illustrating data flows associated with the operation of a machine learning application client, a machine learning proxy client, and a machine learning proxy server in an alternative configuration according to aspects of the subject technology.

DETAILED DESCRIPTION

The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The appended drawings are incorporated herein and constitute part of the detailed description. The detailed description includes specific details for providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details set forth herein and may be practiced without one or more of the specific details.
In some instances, structures and components are shown in block-diagram form to avoid obscuring the concepts of the subject technology.

Machine learning (ML) functionality such as neural network processing is being added to many different types of devices. For example, ML systems (e.g., an ML processing engine, a neural processing unit, a hardware accelerator, etc.) are being implemented in customer premises equipment (CPE) such as cable modems, set top boxes, routers, etc. As these ML-enabled edge devices are rolled out across a service provider data network, the service provider is able to take advantage of these edge devices to facilitate and improve the use of ML applications to manage the data network.

The subject technology provides solutions that leverage the processing and computational resources of ML-enabled edge devices to service ML applications running local to the edge devices on the service provider data network. According to aspects of the subject technology, an ML-enabled edge device may act as an ML proxy for devices and applications running on a local area network located at a customer's premises. For example, a cable gateway/modem may receive a request for an ML o