US-12626099-B2 - Using deep learning models to obfuscate and optimize communications

Abstract

Concepts and technologies are disclosed herein for using deep learning models to obfuscate and optimize communications. A request can be received in a first language, from a user device, and at a first computing device storing a first neural network. The request can be translated using the first neural network into a modified request in a custom language. The modified request can be sent to a second computing device hosting an application. The first computing device can receive a modified response that is in the custom language, where the modified response can be created at the second computing device using a second neural network stored there and based on a response from the application. The modified response can be translated into a response in the first language and sent to the user device.

Inventors

  • William R. Trost
  • Daniel Solero
  • Brian Miles

Assignees

  • AT&T INTELLECTUAL PROPERTY I, L.P.

Dates

Publication Date
2026-05-12
Application Date
2023-01-26

Claims (20)

  1. A system comprising a processor and a memory that stores computer-executable instructions that, when executed by the processor, cause the processor to perform operations comprising: creating, using a first neural network and a second neural network, a custom language using sample data comprising requests and responses that are associated with an application executed by a second computing device, wherein the first neural network and the second neural network create the custom language based on a structure of the requests and the responses, wherein the first neural network is deployed to a first computing device after the first neural network is trained to communicate using the custom language, and wherein the second neural network is deployed to the second computing device after the second neural network is trained to communicate using the custom language; sending, by the first computing device and to the second computing device, a modified request in the custom language, wherein the modified request is based on a request in a first language from a user device, and wherein the request in the first language requests an operation to be performed by the application; receiving, by the first computing device and from the second computing device, a response in the custom language, wherein the response is generated by the application; and sending, by the first computing device and to the user device, a modified response in the first language, wherein the modified response is based on the response in the custom language, wherein the second computing device receives the modified request, translates the modified request into an application call using the second neural network, passes the application call to the application, receives an application response from the application, and translates the application response into the modified response using the second neural network.
  2. The system of claim 1, wherein the request in the first language is received via an application programming interface that is exposed by the first computing device.
  3. The system of claim 2, wherein the modified request is sent to the second computing device via the application programming interface.
  4. The system of claim 2, wherein the first neural network and the second neural network are trained by a training application using the sample data.
  5. The system of claim 4, wherein the training application provides the sample data, and wherein the training application also provides a set of application programming interface definitions to the first computing device.
  6. The system of claim 5, wherein the application programming interface is based on the set of application programming interface definitions.
  7. The system of claim 1, wherein the sample data further comprises an initialization vector that is applied by the first neural network and the second neural network to modify the custom language.
  8. The system of claim 1, wherein the modified request obfuscates contents of the request for the operation, and wherein the modified response obfuscates contents of the response.
  9. A method comprising: creating, using a first neural network and a second neural network, a custom language using sample data comprising requests and responses that are associated with an application executed by a second computing device, wherein the first neural network and the second neural network create the custom language based on a structure of the requests and the responses, wherein the first neural network is deployed to a first computing device after the first neural network is trained to communicate using the custom language, and wherein the second neural network is deployed to the second computing device after the second neural network is trained to communicate using the custom language; sending, by the first computing device and to the second computing device, a modified request in the custom language, wherein the modified request is based on a request in a first language from a user device, and wherein the request in the first language requests an operation to be performed by the application; receiving, by the first computing device and from the second computing device, a response in the custom language, wherein the response is generated by the application; and sending, by the first computing device and to the user device, a modified response in the first language, wherein the modified response is based on the response in the custom language, wherein the second computing device receives the modified request, translates the modified request into an application call using the second neural network, passes the application call to the application, receives an application response from the application, and translates the application response into the modified response using the second neural network.
  10. The method of claim 9, wherein the request in the first language is received via an application programming interface that is exposed by the first computing device.
  11. The method of claim 10, wherein a training application that provides the sample data also provides a set of application programming interface definitions to the first computing device.
  12. The method of claim 11, wherein the application programming interface is based on the set of application programming interface definitions.
  13. The method of claim 9, wherein the first neural network and the second neural network are trained by a training application using the sample data.
  14. A computer storage medium having computer-executable instructions stored thereon that, when executed by a processor, cause the processor to perform operations comprising: creating, using a first neural network and a second neural network, a custom language using sample data comprising requests and responses that are associated with an application executed by a second computing device, wherein the first neural network and the second neural network create the custom language based on a structure of the requests and the responses, wherein the first neural network is deployed to a first computing device after the first neural network is trained to communicate using the custom language, and wherein the second neural network is deployed to the second computing device after the second neural network is trained to communicate using the custom language; sending, by the first computing device and to the second computing device, a modified request in the custom language, wherein the modified request is based on a request in a first language from a user device, and wherein the request in the first language requests an operation to be performed by the application; receiving, by the first computing device and from the second computing device, a response in the custom language, wherein the response is generated by the application; and sending, by the first computing device and to the user device, a modified response in the first language, wherein the modified response is based on the response in the custom language, wherein the second computing device receives the modified request, translates the modified request into an application call using the second neural network, passes the application call to the application, receives an application response from the application, and translates the application response into the modified response using the second neural network.
  15. The computer storage medium of claim 14, wherein the first neural network and the second neural network are trained by a training application using the sample data.
  16. The computer storage medium of claim 14, wherein the request in the first language is received via an application programming interface that is exposed by the first computing device.
  17. The computer storage medium of claim 16, wherein a training application that provides the sample data also provides a set of application programming interface definitions to the first computing device.
  18. The computer storage medium of claim 17, wherein the application programming interface is based on the set of application programming interface definitions.
  19. The computer storage medium of claim 14, wherein the sample data further comprises an initialization vector that is applied by the first neural network and the second neural network to modify the custom language.
  20. The computer storage medium of claim 14, wherein one of the modified request or the modified response comprises data that communicates a change in the custom language.
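The round trip recited in claims 1, 9, and 14 can be illustrated with a minimal sketch. The `CustomLanguageCodec` class below is a hypothetical stand-in for the trained neural networks (a reversible token substitution derived from the sample data, not a learned model), and all names here are assumptions for illustration rather than anything defined in the disclosure.

```python
import hashlib

class CustomLanguageCodec:
    """Hypothetical stand-in for a neural network trained on sample data."""

    def __init__(self, sample_tokens, init_vector=b""):
        # Derive a deterministic token mapping (the "custom language") from
        # the sample requests/responses and an optional initialization vector.
        ordered = sorted(set(sample_tokens))
        self._to_custom = {
            t: hashlib.sha256(init_vector + t.encode()).hexdigest()[:8]
            for t in ordered
        }
        self._to_plain = {v: k for k, v in self._to_custom.items()}

    def encode(self, message):  # first language -> custom language
        return " ".join(self._to_custom[t] for t in message.split())

    def decode(self, message):  # custom language -> first language
        return " ".join(self._to_plain[t] for t in message.split())

def application(call):
    # Placeholder for the application hosted on the second computing device.
    return f"result-for {call}"

# Both devices hold codecs derived from the same sample data.
samples = ["get", "balance", "account", "result-for"]
first_device = CustomLanguageCodec(samples, init_vector=b"iv-1")
second_device = CustomLanguageCodec(samples, init_vector=b"iv-1")

# Request -> modified request -> application call -> response -> modified response
request = "get balance"
modified_request = first_device.encode(request)         # obfuscated on the wire
app_call = second_device.decode(modified_request)       # second network's role
app_response = application(app_call)
modified_response = second_device.encode(app_response)  # obfuscated on the way back
response = first_device.decode(modified_response)       # delivered to the user device
```

Only the modified (custom-language) forms traverse the network between the two devices, which is the obfuscation property the claims describe.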

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of and claims priority to U.S. patent application Ser. No. 16/792,998, entitled “Using Deep Learning Models to Obfuscate and Optimize Communications,” filed Feb. 18, 2020, now U.S. Pat. No. 11,568,209, which is incorporated herein by reference in its entirety.

BACKGROUND

With the rapid growth of data-based networks in the United States and abroad, the usefulness and pervasiveness of data (obtained from data-based networks) has increased dramatically. “Big data” involves the compilation of data from one or more sources and the use of the data to learn about the subject of that data. For example, a network user's purchasing history, movements, and/or other information may be collected from various sources and used to learn about that user.

Additionally, the exchange of sensitive information in digital format has become commonplace. In modern networks, traffic associated with the most sensitive of topics may traverse networks along with relatively mundane data. For example, sensitive health information, financial information, and/or other information may traverse a network with social media messages, and the like. To protect sensitive information from interception and/or other unauthorized access, encryption or other security techniques sometimes may be used to protect information. As the world enters the quantum computing age, however, the ability to access encrypted information without authorization may soon be a reality. In particular, some encryption techniques rely on the inability of others to “crack the code” to protect information. As quantum computing becomes a reality, computers may soon be able to defeat such technologies rapidly, and the failure of such technologies therefore may become commonplace.
As a result, traditional data protection technologies may be approaching the end of their useful lives, with future data protection technologies being needed to protect data from unauthorized use and/or disclosure.

SUMMARY

The present disclosure is directed to using deep learning models to obfuscate and optimize communications. A training application can pass a set of sample data to two or more neural networks, for example in a computing environment. The sample data can correspond to a set of requests and responses and can be used to train the neural networks to create a custom language for communicating with one another to pass information for requests and responses. In some embodiments, the sample data can include an initialization vector to help obscure the protocol that is to be developed by the neural networks.

The training application also can send a set of application programming interface (“API”) definitions to a computing device or other device as illustrated and described herein. The computing device can create an API using the API definitions, where the API can be called by a requestor (e.g., the user device) to create a request for an application. Once trained, the neural networks can be deployed to two or more devices that are to communicate with one another to create a request for the application and to obtain a response from the application. In one contemplated embodiment, a first neural network can be deployed to a computing device and a second neural network can be deployed to a server computer that can host the application. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. A device (e.g., the user device) can create a request via interactions with the API of the computing device.
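The role of the initialization vector in the training step above can be sketched as follows. This is a minimal stand-in, not the disclosure's actual training procedure: a simple keyed token mapping substitutes for neural-network training, and the function and variable names are hypothetical. The point it illustrates is that the same sample data yields a different custom language when a different initialization vector is supplied, which helps obscure the resulting protocol.

```python
import hashlib

def derive_custom_language(sample_tokens, init_vector):
    # Stand-in for training two neural networks on the sample data: a
    # deterministic token mapping keyed by the initialization vector.
    return {
        t: hashlib.sha256(init_vector + t.encode()).hexdigest()[:8]
        for t in sorted(set(sample_tokens))
    }

samples = ["get", "status", "ok"]
lang_a = derive_custom_language(samples, b"iv-A")
lang_b = derive_custom_language(samples, b"iv-B")
# Same sample data, different initialization vectors -> different custom
# languages, so an observer who learns one protocol learns nothing about
# deployments seeded with a different vector.
```

In a real deployment, both trained networks would be seeded identically so that the two devices share one custom language.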
The computing device can receive the request, translate the request into a custom language developed by the neural networks, and pass the modified request to the server computer that hosts the application. The server computer can receive the modified request, translate the modified request into the request or an equivalent application call using the second neural network, and pass the application call to the application. The application can output a response, the second neural network can translate the response into a modified response, and the server computer can send the modified response to the computing device. The computing device can receive the modified response, translate the modified response into the response using the first neural network, and provide the response to the requestor (e.g., the user device). Thus, it can be appreciated that the user device can create the request and receive the response without any other devices between the computing device and the server computer obtaining the request and/or the response.

In various embodiments, the neural networks can be configured to evolve the custom language during use of the custom language to improve security or for other reasons. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. According
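The in-band language evolution mentioned above (and recited in claim 20) can be sketched as a message that carries a signal announcing a new initialization vector, after which both sides re-derive their mapping. Again, this is a hypothetical stand-in: the `EvolvingCodec` class, the `CHANGE:` marker, and the keyed mapping are illustrative assumptions for what would, per the disclosure, be coordinated behavior of the two trained neural networks.

```python
import hashlib

def derive_mapping(tokens, iv):
    # Deterministic stand-in for a custom language keyed by vector `iv`.
    return {t: hashlib.sha256(iv + t.encode()).hexdigest()[:8]
            for t in sorted(set(tokens))}

class EvolvingCodec:
    def __init__(self, tokens, iv):
        self._tokens = list(tokens)
        self._set_iv(iv)

    def _set_iv(self, iv):
        self._map = derive_mapping(self._tokens, iv)
        self._rev = {v: k for k, v in self._map.items()}

    def encode(self, msg, next_iv=None):
        body = " ".join(self._map[t] for t in msg.split())
        if next_iv is not None:
            # Piggyback a language-change signal on this message, then
            # switch to the new mapping for all subsequent traffic.
            body = f"CHANGE:{next_iv.hex()} {body}"
            self._set_iv(next_iv)
        return body

    def decode(self, msg):
        if msg.startswith("CHANGE:"):
            head, msg = msg.split(" ", 1)
            new_iv = bytes.fromhex(head[len("CHANGE:"):])
            out = " ".join(self._rev[t] for t in msg.split())
            self._set_iv(new_iv)  # adopt the evolved language afterwards
            return out
        return " ".join(self._rev[t] for t in msg.split())

# Both endpoints start with the same seed, then evolve mid-conversation.
sender = EvolvingCodec(["ping", "pong"], b"iv0")
receiver = EvolvingCodec(["ping", "pong"], b"iv0")
wire1 = sender.encode("ping", next_iv=b"iv1")  # carries the change signal
wire2 = sender.encode("pong")                  # already in the evolved language
```

The change message is itself encoded in the current custom language, so an observer who has not recovered the earlier protocol cannot follow the evolution.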