
BR-102025006596-A2 - SYSTEMS AND METHODS FOR LOW-LATENCY CONTROL OF DEVICES

BR 102025006596 A2

Abstract

Some embodiments relate to systems and methods for low-latency control. A communication system may include an application operating on a first device. The application is configured to: 1) classify a first packet provided to the first device as being for reception by a second, low-latency device, or 2) classify the first packet as being for use in a low-latency operation. The application is further configured to provide the first packet to a first queue associated with a low-latency path.

Inventors

  • Rajesh Shankarrao Mamidwar
  • Fabian Russo

Assignees

  • AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Dates

Publication Date
2026-03-17
Application Date
2025-04-02
Priority Date
2024-04-30

Claims (20)

  1. A system characterized by comprising: a first device; and an application operating on the first device, the application being configured to: 1) classify a first packet provided to the first device as being for reception by a second, low-latency device, or 2) classify the first packet as being for use in a low-latency operation, the application further being configured to provide the first packet to a first queue associated with a low-latency path.
  2. The system according to claim 1, characterized in that the application is configured to determine latency data associated with communication through the first device, wherein the application comprises a server extension operating on a device within a residence.
  3. The system according to claim 2, characterized in that the application is configured to communicate the latency data from the first device to a remote server via a virtual communication link, the remote server being a central server communicating via a cloud network, or in that the application is configured to communicate the latency data from the first device to a remote server, the remote server being an internet service provider server communicating via an internet service provider network.
  4. The system according to claim 1, characterized in that the application is configured to classify a second packet provided to the first device as being for reception by a third low-latency device and to provide the second packet to a second queue associated with the low-latency path, wherein the second queue has a lower priority than the first queue.
  5. The system according to claim 1, characterized in that the application is configured to classify the first packet as being for reception by the second low-latency device.
  6. The system according to claim 1, characterized in that the application is configured to classify the first packet as being for use in the low-latency operation.
  7. The system according to claim 1, characterized in that the application is configured to classify a third packet as being for use in a non-low-latency application and to provide the third packet to a third queue associated with a non-low-latency path.
  8. The system according to claim 1, characterized in that the application is configured to classify a third packet as being for use in a non-low-latency application and to provide the third packet to a third queue associated with a non-low-latency path, wherein the application is configured to classify a second packet provided to the first device as being for reception by a third low-latency device and to provide the second packet to a second queue associated with the low-latency path, wherein the second queue has a lower priority than the first queue.
  9. The system according to claim 1, characterized in that the first device is a set-top box, a cable modem, or a wireless router and serves clients within a residence associated with the first device, wherein the application is configured to receive updates at the first device from a remote server via a virtual communication link, the remote server being a central server in communication via a cloud network, or wherein the application is configured to receive updates at the first device from a remote server, the remote server being an internet service provider server in communication via an internet service provider network.
  10. A non-transitory, computer-readable medium characterized in that it has instructions stored thereon which, when executed by a processor, cause the processor to: determine whether a first low-latency device is offline or whether the first low-latency device is performing a low-latency operation; and allocate bandwidth reserved for the first low-latency device to a second low-latency device if the first low-latency device is offline or the first low-latency device is not performing the low-latency operation, wherein the first low-latency device and the second low-latency device are part of a communication system comprising a cable, fiber-optic, or wireless network.
  11. The non-transitory, computer-readable medium according to claim 10, characterized in that the processor is disposed on a server remote from the first low-latency device and the second low-latency device.
  12. The non-transitory, computer-readable medium according to claim 11, characterized in that the server is in communication with an internet service provider's infrastructure.
  13. The non-transitory, computer-readable medium according to claim 10, characterized in that the first low-latency device comprises a set-top box, a cable modem, or a wireless router.
  14. The non-transitory, computer-readable medium according to claim 10, characterized in that the first low-latency device is in communication with a fiber-optic router.
  15. The non-transitory, computer-readable medium according to claim 10, characterized in that the first low-latency device is in communication with a cable modem.
  16. A method for providing low-latency service, the method characterized by comprising: classifying a first packet provided to a first device as being for reception by a second low-latency device or as being for use in a low-latency operation; and providing the first packet to a first queue associated with a low-latency path.
  17. The method according to claim 16, characterized in that it further comprises: classifying a second packet provided to the first device as being for reception by a third low-latency device and providing the second packet to a second queue associated with the low-latency path, wherein the second queue has a lower priority than the first queue.
  18. The method according to claim 16, characterized in that it further comprises: classifying a third packet as being for use in a non-low-latency application and providing the third packet to a third queue associated with a non-low-latency path.
  19. The method according to claim 16, characterized in that the first device comprises an application configured to: 1) classify the first packet provided to the first device as being for reception by the second low-latency device, or 2) classify the first packet as being for use in the low-latency operation, the application further being configured to provide the first packet to the first queue associated with the low-latency path.
  20. The method according to claim 16, characterized in that the first device is a set-top box, a cable modem, or a wireless router.
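
The queue-based classification of claims 1, 4, and 7 can be sketched as follows. This is an illustrative model only, not the patented implementation: the claims do not specify how low-latency traffic is identified, so the device names, the sets used for matching, and the `Packet` fields below are all hypothetical.

```python
import queue
from dataclasses import dataclass

# Hypothetical identifiers; the claims leave the matching criteria
# (e.g., DSCP bits, device IDs, application type) unspecified.
LOW_LATENCY_DEVICES = {"vr_headset"}     # the "second low-latency device" of claim 1
SECONDARY_LL_DEVICES = {"game_console"}  # the "third low-latency device" of claim 4
LOW_LATENCY_OPS = {"cloud_gaming", "video_conference"}

@dataclass
class Packet:
    dest_device: str
    operation: str
    payload: bytes = b""

# First and second queues sit on the low-latency path (the second
# at lower priority, per claim 4); the third queue sits on the
# non-low-latency path (claim 7).
first_queue = queue.Queue()
second_queue = queue.Queue()
third_queue = queue.Queue()

def classify(packet: Packet) -> queue.Queue:
    """Return the queue a packet should be provided to."""
    # Claim 1: for reception by the second low-latency device,
    # or for use in a low-latency operation -> first queue.
    if (packet.dest_device in LOW_LATENCY_DEVICES
            or packet.operation in LOW_LATENCY_OPS):
        return first_queue
    # Claim 4: for reception by a third low-latency device
    # -> lower-priority queue on the same low-latency path.
    if packet.dest_device in SECONDARY_LL_DEVICES:
        return second_queue
    # Claim 7: everything else -> non-low-latency path.
    return third_queue

p = Packet(dest_device="vr_headset", operation="browsing")
classify(p).put(p)
```

In a real device the queues would map onto hardware or driver-level traffic classes rather than Python `queue.Queue` objects; the sketch only shows the claimed routing decision.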
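
The bandwidth reallocation of claim 10 can likewise be sketched. Again this is a hypothetical illustration: the `Device` fields, megabit units, and the `reallocate` helper are assumptions for the sake of the example, since the claim states only the condition (first device offline or not performing its low-latency operation) and the effect (its reserved bandwidth is allocated to the second device).

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    online: bool          # is the device reachable?
    active: bool          # is it performing its low-latency operation?
    reserved_mbps: float  # bandwidth currently reserved for it

def reallocate(first: Device, second: Device) -> None:
    """Per claim 10: if the first low-latency device is offline or is
    not performing its low-latency operation, allocate its reserved
    bandwidth to the second low-latency device."""
    if not first.online or not first.active:
        second.reserved_mbps += first.reserved_mbps
        first.reserved_mbps = 0.0

stb = Device("set_top_box", online=False, active=False, reserved_mbps=50.0)
hmd = Device("vr_headset", online=True, active=True, reserved_mbps=100.0)
reallocate(stb, hmd)  # hmd now holds the reservation freed by stb
```

Claim 11 places this logic on a server remote from both devices, so in practice the check and the reallocation would be driven by telemetry reported upstream rather than by direct object access as shown here.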

Description

Field of the Disclosure

[001] This disclosure generally relates to communication systems and methods, including, but not limited to, communications associated with internet service provider (ISP) networks, cable modems, gigabit passive optical networking (GPON) devices, set-top boxes, televisions, user devices, Ethernet network devices, and/or wireless devices. Some embodiments of the disclosure relate to latency-related monitoring, analysis, and/or optimization for such communications and/or to control of devices and networks for low-latency operation.

Background of the Disclosure

[002] Latency issues in communications between a home network and an ISP can lead to various challenges and disruptions in internet connectivity and user experience, especially for evolving low-latency uses, including but not limited to video conferencing, cloud gaming, augmented reality/virtual reality (AR/VR) applications, and metaverse applications.

[003] ISPs are companies that provide internet access to individuals and businesses. ISPs typically own, lease, and manage a network infrastructure that connects users to the internet. This infrastructure may include various components such as data centers, routers, switches, coaxial cables, and fiber optic cables. ISPs obtain internet connectivity from larger networks, such as backbone providers or internet exchange points (IXPs), and distribute communication services to their customers.

[004] Latency can be associated with one or more parties (e.g., cloud providers, ISPs, application developers, and silicon providers) and with one or more devices and networks, including but not limited to ISP networks, cable modems, GPON devices, set-top boxes, WiFi networks, Ethernet networks, access networks, backbone networks, and cloud infrastructure. To support higher internet speeds, ISPs are using data communications in larger bursts, which generally require larger buffers at each node. Larger bursts/buffers can increase communication latencies.
[005] Latency can manifest as slow response times (e.g., when loading web pages, streaming videos, or downloading files), reduced quality of real-time applications (e.g., low-latency applications that rely on real-time communication, such as video conferencing, Voice over IP (VoIP) calls, and online games), buffering and interruptions in streaming, unstable connections, adverse impact on cloud-based services (e.g., file storage, email, and productivity tools, affecting productivity and efficiency), increased vulnerability to cyberattacks, and limited capacity for interactive applications (e.g., limiting the effectiveness of interactive applications that require real-time user input, such as online collaborative tools, virtual classrooms, and remote desktop applications). High latency can result in unstable video/audio playback, slow conversations, and delayed reactions in online games, resulting in a poor user experience and communication difficulties. High latency can cause data packets to arrive out of order or be delayed, leading to pauses in playback and degradation of streaming quality. High latency can also give attackers more time to exploit security vulnerabilities and launch malicious attacks, such as distributed denial-of-service (DDoS) attacks or man-in-the-middle (MitM) attacks.

Brief Description of the Drawings

[006] Several objects, aspects, attributes, and advantages of the disclosure will become more apparent and better understood by referring to the detailed description taken in conjunction with the accompanying drawings, in which reference characters identify corresponding elements throughout. In the drawings, similar reference numbers generally indicate identical, functionally similar, and/or structurally similar elements.

[007] FIG. 1 is a general schematic block diagram of a communication system according to some embodiments; FIG. 2 is a general schematic block diagram of a portion of the communication system illustrated in FIG. 1 according to some embodiments; FIG. 3 is a general schematic block diagram of applications in communication with cloud infrastructure for the communication system illustrated in FIG. 1 according to some embodiments; FIG. 4 is a general schematic flow diagram of an operation for the communication system illustrated in FIG. 1 according to some embodiments; FIG. 5 is a general schematic flow diagram of an operation for the communication system illustrated in FIG. 1 according to some embodiments; FIG. 6 is a schematic block diagram of the communication system illustrated in FIG. 1, including a server configured for augmented reality/virtual reality and/or metaverse applications, according to some embodiments; FIG. 7 is a schematic block diagram of a portion of the communication system illustrated in FIG. 1 showing operations using the Precision Time Protocol (PTP) according to some embodiments; FIG. 8 is a schematic flow diagram for the communication system illustrated in FIG. 1 showing operations for monitoring and/or