JP-7855572-B2 - Highly deterministic latency in distributed systems
Inventors
- Amicangioli, Anthony D.
- Bast, Allen
- Rosen, B. Joshua
- Juhasz, Kristof
Assignees
- Hyannis Port Research, Inc.
Dates
- Publication Date: 2026-05-08
- Application Date: 2021-08-05
- Priority Date: 2020-08-07
Claims (20)
- A system comprising a plurality of gateways connected to receive inbound messages from two or more participant devices, wherein one or more of the gateways are further configured to: determine, for a predetermined inbound message, a time-based value (TBV) according to the arrival time of the predetermined inbound message; forward the predetermined inbound message, along with its TBV, to one or more compute nodes; receive from the one or more compute nodes a response message containing the same TBV as that included in the predetermined inbound message; determine a desired transmission time by computationally combining a deterministic latency with the TBV; and send to one or more of the participant devices, as an outbound message transmitted at the desired transmission time, a reply message that depends on the response message.
- The system according to claim 1, wherein the TBV comprises a timestamp corresponding to the time of receipt of the predetermined inbound message, and the desired transmission time is determined after the response message is received from the one or more compute nodes.
- The system according to claim 1, wherein the TBV comprises the desired transmission time of the reply message, and the desired transmission time is determined before the predetermined inbound message is forwarded to the one or more compute nodes.
- The system according to claim 1, further comprising a packet scheduler configured to receive the response message, the packet scheduler having a series of indexed locations each associated with a desired transmission time, wherein the TBV is a value that depends on the indexed location associated with the desired transmission time.
- The system according to claim 1, wherein the TBV is inserted into an unused field of the predetermined inbound message before the message is forwarded to the one or more compute nodes.
- The system according to claim 1, wherein the TBV is added as part of a field of the system's internal protocol within the predetermined inbound message.
- The system according to claim 1, wherein the deterministic latency depends on the maximum time it takes for one or more gateways to receive a reply message from one or more compute nodes.
- The system according to claim 1, wherein the deterministic latency follows a pattern that is both variable and deterministic.
- The system according to claim 1, wherein the deterministic latency is selected from a series of latencies uniformly distributed within a predetermined range.
- The system according to claim 1, wherein the deterministic latency is set per gateway, per connection, or for the entire system.
- The system according to claim 1, wherein the deterministic latency changes dynamically depending on system conditions.
- The system according to claim 1, wherein two or more gateways each receive the response message from the one or more compute nodes.
- The system according to claim 1, wherein the predetermined inbound message is forwarded to the one or more compute nodes, and the response message is received from the one or more compute nodes, via a plurality of direct connections established between each of the one or more gateways and each of the one or more compute nodes.
- The system according to claim 1, wherein the response message relates to a transaction match event between two participant devices and the two matching parties respectively associated with them, and wherein the gateway is further configured to send the reply message simultaneously to the two participant devices as outbound messages at the desired transmission time.
- The system according to claim 14, wherein the outbound message is also transmitted simultaneously, at the desired transmission time, as a market data event message to a device associated with a subscriber to a market data stream.
- The system according to claim 1, wherein the one or more gateways are further configured to receive an asynchronous message from at least one of the compute nodes and to send the asynchronous message as an outbound message to the two or more participant devices.
- The system according to claim 1, wherein the TBV further depends on a time value relating to at least one of a message path delay and a compute node delay.
- The system according to claim 1, wherein the one or more gateways are further configured to forward the predetermined inbound message, along with its TBV, to one or more sequencer nodes.
- The system according to claim 1, wherein the one or more compute nodes are configured to receive the predetermined inbound message, along with its TBV, from the one or more gateways, and are further configured to send the response message, along with the TBV, back to the one or more gateways.
- The system according to claim 1, wherein the one or more gateways are further configured to forward the predetermined inbound message, along with its TBV, to one or more sequencer nodes, and wherein the one or more sequencer nodes are configured to forward the predetermined inbound message, along with its TBV, as a sequence-marked message to the one or more compute nodes.
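The gateway flow of independent claim 1 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the names (`gateway_handle`, `compute_node`) are hypothetical, the compute node is an in-process stub, and the 350 µs latency budget is borrowed from the speed-bump figure cited in the background art rather than taken from the claims.

```python
# Hypothetical fixed latency budget (assumption: 350 µs, echoing the
# "speed bump" figure from the background art; the claims leave it open).
DETERMINISTIC_LATENCY_NS = 350_000

def compute_node(message: dict) -> dict:
    # Stand-in for the matching engine: the response carries the same TBV
    # as the inbound message, as claim 1 requires.
    return {"tbv": message["tbv"], "payload": "ack:" + message["payload"]}

def gateway_handle(inbound_payload: str, arrival_ns: int) -> tuple[int, dict]:
    # 1. Determine the time-based value (TBV) from the arrival time.
    tbv = arrival_ns
    # 2. Forward the inbound message together with its TBV to a compute node.
    response = compute_node({"tbv": tbv, "payload": inbound_payload})
    # 3. Combine the TBV from the response with the deterministic latency
    #    to obtain the desired transmission time.
    desired_tx_ns = response["tbv"] + DETERMINISTIC_LATENCY_NS
    # 4. The reply message would be held and emitted at desired_tx_ns.
    return desired_tx_ns, response

desired_tx, reply = gateway_handle("order-1", arrival_ns=1_000_000)
# The transmission time equals arrival time plus the fixed budget,
# regardless of how long the compute node actually took.
```

Because the transmission time is derived from the TBV rather than from the compute node's completion time, variation in matching-engine processing does not leak into the response latency observable by participants.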
Description
Related applications: This application claims priority to co-pending U.S. Patent Application No. 16/988,249, filed August 7, 2020, entitled "Highly Deterministic Latency in a Distributed System," and to co-pending U.S. Patent Application No. 16/988,491, filed August 7, 2020, entitled "Sequencer Bypass with Transactional Preprocessing in Distributed System," the entire contents of each of which are incorporated herein by reference.

This application relates to connected devices and, more specifically, to providing deterministic latency.

Electronic trading systems currently in wide use on major stock exchanges allow traders to place orders electronically and to receive confirmations, market data, and other information via communication networks. A typical electronic trading system includes a matching engine residing on a central server, multiple gateways providing access to the matching engine, and distributed processors. A typical order flow proceeds as follows: a request message indicating an order (e.g., a buy and/or sell order) is sent from a client device (e.g., a trader terminal operated by a human user, or a server running an automated trading algorithm) and received by a gateway. An order acknowledgment is typically sent back to the client device via the gateway that forwarded the request. After further processing by the exchange, an order processing confirmation may be sent back to the client device. The exchange system may also generate market data output by disseminating information about order messages to other systems, either in original or transformed form.

Generally, latency refers to the time between a system input and a visible response. In the context of communication systems, latency is measured as the difference between the time a message is input to or received by the system and the time a corresponding response message is sent.
In high-speed electronic trading systems, where minimizing the time to execute a transaction is desirable, latency is a critical consideration. A solution brief titled "Determinism is the New Latency" (published 2019 by Arista Networks, Inc.; Non-Patent Document 1) describes one approach to controlling latency: a "speed bump" that introduces a delay of approximately 350 microseconds by lengthening the optical fiber in the message path, ensuring that every order takes exactly the same amount of time to traverse the fiber. Another approach described in the same document keeps frequently used transaction data in the matching engine's cache memory to minimize latency. The document also addresses problems with trading systems that forward orders to multiple matching engines through multiple gateways. Participants may be assigned to particular gateways, which is a further source of non-determinism. The document points out that if the order-processing time of the gateways is not deterministic, two orders sent to the exchange in one order may actually be executed in a different order. However, it proposes no solutions to these problems.

U.S. Patent Application Publication No. 2019/0097745 (Patent Document 1) describes a communication network that uses timestamps to reduce the effects of non-deterministic delay. The state of the transmission path is estimated by observing the "non-deterministic" delay of previously transmitted packets; transmit circuitry then holds a packet until the packet processing circuitry achieves deterministic latency for downlink packets. Schweitzer Engineering Laboratories, Inc.'s ICON Packet Transport (published 2016; Non-Patent Document 2) is an example of a network device that performs deterministic, low-latency packetization using a jitter buffer.

U.S. Patent No. 7,496,086 (Patent Document 2) describes a voice network comprising a series of gateways that equalize delay using a jitter buffer. U.S. Patent No. 7,885,296 (Patent Document 3) assigns a timestamp to a frame and maintains synchronization among multiple timestamp counters distributed across different physical layer (PHY) transceivers. U.S. Patent Application Publication No. 2018/0359195 (Patent Document 4) describes a network switch that uses a special type of tree data structure to identify the timestamp range of received packets, which may be used for streaming media in a Real-time Transport Protocol (RTP) network.

Patent Documents
- U.S. Patent Application Publication No. 2019/0097745
- U.S. Patent No. 7,496,086
- U.S. Patent No. 7,885,296
- U.S. Patent Application Publication No. 2018/0359195
Non-Patent Documents
- "Determinism is the New Latency", Solution Brief, (c) 2019 Arista Networks, Inc.
- ICON Packet Transport, Schweitzer Engineering Laboratories, Inc., (c) 2016

Brief description of the drawings: a high-level block diagram of a distributed electronic trading system; a diagram showing messages traveling via a direct path from the gateway to the compute node, and messages traveling via the sequencer node.
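The speed-bump and jitter-buffer techniques cited above both equalize delay by releasing each packet at a fixed offset from a reference time rather than on arrival. A minimal sketch of that idea, assuming each packet carries an origin timestamp; the function name and the 500 µs buffer depth are illustrative assumptions, not taken from the cited documents:

```python
PLAYOUT_DELAY_NS = 500_000  # assumed buffer depth; real deployments differ

def playout_time(origin_timestamp_ns: int) -> int:
    # A jitter buffer releases each packet at a fixed offset from its
    # origin timestamp, so variable network delay is hidden downstream.
    return origin_timestamp_ns + PLAYOUT_DELAY_NS

# Two packets stamped at the same instant but delivered with different
# network delays are still released at the same playout deadline.
sent_at_ns = 1_000_000
arrival_fast_ns = sent_at_ns + 80_000   # 80 µs network delay
arrival_slow_ns = sent_at_ns + 240_000  # 240 µs network delay
deadline_ns = playout_time(sent_at_ns)  # identical for both packets
```

The buffer depth must exceed the worst-case network delay, otherwise a late packet arrives after its own deadline; that trade-off between added latency and delay coverage is the jitter buffer's central design choice.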