EP-4222937-B1 - A NETWORK DEVICE AND METHOD FOR SWITCHING, ROUTING AND/OR GATEWAYING DATA
Inventors
- GONZALEZ MARINO, Angela
- LI, MING
- FONS LLUIS, Francisco
Dates
- Publication Date
- 2026-05-06
- Application Date
- 2020-12-11
Claims (20)
- A network device (101) for switching, routing and/or gatewaying data among different subnetworks of a communication network (100), wherein the network device (101) comprises: a central processing unit (103); one or more data ingress ports (105a-c) and one or more data egress ports (107a-d) configured to exchange data with a further network device (131) of the communication network (100); and a plurality of hardware co-processors (109), wherein the plurality of hardware co-processors (109) comprises: one or more frame normalization co-processors (111a-c), one or more ingress queuing co-processors (113a-c), one or more filtering and policing co-processors (115a-c), one or more intermediate queuing co-processors (117a-c), at least one gatewaying co-processor (119), one or more egress queuing co-processors (121a-d) and at least one traffic shaping co-processor (123), wherein the central processing unit (103) is adapted to configure and control the one or more data ingress ports (105a-c), the one or more data egress ports (107a-d) and the plurality of hardware co-processors (109) to implement one or more data processing paths in parallel and/or in a pipeline between the one or more ingress ports (105a-c) and the one or more egress ports (107a-d), wherein each of the one or more frame normalization co-processors (111a-c) is configured to convert one or more ingress frames of a given network technology into one or more normalized data link layer frames, wherein a normalized data link layer frame is an open systems interconnection, OSI, layer 2 standard frame that is a network technology-independent and/or protocol-independent data link layer frame, wherein the normalized data link layer frame is structured as a stream of bits comprising a frame header, a frame payload and a frame trailer, wherein after generating the one or more normalized data link layer frames, the plurality of hardware co-processors are configured to process the normalized data link layer frames.
- The network device (101) of claim 1, wherein the one or more ingress ports (105a-c) and the one or more egress ports (107a-d) are based on heterogeneous network technologies including at least two of LIN, CAN 2.0, CAN-FD, CAN-XL, FlexRay, 10Base-T1S, 100Base-T1, 100Base-T, 1000Base-T1, 1000Base-T and/or 10GBase-T.
- The network device (101) of any one of the preceding claims, wherein each of the one or more frame normalization co-processors (111a-c) is configured to convert one or more ingress frames into a stream of bits organized in a set of information fields, each of these fields being of a particular length and codified with a particular meaning according to the given network technology and/or protocol to which the ingress frame belongs, including LIN, CAN 2.0, CAN-FD, CAN-XL, FlexRay, 10Base-T1S, 100Base-T1, 100Base-T, 1000Base-T1, 1000Base-T and/or 10GBase-T.
- The network device (101) of claim 3, wherein each of the one or more ingress queuing co-processors (113a-c) comprises a memory configured to buffer the one or more data link layer frames.
- The network device (101) of any one of claims 3 to 4, wherein each of the one or more filtering and policing co-processors (115a-c) is configured to parse the frame header and/or the frame payload of the one or more data link layer frames based on one or more matching rules and/or one or more regular expression searches, and to filter, police and classify the one or more data link layer frames based on the one or more matching rules and/or the one or more regular expression searches.
- The network device of claim 5, wherein each of the one or more filtering and policing co-processors (115a-c) is further configured to implement a security firewall and/or a network intrusion detection system applied on each of the one or more data link layer frames.
- The network device (101) of claim 5 or 6, wherein each of the one or more filtering and policing co-processors (115a-c) is configured to process the frame header and the payload of the one or more data link layer frames in a parallel mode and/or in a pipeline mode.
- The network device (101) of claims 5, 6 or 7, wherein each of the one or more intermediate queuing co-processors (117a-c) comprises a memory configured to buffer the one or more filtered data link layer frames.
- The network device (101) of claim 8, wherein the at least one gatewaying co-processor (119) is further configured to perform any related action linked to any given positive matching operation performed by the filtering and policing coprocessor (115a-c) on each data link layer frame, including an alert triggering operation, a frame forwarding operation, a frame routing operation, a frame cut-through switching operation, a frame replication operation, a frame elimination operation, a frame encryption/decryption operation, a frame compression/decompression operation, a frame encapsulation/decapsulation operation, a frame tunneling operation, and/or a frame aggregation operation, and/or wherein the at least one gatewaying co-processor (119) is configured to generate a new data link layer frame.
- The network device (101) of claim 9, wherein the at least one gatewaying co-processor (119) is configured to apply one or more gatewaying and/or routing and/or switching operations to the one or more filtered data link layer frames in a parallel mode and/or in a pipeline mode.
- The network device (101) of claim 9 or 10, wherein each of the one or more egress queuing co-processors (121a-d) comprises a memory configured to buffer the one or more data link layer frames provided by the at least one gatewaying co-processor (119).
- The network device (101) of any one of claims 3 to 11, wherein the at least one traffic shaping co-processor (123) is configured to control the provisioning of the one or more data link layer frames to the one or more egress ports (107a-d).
- The network device (101) of claim 12, wherein the at least one traffic shaping co-processor (123) is configured to perform any frame shaping related action on each data link layer frame per egress port, including a time aware shaping operation, a credit-based shaping operation, an asynchronous traffic shaping operation, a cyclic queueing shaping operation and/or a frame pre-emption operation.
- The network device (101) of claim 13, wherein the at least one traffic shaping co-processor (123) is configured to apply one or more traffic shaping operations to the one or more filtered data link layer frames in a parallel mode and/or in a pipeline mode.
- The network device (101) of any one of the preceding claims, wherein the network device (101) further comprises a communication bus and wherein the central processing unit (103) is configured to communicate with the plurality of hardware co-processors (109) via the communication bus, thereby implementing a control path through which instruction frames can be exchanged.
- The network device (101) of claim 15, wherein for each of the one or more data link layer frames the plurality of hardware co-processors (109) are configured to exchange an instruction frame via the communication bus, wherein the instruction frame comprises one or more commands for processing the respective data link layer frame and wherein the communication bus is configured to provide the respective instruction frame to the respective co-processor synchronously with the respective data link layer frame.
- The network device (101) of claim 15 or 16, wherein each data link layer frame that moves internally across the network device (101) is processed by the plurality of hardware co-processors (109) in the data plane while, in parallel and at the same time, its associated instruction frame is handled in the control plane, moving from one co-processor stage to the next synchronously with the movement, back and forth, of its related data link layer frame.
- The network device (101) of claim 17, wherein the control plane of the network device (101) is implemented by the central processing unit (103) and/or a finite state machine, FSM, and/or an arithmetic logic unit, ALU, implemented in hardware inside the one or more co-processors of the data plane, which act as distributed controllers of the control plane responsible for executing the instruction frame associated with each data link layer frame that moves across the data plane.
- The network device (101) of claims 15, 16, 17 or 18, wherein each data link layer frame moving through the data plane across the plurality of hardware co-processors has an associated instruction frame that moves through the control plane.
- The network device (101) of claims 15, 16, 17, 18 or 19, wherein each instruction frame comprises a header, a payload and a trailer, organized in a set of fields and commands to instruct the one or more co-processors of the data plane about one or more operations to be performed.
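The normalized data link layer frame of claim 1 and its control-plane companion instruction frame of claims 15 to 20 can be sketched as plain data structures. The Python sketch below is purely illustrative: the claims fix only the header/payload/trailer structure, the bit-stream representation and the lockstep movement of the two frames, while all concrete field contents and command names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class NormalizedFrame:
    """Technology-independent OSI layer 2 frame (claim 1).

    The claims require only a frame header, a frame payload and a
    frame trailer carried as a stream of bits; the byte contents
    below are illustrative.
    """
    header: bytes   # e.g. source/destination identifiers, frame type
    payload: bytes  # user data taken from the original ingress frame
    trailer: bytes  # e.g. a CRC over header + payload

    def to_bits(self) -> str:
        # Serialize the frame as the stream of bits named in claim 1.
        raw = self.header + self.payload + self.trailer
        return "".join(f"{b:08b}" for b in raw)

@dataclass
class InstructionFrame:
    """Control-plane companion of one data frame (claims 15-20).

    It moves from one co-processor stage to the next synchronously
    with its data frame; the command vocabulary here is hypothetical.
    """
    header: bytes
    commands: list = field(default_factory=list)  # e.g. ["FILTER", "FORWARD"]
    trailer: bytes = b""

# One data frame and its associated instruction frame travel together,
# the former through the data plane, the latter through the control plane.
data = NormalizedFrame(header=b"\x01\x02", payload=b"hello", trailer=b"\xff")
ctrl = InstructionFrame(header=b"\x01", commands=["FILTER", "FORWARD"])
pair = (data, ctrl)  # handled stage by stage in lockstep
```

The pairing in the last line mirrors claim 19: every data link layer frame moving through the data plane has exactly one associated instruction frame moving through the control plane.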
Description
TECHNICAL FIELD
In general, the present invention relates to communication networks. More specifically, the present invention relates to devices and methods for switching, routing and/or gatewaying data among different subnetworks of a communication network.
BACKGROUND
Network gatewaying is, by nature, a complex and demanding processing task, especially in the automotive field, where many heterogeneous in-vehicle network technologies and protocols coexist, for example, CAN 2.0, CAN FD, LIN, FlexRay, or Ethernet. Software-based approaches for network gatewaying, despite being the de facto option today, may not be the best choice to guarantee real-time performance, particularly in terms of latency, jitter, bandwidth and/or throughput for next-generation autonomous-driving vehicles. In light of the above, there is a need for an improved device and method for switching, routing and/or gatewaying data among different subnetworks of a communication network in an efficient manner.
US 2019/141133 A1 discloses a network gateway in a vehicle that connects heterogeneous networks and buses within the vehicle. The gateway implements hardware acceleration to accomplish protocol translation, e.g., between CAN, LIN, FlexRay, and Ethernet buses and networks. In particular, the gateway provides hardware-accelerated packet filtering, header lookup, and packet aggregation features.
US 2020/259765 A1 discloses a network forwarding IC with packet processing pipelines, at least one of which includes a parser, a set of match-action stages, and a deparser. The parser is configured to receive a packet and generate a packet header vector (PHV) including a first number of data containers storing data for the packet. A first match-action stage is configured to receive the PHV from the parser and expand the PHV to a second, larger number of data containers storing data for the packet.
Each of a set of intermediate match-action stages is configured to receive the expanded PHV from a previous stage and provide the expanded PHV to a subsequent stage. A final match-action stage is configured to receive the expanded PHV and reduce the PHV to the first number of data containers. The deparser is configured to receive the reduced PHV from the final match-action stage and reconstruct the packet.
SUMMARY
It is an object of the invention to provide an improved network device and method for switching, routing and/or gatewaying data among different subnetworks of a communication network in an efficient manner. The foregoing and other objects are achieved by the subject matter of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
Generally, embodiments of the invention relate to a novel concept of an efficient communication gatewaying method and device. That is, embodiments of the invention enable an efficient design and development of a complete building methodology via hardware/software co-design to synthesize a communication gateway as a functional product. Further, as an outcome of this methodology, the resultant physical gateway device is responsible for performing a full set of communication features, mainly the forwarding, encapsulation and related processing of protocol data units (PDUs) or data frames among different networks of a given networking infrastructure, while also fulfilling a further set of requirements related to reliability, functional safety and cyber security, with all the required algorithms embedded and performed inside the gateway device. Embodiments of the invention are suitable and deployable across many industries and use cases: from generic information and communication technology or enterprise networks to smart manufacturing networks, Internet of Things (IoT) networks or even automotive in-vehicle networks.
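The parser / match-action / deparser pipeline of US 2020/259765 A1 described in the background can be summarised in a few lines. The Python sketch below is a rough illustration only: the container count, the splitting policy and the packet contents are invented for the example, and the match-action stages are reduced to the expand/reduce behaviour the citation describes.

```python
def parse(packet: bytes, n: int) -> list:
    """Split a packet into n data containers (the PHV); the equal-size
    splitting policy is illustrative, not taken from the citation."""
    size = max(1, -(-len(packet) // n))  # ceiling division
    phv = [packet[i * size:(i + 1) * size] for i in range(n)]
    return phv[:n] + [b""] * (n - len(phv))

def expand(phv: list, m: int) -> list:
    """First match-action stage: grow the PHV to m > len(phv) containers."""
    return phv + [b""] * (m - len(phv))

def reduce_phv(phv: list, n: int) -> list:
    """Final match-action stage: shrink back to the first number n."""
    return phv[:n]

def deparse(phv: list) -> bytes:
    """Deparser: reconstruct the packet from the reduced PHV."""
    return b"".join(phv)

pkt = b"\xde\xad\xbe\xef\x01\x02"
phv = parse(pkt, 3)        # first number of data containers
phv = expand(phv, 5)       # expanded PHV for the intermediate stages
phv = reduce_phv(phv, 3)   # reduced before the deparser
restored = deparse(phv)    # equals the original packet
```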
More specifically, according to a first aspect the invention relates to a network device for switching, routing and/or gatewaying data among different subnetworks of a communication network, wherein the network device comprises: a central processing unit; one or more data ingress ports and one or more data egress ports; and a plurality of hardware co-processors. The one or more data ingress ports and the one or more data egress ports are configured to exchange data with a further network device of the communication network, and the plurality of hardware co-processors comprises one or more frame normalization co-processors, one or more ingress queuing co-processors, one or more filtering and policing co-processors, one or more intermediate queuing co-processors, at least one gatewaying co-processor, one or more egress queuing co-processors and at least one traffic shaping co-processor, wherein the central processing unit is adapted to configure and control the one or more data ingress ports, the one or more data egress ports and the plurality of hardware co-processors to implement one or more data processing paths in parallel and/or in a pipeline between the one or more ingress ports and the one or more egress ports.
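The chain of co-processor stages named in the first aspect can be illustrated as a simple software pipeline. The sketch below is purely schematic Python: the stage functions are pass-through placeholders standing in for the claimed hardware co-processors, and the three queues model the ingress, intermediate and egress queuing co-processors; only the stage ordering is taken from the claims.

```python
from collections import deque

def frame_normalize(frame):
    # Frame normalization co-processor: wrap the ingress frame as a
    # normalized layer 2 frame (tagging scheme invented for the sketch).
    return ("L2", frame)

def filter_and_police(frame):
    # Filtering and policing co-processor: matching rules / regex
    # searches would run here; placeholder passes the frame through.
    return frame

def gateway(frame):
    # Gatewaying co-processor: switching/routing/gatewaying operations.
    return frame

def traffic_shape(frame):
    # Traffic shaping co-processor: controls provisioning to the egress port.
    return frame

def data_path(ingress_frames):
    """One possible data processing path in the order given by claim 1."""
    ingress_q, intermediate_q, egress_q = deque(), deque(), deque()
    for f in ingress_frames:
        ingress_q.append(frame_normalize(f))
    while ingress_q:
        intermediate_q.append(filter_and_police(ingress_q.popleft()))
    while intermediate_q:
        egress_q.append(gateway(intermediate_q.popleft()))
    return [traffic_shape(f) for f in egress_q]

out = data_path([b"canfd-frame", b"lin-frame"])
```

In the claimed device these stages run in parallel and/or as a pipeline in hardware; the sequential loop above only fixes the stage order for readability.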