
JP-7856675-B2 - Model architecture search and hardware optimization


Inventors

  • Tao Yu
  • Cristobal Alessandri
  • Frank Yaul
  • Wenjie Lu
  • Shyam Chandrasekhar Nambiar

Assignees

  • Analog Devices, Inc.

Dates

Publication Date
2026-05-11
Application Date
2022-05-09
Priority Date
2022-04-29

Claims (17)

  1. A method, comprising: receiving, by a computer-implemented system, information about hardware resource constraints associated with a pool of processing units, wherein a plurality of processing units in the pool perform one or more arithmetic operations and one or more signal selections; receiving, by the computer-implemented system, a dataset associated with a data transformation operation, the data transformation operation being associated with digital pre-distortion (DPD) for pre-distorting an input signal to a nonlinear electronic component; training a parameterized power amplifier (PA) model associated with the data transformation operation based on the dataset and the information about the hardware resource constraints associated with the pool of processing units, wherein the training includes updating at least one parameter of the parameterized PA model associated with configuring at least a subset of the plurality of processing units in the pool; and outputting, based on the training, one or more DPD operating configurations for at least the subset of the plurality of processing units in the pool.
  2. The method of claim 1, further comprising generating the parameterized PA model, wherein the generating comprises generating a mapping between each of the plurality of processing units in the pool and one of a plurality of differentiable function blocks.
  3. The method of claim 1, wherein the data transformation operation includes at least a sequence of a first data transformation and a second data transformation, and wherein the training comprises calculating a first parameter associated with the first data transformation and a second parameter associated with the second data transformation.
  4. The method of claim 3, wherein the calculating of the first parameter associated with the first data transformation and the second parameter associated with the second data transformation is further based on backpropagation of the distortion and a loss function.
  5. The method of claim 3, wherein the first data transformation or the second data transformation in the sequence is associated with executable instruction code.
  6. The method of claim 3, wherein the first data transformation in the sequence includes selecting memory terms from the input signal based on the first parameter, wherein the second data transformation in the sequence includes generating, based on the second parameter, features associated with nonlinear characteristics of the nonlinear electronic component using a set of basis functions and the selected memory terms, and wherein the sequence associated with the data transformation operation further includes a third data transformation that includes generating a pre-distorted signal based on the features.
  7. The method of claim 3, wherein the first data transformation in the sequence includes selecting memory terms, based on the first parameter, from the input signal or a feedback signal indicative of an output of the nonlinear electronic component, wherein the second data transformation in the sequence includes generating, based on the second parameter, features associated with nonlinear characteristics of the nonlinear electronic component using a set of basis functions and the selected memory terms, and wherein the sequence associated with the data transformation operation further includes a third data transformation that includes updating DPD coefficients based on the features and a second signal.
  8. The method of claim 7, wherein training the parameterized PA model further comprises performing backpropagation of a distortion error to update the second parameter and generate the set of basis functions.
  9. The method of claim 7, wherein outputting the one or more configurations comprises outputting one or more configurations further indicating at least one of a lookup table (LUT) configuration associated with the selection of the memory terms or the set of basis functions.
  10. A computer-implemented system, comprising: a memory storing instructions; and one or more computer processors, wherein the instructions, when executed by the one or more computer processors, cause the one or more computer processors to perform operations comprising: receiving information about hardware resource constraints associated with a pool of processing units, wherein a plurality of processing units in the pool perform one or more arithmetic operations and one or more signal selections; receiving a dataset associated with a data transformation, wherein the data transformation is associated with at least one of a digital pre-distortion (DPD) operation or a DPD adaptation for pre-distorting an input signal to a nonlinear electronic component; training a parameterized power amplifier (PA) model associated with the data transformation based on the dataset and the information about the hardware resource constraints associated with the pool of processing units, wherein the training of the parameterized PA model includes updating at least one parameter of the parameterized PA model associated with configuring at least a subset of the plurality of processing units in the pool; and outputting, based on the training, at least one of a DPD operating configuration or a DPD adaptation configuration for at least the subset of the plurality of processing units in the pool.
  11. The computer-implemented system of claim 10, wherein the operations further comprise generating the parameterized PA model by generating a mapping between each of the plurality of processing units in the pool and one of a plurality of differentiable function blocks.
  12. The computer-implemented system of claim 10, wherein the data transformation includes at least a sequence of a first data transformation and a second data transformation, and wherein training the parameterized PA model comprises calculating a first parameter associated with the first data transformation and a second parameter associated with the second data transformation based on backpropagation of the distortion and a loss function.
  13. The computer-implemented system of claim 12, wherein the subset of the processing units comprises one or more digital hardware blocks associated with the first data transformation and one or more processors for executing instruction code associated with the second data transformation.
  14. A non-transitory computer-readable storage medium storing instructions that, when executed by one or more computer processors, cause the one or more computer processors to perform operations comprising: receiving information about hardware resource constraints associated with a pool of processing units, wherein a plurality of processing units in the pool perform one or more arithmetic operations and one or more signal selections; generating a mapping between each of the plurality of processing units in the pool and one of a plurality of differentiable function blocks; receiving a dataset associated with a data transformation, wherein the data transformation is associated with digital pre-distortion (DPD) for pre-distorting an input signal to a nonlinear electronic component; training a parameterized power amplifier (PA) model to configure at least a subset of the plurality of processing units in the pool to perform the data transformation, wherein the training is based on the dataset, the information about the hardware resource constraints associated with the pool of processing units, and the mapping, and includes updating at least one parameter of the parameterized PA model associated with configuring at least the subset of the plurality of processing units in the pool; and outputting, based on the training, one or more DPD operating configurations for at least the subset of the plurality of processing units in the pool.
  15. The non-transitory computer-readable storage medium of claim 14, wherein the data transformation includes at least a sequence of a first data transformation and a second data transformation, and wherein the training further comprises updating a first parameter associated with the first data transformation and a second parameter associated with the second data transformation based on backpropagation of the distortion and a loss function.
  16. The non-transitory computer-readable storage medium of claim 15, wherein the first data transformation in the sequence includes selecting memory terms from the input signal based on the first parameter, wherein the second data transformation in the sequence includes generating, based on the second parameter, features associated with nonlinear characteristics of the nonlinear electronic component using a set of basis functions and the selected memory terms, and wherein the sequence further comprises a third data transformation that includes generating a pre-distorted signal based on the features.
  17. The non-transitory computer-readable storage medium of claim 15, wherein the first data transformation in the sequence includes selecting memory terms, based on the first parameter, from the input signal or a feedback signal indicative of an output of the nonlinear electronic component, wherein the second data transformation in the sequence includes generating, based on the second parameter, features associated with nonlinear characteristics of the nonlinear electronic component using a set of basis functions and the selected memory terms, wherein the sequence further includes a third data transformation that includes updating DPD coefficients based on the features and a second signal, wherein the first data transformation and the second data transformation are performed by the subset of the processing units, and wherein the third data transformation is performed by executing instruction code on at least one other processing unit in the pool.
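
The claims above recite training a parameterized PA model, built from differentiable function blocks mapped to hardware processing units, by backpropagation under hardware resource constraints (claims 1 through 4 and 10 through 15). The following is a minimal, purely illustrative sketch of that general idea and not the patented implementation; the block names, tap count, penalty weight, and stand-in dataset are all hypothetical assumptions made only for this example.

```python
# Illustrative sketch only (not the patented implementation): a parameterized
# model expressed as a sequence of differentiable blocks, trained with
# backpropagation while a penalty term stands in for a hardware resource budget.
import torch
import torch.nn as nn

class MemorySelect(nn.Module):
    """First transformation: softly select delayed copies (memory terms) of the input."""
    def __init__(self, num_taps: int):
        super().__init__()
        self.gates = nn.Parameter(torch.zeros(num_taps))  # learnable selection logits

    def forward(self, x):                        # x: (batch, time) stand-in signal
        taps = [torch.roll(x, shifts=d, dims=1) for d in range(self.gates.numel())]
        taps = torch.stack(taps, dim=-1)          # (batch, time, num_taps)
        return taps * torch.sigmoid(self.gates)   # gate each delay line

class BasisFeatures(nn.Module):
    """Second transformation: polynomial basis functions of the selected memory terms."""
    def __init__(self, num_taps: int, order: int):
        super().__init__()
        self.order = order
        self.weights = nn.Parameter(torch.randn(num_taps * order) * 0.01)

    def forward(self, taps):
        feats = torch.cat([taps ** k for k in range(1, self.order + 1)], dim=-1)
        return feats @ self.weights               # predicted output sample

model = nn.Sequential(MemorySelect(num_taps=8), BasisFeatures(num_taps=8, order=3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.rand(16, 256), torch.rand(16, 256)   # stand-in dataset (input / PA output)

for _ in range(100):
    opt.zero_grad()
    pred = model(x)
    resource_penalty = torch.sigmoid(model[0].gates).sum()  # proxy for "taps used"
    loss = nn.functional.mse_loss(pred, y) + 1e-2 * resource_penalty
    loss.backward()
    opt.step()
```

In this sketch the sigmoid gates play the role of the first transformation (memory-term selection) and the polynomial weights the role of the second (basis-function features); the penalty term is a crude stand-in for the hardware resource constraint, and the learned parameters correspond to the configurations that the claims output for the processing units.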

Description

Cross-Reference to Related Applications

This application claims priority to and the benefit of U.S. Provisional Patent Application No. 63/187,536, titled "DIGITAL PREDISTORTION FOR POWER AMPLIFIER LINEARIZATION USING NEURAL NETWORKS," filed on 12 May 2021, and U.S. Non-Provisional Patent Application No. 17/732,809, titled "MODEL ARCHITECTURE SEARCH AND OPTIMIZATION FOR HARDWARE," filed on 29 April 2022, both of which are hereby incorporated by reference in their entirety, as if fully reproduced below, and for all applicable purposes.

This disclosure relates generally to electronics and, more specifically, to the use of model architecture search techniques (e.g., neural architecture search (NAS)) to construct hardware blocks (e.g., digital pre-distortion (DPD) hardware for linearizing power amplifiers).

Radio frequency (RF) systems transmit and receive signals in the form of electromagnetic waves in the RF range of approximately 3 kilohertz (kHz) to 300 gigahertz (GHz). RF systems are commonly used in wireless communication, with cellular/wireless mobile technology being a prominent example, but they are also used in cable communications such as cable television. In both types of systems, the linearity of the various components plays a crucial role. The linearity of an RF component or system, such as an RF transceiver, is simple to describe in theory: it refers to the ability of the component or system to provide an output signal that is directly proportional to the input signal, so that a perfectly linear component or system produces an output that is a straight-line function of its input. Achieving this behavior in real components and systems is far more difficult, and often requires overcoming many challenges to linearity, frequently at the expense of other performance parameters such as efficiency and/or output power. Power amplifiers (PAs), which are fabricated from inherently nonlinear semiconductor materials and are required to operate at relatively high power levels, are typically the first components analyzed when an RF system is designed with linearity in mind. Nonlinear distortion at the PA output can reduce modulation accuracy (e.g., degrade error vector magnitude (EVM)) and/or cause out-of-band emissions. Therefore, both wireless RF systems (e.g., Long-Term Evolution (LTE) and millimeter-wave or fifth-generation (5G) systems) and cable RF systems have stringent specifications on PA linearity.
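
As a toy numerical illustration of the background above (not taken from this disclosure), consider a memoryless PA with third-order compression and a simple approximate inverse applied ahead of it; the gain and compression coefficients below are made up solely for illustration.

```python
# Toy, hedged illustration of why pre-distortion helps: a memoryless PA with
# third-order compression, and an approximate inverse applied before it.
# Coefficients are made up; real PAs also exhibit memory effects.
import numpy as np

def pa(x):
    """Toy memoryless PA: small-signal gain of 10 with third-order compression."""
    return 10.0 * x - 2.0 * x**3

def predistort(x):
    """First-order inverse of the compression term (valid for small |x|)."""
    return x + (2.0 / 10.0) * x**3

x = np.linspace(0.0, 0.5, 6)
print("ideal      :", np.round(10.0 * x, 3))
print("PA only    :", np.round(pa(x), 3))
print("DPD -> PA  :", np.round(pa(predistort(x)), 3))  # closer to the ideal line
```

Cascading the inverse with the PA pushes the residual error from third order down to fifth order, which is the basic mechanism DPD exploits; practical designs must also handle memory effects, which is why the claims above select memory terms from delayed input samples.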
According to several embodiments, this disclosure provides:

  • schematic block diagrams of exemplary radio frequency (RF) transceivers in which parameterized model-based digital pre-distortion (DPD) may be implemented;
  • schematic block diagrams of exemplary indirect learning architecture-based DPD in which parameterized model-based configurations can be implemented;
  • schematic block diagrams of exemplary direct learning architecture-based DPD in which parameterized model-based configurations can be implemented;
  • illustrative diagrams of schemes for offline training and for online adaptation and operation of indirect learning architecture-based DPD;
  • illustrative diagrams of schemes for offline training and for online adaptation and operation of direct learning architecture-based DPD;
  • illustrative diagrams of exemplary implementations of lookup table (LUT)-based DPD actuator circuits (several figures);
  • illustrative diagrams of exemplary software models derived from hardware designs having one-to-one functional mappings;
  • illustrative diagrams of exemplary methods for training parameterized models for DPD operation;
  • schematic diagrams illustrating exemplary parameterized models that model DPD behavior as a sequence of differentiable function blocks;
  • a flowchart illustrating exemplary methods for training parameterized models for DPD operation; and
  • a flowchart illustrating exemplary methods for performing DPD operations for online operation and adaptation.
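
The LUT-based DPD actuator figures referenced above follow a well-known general pattern: delayed copies of the input are each scaled by a gain read from a lookup table addressed by signal magnitude, and the scaled terms are summed. The sketch below illustrates only that general pattern; the table size, delays, contents, and real-valued toy signal are hypothetical assumptions and are not the circuits described in this disclosure.

```python
# Hedged sketch of a generic LUT-based DPD actuator: y[n] = sum_k LUT_k(|x[n-d_k]|) * x[n-d_k].
# Table contents, sizes, and delays are placeholders, not the patented design.
import numpy as np

rng = np.random.default_rng(0)
LUT_SIZE, DELAYS = 32, [0, 1, 2]                  # hypothetical configuration
luts = rng.standard_normal((len(DELAYS), LUT_SIZE)) * 0.01 + 1.0  # complex-valued in practice

def dpd_actuator(x):
    """Apply the generic LUT actuator equation to a real-valued toy signal."""
    y = np.zeros_like(x)
    for k, d in enumerate(DELAYS):
        xd = np.roll(x, d)                        # delayed copy of the input
        idx = np.clip((np.abs(xd) * LUT_SIZE).astype(int), 0, LUT_SIZE - 1)
        y += luts[k, idx] * xd                    # table lookup indexed by magnitude
    return y

x = rng.uniform(-1, 1, 1024)
print(dpd_actuator(x)[:5])
```

Claim 9 above contemplates outputting such LUT configurations, associated with the selected memory terms or the set of basis functions, as part of the result of training the parameterized PA model.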