US-20260128740-A1 - GATE DRIVER SYSTEMS AND RELATED METHODS
Abstract
Implementations of a system configured for operation of a field effect transistor may include a gate driver coupled with a memory, a microcontroller unit, and a plurality of analog-to-digital converters, the gate driver configured to be coupled with a gate of a field effect transistor, where the gate driver may be configured to generate a drive signal with at least two levels for the gate of the field effect transistor. The drive signal with at least two levels may be generated using a deep reinforcement learning agent and data associated with one or more parameters of the field effect transistor.
Inventors
- Vijay B. Rentala
- Steven Gray
- Scott Allen
Assignees
- SEMICONDUCTOR COMPONENTS INDUSTRIES, LLC
Dates
- Publication Date: 2026-05-07
- Application Date: 2025-10-31
Claims (20)
- 1. A system configured for operation of a field effect transistor, the system comprising: a gate driver coupled with a memory and a microcontroller unit and a plurality of analog-to-digital converters, the gate driver configured to be coupled with a gate of a field effect transistor; wherein the gate driver is configured to generate a drive signal with at least two levels for the gate of the field effect transistor, the drive signal with at least two levels generated using a deep reinforcement learning agent and data associated with one or more parameters of the field effect transistor.
- 2. The system of claim 1, wherein the gate driver is configured to be coupled with a telecommunication network and wherein the telecommunication network is configured to be operatively coupled with a cloud computing system which comprises the deep reinforcement learning agent and uses the deep reinforcement learning agent and the data associated with the one or more parameters of the field effect transistor to train the deep reinforcement learning agent and then transmit the deep reinforcement learning agent across the telecommunication network to the gate driver for storing in the memory.
- 3. The system of claim 1, wherein the deep reinforcement learning agent is comprised in the memory and the gate driver is configured to use the deep reinforcement learning agent and the data associated with one or more parameters of the field effect transistor to generate the drive signal.
- 4. The system of claim 1, wherein the deep reinforcement learning agent is comprised in the memory and is configured to communicate with a cloud computing system over a telecommunication network coupled with the gate driver where the cloud computing system is configured to use the data associated with one or more parameters of the field effect transistor to train the deep reinforcement learning agent to generate the drive signal with at least two levels associated with an operating area for the field effect transistor.
- 5. The system of claim 1, wherein the deep reinforcement learning agent is one of a deep Q-network, a double deep Q-network, an Actor-Critic agent, a policy gradient agent, a Monte Carlo tree search agent, an imitation learning agent, or any combination thereof.
- 6. The system of claim 1, wherein the deep reinforcement learning agent is trained using a deep neural network and a Markov decision process.
- 7. A system configured for operation of a field effect transistor, the system comprising: a first gate driver and a second gate driver, the first gate driver and the second gate driver each coupled with a memory and a corresponding plurality of analog-to-digital converters, the first gate driver configured to be coupled with a gate of a first field effect transistor and the second gate driver configured to be coupled with a gate of a second field effect transistor; wherein the first gate driver is configured to generate a first drive signal with at least two levels for the gate of the first field effect transistor, the first drive signal with at least two levels generated using a first deep reinforcement learning agent and data associated with one or more parameters of the first field effect transistor; and wherein the second gate driver is configured to generate a second drive signal with at least two levels for the gate of the second field effect transistor, the second drive signal with at least two levels generated using a second deep reinforcement learning agent and data associated with one or more parameters of the second field effect transistor.
- 8. The system of claim 7, wherein the first deep reinforcement learning agent and the second deep reinforcement learning agent are the same deep reinforcement learning agent.
- 9. The system of claim 7, wherein each of the first gate driver and the second gate driver is configured to be coupled with a telecommunication network and wherein the telecommunication network is configured to be operatively coupled with a cloud computing system which comprises the first deep reinforcement learning agent and the second deep reinforcement learning agent and wherein the cloud computing system: uses the first deep reinforcement learning agent and the data associated with the one or more parameters of the first field effect transistor to train the first deep reinforcement learning agent and then transmits the first deep reinforcement learning agent across the telecommunication network to the first gate driver for storing in the memory; and uses the second deep reinforcement learning agent and the data associated with the one or more parameters of the second field effect transistor to train the second deep reinforcement learning agent and then transmits the second deep reinforcement learning agent across the telecommunication network to the second gate driver for storing in the memory.
- 10. The system of claim 7, wherein: the first deep reinforcement learning agent is comprised in the memory and the first gate driver is configured to use the first deep reinforcement learning agent and the data associated with one or more parameters of the first field effect transistor to generate the first drive signal; and the second deep reinforcement learning agent is comprised in the memory and the second gate driver is configured to use the second deep reinforcement learning agent and the data associated with one or more parameters of the second field effect transistor to generate the second drive signal.
- 11. The system of claim 7, wherein: the first deep reinforcement learning agent is comprised in the memory and is configured to communicate with a cloud computing system over a telecommunication network coupled with the first gate driver where the cloud computing system is configured to use the data associated with one or more parameters of the first field effect transistor to train the first deep reinforcement learning agent to generate the first drive signal with at least two levels associated with an operating area for the first field effect transistor; and the second deep reinforcement learning agent is comprised in the memory and is configured to communicate with a cloud computing system over a telecommunication network coupled with the second gate driver where the cloud computing system is configured to use the data associated with one or more parameters of the second field effect transistor to train the second deep reinforcement learning agent to generate the second drive signal with at least two levels associated with an operating area for the second field effect transistor.
- 12. The system of claim 7, wherein the first deep reinforcement learning agent or the second deep reinforcement learning agent is one of a deep Q-network, a double deep Q-network, an Actor-Critic agent, a policy gradient agent, a Monte Carlo tree search agent, an imitation learning agent, or any combination thereof.
- 13. The system of claim 7, wherein the first deep reinforcement learning agent or the second deep reinforcement learning agent is trained using a deep neural network and a Markov decision process.
- 14. A method of training a deep reinforcement learning agent used during operation of a field effect transistor, the method comprising: providing a gate driver coupled with a memory, a microcontroller unit, and a plurality of analog-to-digital converters, the gate driver coupled with a gate of a field effect transistor; transmitting data from the plurality of analog-to-digital converters associated with one or more parameters of the field effect transistor to a cloud computing system; using the data, training a deep reinforcement learning agent; transmitting the deep reinforcement learning agent to the memory; and using the deep reinforcement learning agent, generating a drive signal with at least two levels for the gate of the field effect transistor.
- 15. The method of claim 14, wherein training the deep reinforcement learning agent includes training at least partially on the gate driver itself using the microcontroller unit.
- 16. The method of claim 14, wherein training the deep reinforcement learning agent includes training only with the cloud computing system.
- 17. The method of claim 14, wherein training the deep reinforcement learning agent includes training only on the gate driver itself using the microcontroller unit.
- 18. The method of claim 14, wherein the deep reinforcement learning agent is one of a deep Q-network, a double deep Q-network, an Actor-Critic agent, a policy gradient agent, a Monte Carlo tree search agent, an imitation learning agent, or any combination thereof.
- 19. The method of claim 14, wherein training the deep reinforcement learning agent further comprises training using a deep neural network and a Markov decision process.
- 20. The method of claim 14, wherein training the deep reinforcement learning agent defines an operating area for the field effect transistor.
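The training method recited in claims 14-20 pairs a Markov decision process with a learned value function. As a rough illustration only, and not the claimed implementation, the sketch below uses a tabular Q-learning update as a simplified stand-in for the deep Q-network named in claim 18; the states, drive levels, rewards, and transition dynamics are all hypothetical toy values chosen for the example.

```python
import random

# Toy MDP (hypothetical): states are coarse FET operating conditions derived
# from ADC readings; actions are candidate gate drive levels.
STATES = ["cool", "hot"]
ACTIONS = [0, 1]  # 0 = reduced drive level, 1 = full drive level

def reward(state, action):
    # Hypothetical objective: full drive is rewarded when the device is cool,
    # but penalized when hot (standing in for overshoot or EMI near the edge
    # of the operating area).
    if state == "cool":
        return 1.0 if action == 1 else 0.2
    return 1.0 if action == 0 else -0.5

def transition(state, action):
    # Toy dynamics: full drive heats the device; reduced drive cools it.
    return "hot" if action == 1 else "cool"

def train(steps=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy MDP."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = "cool"
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.choice(ACTIONS)          # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(state, x)])  # exploit
        r, nxt = reward(state, a), transition(state, a)
        # One-step Q-learning update toward the bootstrapped target.
        q[(state, a)] += alpha * (r + gamma * max(q[(nxt, b)] for b in ACTIONS) - q[(state, a)])
        state = nxt
    return q

q_table = train()
# Greedy policy extracted from the learned table: one drive level per state.
policy = {s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in STATES}
```

In the claimed system, training of this kind could run in the cloud computing system on parameter data streamed from the analog-to-digital converters, with the resulting agent transmitted back to the gate driver's memory; on-driver training via the microcontroller unit (claims 15 and 17) would follow the same update rule under tighter resource constraints.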
Description
CROSS REFERENCE TO RELATED APPLICATIONS
This document claims the benefit of the filing date of U.S. Provisional Patent Application 63/715,484, entitled “Gate Driver Systems and Related Methods” to Vijay B. Rentala, which was filed on Nov. 1, 2024, the disclosure of which is hereby incorporated entirely herein by reference.
BACKGROUND
1. Technical Field
Aspects of this document relate generally to semiconductor devices used to control gates of various other semiconductor devices. Particular implementations also include gate driver systems for silicon carbide semiconductor devices.
2. Background
Various semiconductor devices have been devised that work by controlling flow of electricity. A wide variety of systems that include such semiconductor devices have been developed to allow integration of semiconductor devices with electrical equipment. Control systems utilize these semiconductor devices as part of a process of directing the operation of the electrical equipment.
SUMMARY
Implementations of a system configured for operation of a field effect transistor may include a gate driver coupled with a memory and a microcontroller unit and a plurality of analog-to-digital converters, the gate driver configured to be coupled with a gate of a field effect transistor where the gate driver may be configured to generate a drive signal with at least two levels for the gate of the field effect transistor. The drive signal with at least two levels may be generated using a deep reinforcement learning agent and data associated with one or more parameters of the field effect transistor.
Implementations of a system configured for operation of a field effect transistor may include one, all, or any of the following:
The gate driver may be configured to be coupled with a telecommunication network and the telecommunication network may be configured to be operatively coupled with a cloud computing system which may include the deep reinforcement learning agent.
The cloud computing system may use the deep reinforcement learning agent and the data associated with the one or more parameters of the field effect transistor to train the deep reinforcement learning agent and then transmit the deep reinforcement learning agent across the telecommunication network to the gate driver for storing in the memory.
The deep reinforcement learning agent may be included in the memory and the gate driver may be configured to use the deep reinforcement learning agent and the data associated with one or more parameters of the field effect transistor to generate the drive signal.
The deep reinforcement learning agent may be included in the memory and may be configured to communicate with a cloud computing system over a telecommunication network coupled with the gate driver where the cloud computing system may be configured to use the data associated with one or more parameters of the field effect transistor to train the deep reinforcement learning agent to generate the drive signal with at least two levels associated with an operating area for the field effect transistor.
The deep reinforcement learning agent may be one of a deep Q-network, a double deep Q-network, an Actor-Critic agent, a policy gradient agent, a Monte Carlo tree search agent, an imitation learning agent, or any combination thereof.
The deep reinforcement learning agent may be trained using a deep neural network and a Markov decision process.
Implementations of a system configured for operation of a field effect transistor may include a first gate driver and a second gate driver, the first gate driver and the second gate driver each coupled with a memory and a corresponding plurality of analog-to-digital converters.
The first gate driver may be configured to be coupled with a gate of a first field effect transistor and the second gate driver configured to be coupled with a gate of a second field effect transistor, where the first gate driver may be configured to generate a first drive signal with at least two levels for the gate of the first field effect transistor. The first drive signal with at least two levels may be generated using a first deep reinforcement learning agent and data associated with one or more parameters of the first field effect transistor.
The second gate driver may be configured to generate a second drive signal with at least two levels for the gate of the second field effect transistor, where the second drive signal with at least two levels may be generated using a second deep reinforcement learning agent and data associated with one or more parameters of the second field effect transistor.
Implementations of a system configured for operation of a field effect transistor may include one, all, or any of the following:
The first deep reinforcement learning agent and the second deep reinforcement learning agent may be the same deep reinforcement learning agent.
Each of the first gate driver and the second gate driver may be configured to be coupled with a telecommunication
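On the gate-driver side, generating a "drive signal with at least two levels" amounts to selecting among discrete gate voltage levels from digitized parameter readings. The sketch below is a minimal illustration assuming a policy table already trained and stored in the gate driver's memory; the drive levels, parameter names, and quantization steps are all hypothetical values, not figures from this document.

```python
# Hypothetical voltage levels a multilevel gate driver might switch between.
DRIVE_LEVELS_V = [0.0, 10.0, 15.0]

def quantize(reading, step):
    """Discretize an ADC reading into a coarse state index."""
    return int(reading // step)

def select_drive_level(policy, drain_current_a, junction_temp_c):
    """Map digitized FET parameters to a gate drive level via a stored
    policy; fall back to the lowest level for states never seen during
    training (a conservative default chosen for this sketch)."""
    state = (quantize(drain_current_a, 5.0), quantize(junction_temp_c, 25.0))
    return DRIVE_LEVELS_V[policy.get(state, 0)]

# Toy stored policy: full drive when cool, an intermediate level when the
# device is both heavily loaded and hot.
policy = {(0, 0): 2, (1, 0): 2, (1, 2): 1}
```

With these toy values, a reading of 7.5 A at 60 °C quantizes to state (1, 2) and yields the 10 V intermediate level, while an unseen state falls back to 0 V.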