US-12621388-B2 - Device, system and method for providing machine learning prompts on a call at a contact center server
Abstract
A device, system and method for providing machine learning prompts on a call at a contact center server are provided. A contact center (CC) server receives a call. The CC server receives, on the call, an indication of a queue, of a plurality of queues maintained by the CC server, into which to place the call. The CC server places the call into the queue indicated by the indication, the call placed into the queue in a hold state. A machine learning engine generates, based on historical data associated with the queue, one or more prompts for the call. The CC server provides the one or more prompts on the call during the hold state.
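The call-handling flow summarized in the abstract can be sketched in code. The following is an illustrative sketch only and is not part of the patent text; every class, method, and field name (`ContactCenterServer`, `MachineLearningEngine`, `receive_call`, and so on) is a hypothetical assumption, and the prompt-generation logic is a trivial stand-in for the claimed machine learning engine.

```python
# Hypothetical sketch of the abstract's flow: a contact center (CC) server
# receives a call, places it into the indicated queue in a hold state, and
# provides machine-learning-generated prompts on the call while it is held.
# All names here are illustrative assumptions, not taken from the patent.

from collections import deque


class MachineLearningEngine:
    """Stand-in for the ML engine; derives prompts from historical data."""

    def generate_prompts(self, historical_data):
        # A real engine (e.g., a generative AI model) would estimate likely
        # reasons for the call; here we simply echo past call reasons.
        top_reasons = sorted(set(historical_data))[:3]
        return [f"Are you calling about: {reason}?" for reason in top_reasons]


class ContactCenterServer:
    def __init__(self, engine):
        self.engine = engine
        self.queues = {}   # queue name -> deque of held calls
        self.history = {}  # queue name -> historical call reasons

    def receive_call(self, call_id, queue_name):
        """Place the call into the indicated queue in a hold state and
        return prompts generated from that queue's historical data."""
        queue = self.queues.setdefault(queue_name, deque())
        queue.append({"id": call_id, "state": "hold"})
        return self.engine.generate_prompts(self.history.get(queue_name, []))


server = ContactCenterServer(MachineLearningEngine())
server.history["billing"] = ["refund", "invoice error", "refund"]
prompts = server.receive_call("call-1", "billing")
# prompts == ['Are you calling about: invoice error?',
#             'Are you calling about: refund?']
```

In the claimed system, the stand-in `generate_prompts` would correspond to, for example, a generative artificial intelligence engine (claim 2) operating on historical data associated with the queue.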
Inventors
- Jonathan Braganza
- Logendra Naidoo
Assignees
- MITEL NETWORKS CORPORATION
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2023-09-13
Claims (18)
- 1 . A method comprising: receiving, via a contact center (CC) server, a call; receiving, via the CC server, on the call, an indication of a queue, of a plurality of queues maintained by the CC server, into which to place the call; placing, via the CC server, the call into the queue indicated by the indication, the call placed into the queue in a hold state; generating, via a machine learning engine, based on historical data associated with the queue, one or more prompts for the call; and providing, via the CC server, the one or more prompts on the call during the hold state, wherein generating one or more prompts for the call comprises: generating, via the machine learning engine, an initial prompt, of the one or more prompts, that includes one or more estimated reasons for the call based on the historical data associated with the queue; receiving, on the call, a selection of an estimated reason, of the one or more estimated reasons; and generating, via the machine learning engine, the one or more prompts that follow the initial prompt, based on the selection of the estimated reason.
- 2 . The method of claim 1 , wherein the machine learning engine comprises a generative artificial intelligence engine.
- 3 . The method of claim 1 , wherein the historical data comprises one or more of: historical caller data associated with previous calls associated with a category of the queue; respective historical data associated with a caller on the call; news data associated with the category of the queue; and social media data associated with the category of the queue.
- 4 . The method of claim 1 , wherein the historical data associated with the queue comprises caller data of a given number of previous calls that preceded the call.
- 5 . The method of claim 1 , wherein the one or more prompts for the call are further based on respective historical data associated with a caller on the call.
- 6 . The method of claim 1 , further comprising: generating, via the machine learning engine, a final prompt, of the one or more prompts, that includes one or more of: an indication that the call is to be transferred to a human-operated terminal; an estimated time until a transfer to the human-operated terminal; and a request for input from a calling device that made the call to indicate whether the call was successful or unsuccessful.
- 7 . The method of claim 1 , further comprising: identifying, via the machine learning engine, an event having an aggregate effect on the queue, wherein the historical data associated with the queue comprises historical caller data of a given number of previous calls, including other calls related to the event, that preceded the call; analyzing, via the machine learning engine, the historical caller data related to the event to determine patterns or trends influencing call volumes or caller behavior resulting from the event; generating, via the machine learning engine, the one or more prompts specifically tailored to address an impact of the event on the queue and provide relevant information or assistance to callers affected by the event; and providing, via the CC server, the one or more prompts generated to mitigate the impact of the event on the queue.
- 8 . The method of claim 1 , further comprising: determining, based on one or more of a transcript of the call, a length of the call, a respective indication received from a calling device that made the call, and whether the call is transferred to a human-operated terminal, whether the call is successful or unsuccessful; and when the call is successful, training the machine learning engine using the transcript of the call as a positive training set.
- 9 . The method of claim 1 , further comprising: determining, based on one or more of a transcript of the call, a length of the call, a respective indication received from a calling device that made the call, and whether the call is transferred to a human-operated terminal, whether the call is successful or unsuccessful; and when the call is unsuccessful, training the machine learning engine using the transcript of the call as a negative training set.
- 10 . A computing device comprising: a controller; and a computer-readable storage medium having stored thereon program instructions that, when executed by the controller, cause the controller to perform a set of operations comprising: receiving, via a contact center (CC) server, a call; receiving, via the CC server, on the call, an indication of a queue, of a plurality of queues maintained by the CC server, into which to place the call; placing, via the CC server, the call into the queue indicated by the indication, the call placed into the queue in a hold state; generating, via a machine learning engine, based on historical data associated with the queue, one or more prompts for the call; and providing, via the CC server, the one or more prompts on the call during the hold state, wherein the set of operations further comprises: generating, via the machine learning engine, an initial prompt, of the one or more prompts, that includes one or more estimated reasons for the call; receiving, on the call, a selection of an estimated reason, of the one or more estimated reasons; and generating, via the machine learning engine, the one or more prompts that follow the initial prompt, based on the selection of the estimated reason.
- 11 . The computing device of claim 10 , wherein the machine learning engine comprises a generative artificial intelligence engine.
- 12 . The computing device of claim 10 , wherein the historical data comprises one or more of: historical caller data associated with previous calls associated with a category of the queue; respective historical data associated with a caller on the call; news data associated with the category of the queue; and social media data associated with the category of the queue.
- 13 . The computing device of claim 10 , wherein the historical data associated with the queue comprises caller data of a given number of previous calls that preceded the call.
- 14 . The computing device of claim 10 , wherein the one or more prompts for the call are further based on respective historical data associated with a caller on the call.
- 15 . The computing device of claim 10 , wherein the set of operations further comprises: generating, via the machine learning engine, a final prompt, of the one or more prompts, that includes one or more of: an indication that the call is to be transferred to a human-operated terminal; an estimated time until a transfer to the human-operated terminal; and a request for input from a calling device that made the call to indicate whether the call was successful or unsuccessful.
- 16 . The computing device of claim 10 , wherein the set of operations further comprises: identifying, via the machine learning engine, an event having an aggregate effect on the queue, wherein the historical data associated with the queue comprises historical caller data of a given number of previous calls, including other calls related to the event, that preceded the call; analyzing, via the machine learning engine, the historical caller data related to the event to determine patterns or trends influencing call volumes or caller behavior resulting from the event; generating, via the machine learning engine, the one or more prompts specifically tailored to address an impact of the event on the queue and provide relevant information or assistance to callers affected by the event; and providing, via the CC server, the one or more prompts generated to mitigate the impact of the event on the queue.
- 17 . The computing device of claim 10 , wherein the set of operations further comprises: determining, based on one or more of a transcript of the call, a length of the call, a respective indication received from a calling device that made the call, and whether the call is transferred to a human-operated terminal, whether the call is successful or unsuccessful; and when the call is successful, training the machine learning engine using the transcript of the call as a positive training set.
- 18 . The computing device of claim 10 , wherein the set of operations further comprises: determining, based on one or more of a transcript of the call, a length of the call, a respective indication received from a calling device that made the call, and whether the call is transferred to a human-operated terminal, whether the call is successful or unsuccessful; and when the call is unsuccessful, training the machine learning engine using the transcript of the call as a negative training set.
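Claims 8, 9, 17 and 18 describe judging a call successful or unsuccessful from signals such as the transcript, the call length, an indication from the calling device, and whether the call was transferred to a human-operated terminal, then routing the transcript into a positive or negative training set. A minimal sketch of that routing follows; the field names and the fallback success heuristic (no transfer plus a short call) are assumptions for illustration, not specifics from the claims.

```python
# Hypothetical sketch of the training-set routing in claims 8/9 and 17/18.
# Field names ("caller_indication", "transferred_to_human", etc.) and the
# 300-second threshold are illustrative assumptions.

def call_was_successful(call):
    """Judge success from one or more of the claimed signals."""
    # An explicit indication from the calling device takes precedence.
    if call.get("caller_indication") is not None:
        return call["caller_indication"] == "successful"
    # Assumed heuristic: a short call that never reached a human agent
    # is treated as resolved by the machine-learning prompts alone.
    return not call["transferred_to_human"] and call["length_seconds"] < 300


def route_to_training_sets(calls):
    """Split call transcripts into positive and negative training sets."""
    positive, negative = [], []
    for call in calls:
        target = positive if call_was_successful(call) else negative
        target.append(call["transcript"])
    return positive, negative


calls = [
    {"transcript": "reset my password", "length_seconds": 120,
     "transferred_to_human": False, "caller_indication": None},
    {"transcript": "billing dispute", "length_seconds": 900,
     "transferred_to_human": True, "caller_indication": "unsuccessful"},
]
positive, negative = route_to_training_sets(calls)
# positive == ["reset my password"]; negative == ["billing dispute"]
```

The two resulting lists correspond to the positive training set of claims 8 and 17 and the negative training set of claims 9 and 18, which would then be used to train the machine learning engine.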
Description
FIELD

The present specification generally relates to server devices, and machine learning-based methods therefor. More particularly, exemplary embodiments of the specification relate to a device, system and method for providing machine learning prompts on a call at a contact center server.

BACKGROUND

At contact center (CC) servers, for example at customer service centers, calls are placed into a hold state and into a queue, and are eventually answered by an agent using a terminal. While in the queue, hold music, and the like, may be played on the call. In some examples, callers may be informed of the position of the call in the queue, and/or estimated wait times may be repeatedly provided on the call. However, queues use processing and bandwidth resources, as does playing music, call position processing and estimated wait time processing.

Any discussion of problems provided in this section has been included in this disclosure solely for the purposes of providing a context for the present invention, and should not be taken as an admission that any or all of the discussion was known at the time the invention was made.

BRIEF DESCRIPTION OF THE DRAWING FIGURES

Subject matter of the present specification is particularly pointed out and distinctly claimed in the concluding portion of the specification. A more complete understanding of the present specification, however, may best be obtained by referring to the detailed description and claims when considered in connection with the drawing figures.

- FIG. 1 illustrates a system in accordance with exemplary embodiments of the specification.
- FIG. 2 illustrates an exemplary computing device and/or engine for providing machine learning prompts on a call at a contact center server, in accordance with exemplary embodiments of the specification.
- FIG. 3 illustrates a method for providing machine learning prompts on a call at a contact center server, in accordance with exemplary embodiments of the specification.
- FIG. 4 depicts the system of FIG. 1 implementing aspects of a method for providing machine learning prompts on a call at a contact center server, in accordance with exemplary embodiments of the specification.
- FIG. 5 depicts the system of FIG. 1 implementing further aspects of a method for providing machine learning prompts on a call at a contact center server, in accordance with exemplary embodiments of the specification.
- FIG. 6 depicts the system of FIG. 1 implementing further aspects of a method for providing machine learning prompts on a call at a contact center server, in accordance with exemplary embodiments of the specification.
- FIG. 7 depicts the system of FIG. 1 implementing further aspects of a method for providing machine learning prompts on a call at a contact center server, in accordance with exemplary embodiments of the specification.
- FIG. 8 depicts the system of FIG. 1 implementing a machine learning engine in a training mode, in accordance with exemplary embodiments of the specification.

It will be appreciated that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of illustrated embodiments of the present specification.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

The description of various embodiments of the present specification provided below is merely exemplary and is intended for purposes of illustration only; the following description is not intended to limit the scope of the specification disclosed herein. Moreover, recitation of multiple embodiments having stated features is not intended to exclude other embodiments having additional features, or other embodiments incorporating different combinations of the stated features. The specification describes exemplary devices, systems, and methods.

As set forth in more detail below, exemplary devices, systems, and methods described herein may be conveniently used in customer service centers. However, the specification is not limited to such applications.

As used herein, the term “engine” refers to hardware (e.g., a processor, such as a central processing unit (CPU), a graphics processing unit (GPU), an integrated circuit or other circuitry) or a combination of hardware and software (e.g., programming such as machine- or processor-executable instructions, commands, or code such as firmware, a device driver, programming, object code, etc. as stored on hardware). Hardware includes a hardware element with no software elements, such as an application specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA), a PAL (programmable array logic), a PLA (programmable logic array), a PLD (programmable logic device), etc. A combination of hardware and software includes software hosted at hardware (e.g., a software module that is stored at a processor-readable memory such as random access memory (RAM), a hard-di