KR-20260066468-A - System for selecting multiple response strategies to user-adaptive unethical utterance
Abstract
A method for selecting multiple response strategies for user-adaptive unethical utterances includes: a step in which a system acquires user utterances input in text form; a step in which the system detects unethical utterances within the user utterances; a step in which, when an unethical utterance is detected, the system detects the utterance intent and the degree of harmfulness of the detected unethical utterance; and a step in which the system selects a response strategy based on the detection results for the utterance intent and the degree of harmfulness. In this way, a user-customized response service can be provided for the user's unethical utterances.
Inventors
- 김산
- 신사임
- 장진예
- 조병길
Assignees
- 한국전자기술연구원
Dates
- Publication Date
- 20260512
- Application Date
- 20241104
Claims (12)
- A method for selecting multiple response strategies for user-adaptive unethical utterances, comprising: a step in which the system acquires user utterances input in the form of text; a step in which the system detects unethical utterances within the user utterances; a step in which, when an unethical utterance is detected, the system detects the utterance intent and the degree of harmfulness of the detected unethical utterance; and a step in which the system selects a response strategy based on the detection result of the utterance intent and the detection result of the degree of harmfulness.
- In claim 1, the step of detecting the utterance intent and the degree of harmfulness classifies the utterance intent of the unethical utterance as one of a) an opinion-seeking type, b) an opinion-expression type, c) an information-provision type, and d) an information-request type, wherein a) the opinion-seeking type has the utterance intent of asking for the system's opinion, b) the opinion-expression type has the utterance intent of expressing the user's opinion, c) the information-provision type has the utterance intent of providing information known to the user, and d) the information-request type has the utterance intent of requesting the system to provide information, in the method for selecting multiple response strategies for user-adaptive unethical utterances.
- In claim 2, the step of detecting the utterance intent and the degree of harmfulness numerically calculates the degree of harmfulness of the unethical utterance within a preset range, in the method for selecting multiple response strategies for user-adaptive unethical utterances.
- In claim 3, the step of selecting a response strategy selects one of: 1) an utterance evaluation strategy, 2) a limit-setting strategy, 3) a term explanation strategy, 4) a pacifying questioning strategy, 5) a softened repetition strategy, 6) a rebuttal strategy, 7) a conditional approval strategy, 8) a neutral stance strategy, 9) an evidence presentation strategy, 10) an information presentation strategy, 11) an education/guidance strategy, 12) a social conduct response strategy, 13) a topic change strategy, and 14) a humor response strategy, wherein 1) the utterance evaluation strategy is a strategy in which the system evaluates the inappropriateness of the user's unethical utterance, 2) the limit-setting strategy is a strategy in which the system informs the user of the system's limitations, 3) the term explanation strategy is a strategy in which the system explains the meaning of terms corresponding to the user's unethical utterance, 4) the pacifying questioning strategy is a strategy in which the system questions the user about the intent behind the unethical utterance, 5) the softened repetition strategy is a strategy in which the system softens the user's unethical utterance and provides it back to the user, 6) the rebuttal strategy is a strategy in which the system provides an opinion countering a user opinion containing an unethical utterance, 7) the conditional approval strategy is a strategy in which the system provides conditional approval of a user opinion containing an unethical utterance, 8) the neutral stance strategy is a strategy in which the system provides a neutral opinion on a user opinion containing an unethical utterance, 9) the evidence presentation strategy is a strategy in which, when the system provides a rebutting opinion, a conditional approval opinion, or a neutral opinion on a user opinion containing an unethical utterance, it presents the grounds for the provided opinion, 10) the information presentation strategy is a strategy in which the system presents additional objective information that the user did not request in response to the user's unethical utterance, 11) the education/guidance strategy is a strategy in which the system provides an opinion educating or guiding the user on the inappropriateness of the unethical utterance, 12) the social conduct response strategy is a strategy in which the system provides feedback in a humble/gentle tone according to the social context for a user opinion containing an unethical utterance, 13) the topic change strategy is a strategy in which the system provides feedback that redirects to a different topic in response to a user opinion containing an unethical utterance, and 14) the humor response strategy is a strategy in which the system uses humor to indirectly convey the risk of the user's unethical utterance, in the method for selecting multiple response strategies for user-adaptive unethical utterances.
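The fourteen strategies of claim 4 could be represented as a simple enumeration; this is only an illustrative encoding, and the English member names are paraphrased labels, not the patent's official terms.

```python
from enum import Enum

class ResponseStrategy(Enum):
    # The 14 response strategies enumerated in claim 4 (paraphrased labels).
    UTTERANCE_EVALUATION = 1      # evaluate the inappropriateness of the utterance
    LIMIT_SETTING = 2             # inform the user of the system's limitations
    TERM_EXPLANATION = 3          # explain the meaning of the offending terms
    PACIFYING_QUESTIONING = 4     # ask the user about the intent of the utterance
    SOFTENED_REPETITION = 5       # repeat the utterance back in softened form
    REBUTTAL = 6                  # counter the user's opinion
    CONDITIONAL_APPROVAL = 7      # approve the opinion only conditionally
    NEUTRAL_STANCE = 8            # give a neutral opinion
    EVIDENCE_PRESENTATION = 9     # present grounds for the provided opinion
    INFORMATION_PRESENTATION = 10 # present unrequested objective information
    EDUCATION_GUIDANCE = 11       # educate/guide on the inappropriateness
    SOCIAL_CONDUCT_RESPONSE = 12  # humble/gentle feedback per social context
    TOPIC_CHANGE = 13             # redirect to a different topic
    HUMOR_RESPONSE = 14           # convey the risk indirectly through humor
```

Such an enumeration would give the response strategy selector and the user response strategy DB a shared, stable vocabulary of strategy identifiers.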
- In claim 1, the system includes a user response strategy DB that stores multiple response strategies selectable according to the utterance intent and the degree of harmfulness, and the step of selecting a response strategy selects one of the plurality of response strategies stored in the user response strategy DB according to the utterance intent and the degree of harmfulness, in the method for selecting multiple response strategies for user-adaptive unethical utterances.
- In claim 5, the method for selecting multiple response strategies for user-adaptive unethical utterances further comprises a step in which the system generates a response utterance according to the selected response strategy.
- In claim 6, the system stores the user's utterances, the selected response strategies, and the response utterances as the user's conversation history, in the method for selecting multiple response strategies for user-adaptive unethical utterances.
- In claim 6, the method further comprises a step in which, when the user's response utterance is acquired after the system has provided the generated response utterance to the user, the system analyzes the sentiment of the acquired response utterance, and, provided that a response strategy for the user's unethical utterance, a response utterance generated according to that response strategy, and a sentiment analysis result for the user's response utterance already exist, the system reflects the sentiment analysis result when selecting a response strategy for a subsequent unethical utterance by the user, in the method for selecting multiple response strategies for user-adaptive unethical utterances.
- In claim 8, the step of analyzing the sentiment of the user's response utterance classifies the sentiment type of the utterance as positive, negative, or neutral, and the step of selecting a response strategy sets the selection weight of the previously selected response strategy for the user's unethical utterance to twice the initial selection weight when the sentiment type of the response utterance is positive, and sets the selection weight of the previously selected response strategy to 0.1 times the initial selection weight when the sentiment type of the response utterance is negative, in the method for selecting multiple response strategies for user-adaptive unethical utterances.
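The weight adjustment of claim 9 is a simple multiplicative rule. A minimal sketch, assuming the neutral case leaves the weight unchanged (the claim specifies only the positive and negative cases):

```python
def update_weight(initial_weight: float, sentiment: str) -> float:
    """Adjust the selection weight of a previously used response strategy
    based on the sentiment of the user's follow-up utterance (claim 9):
    positive -> 2x the initial weight, negative -> 0.1x the initial weight.
    Neutral is assumed to leave the weight unchanged."""
    if sentiment == "positive":
        return initial_weight * 2.0
    if sentiment == "negative":
        return initial_weight * 0.1
    return initial_weight
```

Under this rule, strategies the user reacted to positively become far more likely to be reselected, while negatively received strategies are nearly suppressed without being removed outright.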
- A system for selecting multiple response strategies for user-adaptive unethical utterances, comprising: an input unit that acquires user utterances input in text form; and a processor that detects unethical utterances within the user utterances, detects, when an unethical utterance is detected, the utterance intent and the degree of harmfulness of the detected unethical utterance, and selects a response strategy based on the detection result of the utterance intent and the detection result of the degree of harmfulness.
- A method for selecting multiple response strategies for user-adaptive unethical utterances, comprising: a step in which the system detects unethical utterances within user utterances input in text form; a step in which, when an unethical utterance is detected, the system detects the utterance intent and the degree of harmfulness of the detected unethical utterance; a step in which the system selects a response strategy based on the detection result of the utterance intent and the detection result of the degree of harmfulness; and a step in which the system generates a response utterance according to the selected response strategy.
- A system for selecting multiple response strategies for user-adaptive unethical utterances, comprising: an unethical utterance detector that detects unethical utterances within user utterances input in text form; an utterance intent detector that detects the utterance intent of an unethical utterance; a harmfulness detector that detects the degree of harmfulness of an unethical utterance; a response strategy selector that selects a response strategy based on the detection result of the utterance intent and the detection result of the degree of harmfulness; and a response utterance generation model that generates a response utterance according to the selected response strategy.
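The component decomposition of claim 12 (detector, intent detector, harmfulness detector, selector, generator) could be wired together as follows. This is a hypothetical composition using plain callables; the class and field names are illustrative, not the patent's.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class StrategySelectionSystem:
    # The five components of claim 12, injected as callables so each can
    # be backed by any model (rule-based, classifier, LLM, etc.).
    detect_unethical: Callable[[str], bool]
    detect_intent: Callable[[str], str]
    detect_harm: Callable[[str], float]
    select_strategy: Callable[[str, float], str]
    generate_response: Callable[[str, str], str]

    def handle(self, utterance: str) -> Optional[str]:
        # Pipeline: detect -> assess intent and harm -> select -> generate.
        if not self.detect_unethical(utterance):
            return None
        intent = self.detect_intent(utterance)
        harm = self.detect_harm(utterance)
        strategy = self.select_strategy(intent, harm)
        return self.generate_response(strategy, utterance)
```

For example, a toy instance could stub each component with a lambda to exercise the pipeline end to end before real detectors are trained.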
Description
System for selecting multiple response strategies to user-adaptive unethical utterance

The present invention relates to a system for selecting a user response strategy in a generative AI (artificial intelligence) model, and more specifically, to a system for selecting and providing a user response strategy in response to unethical user inputs and requests (discrimination, profanity, unethical behavior, immorality, etc.) in a generative AI model.

Generative AI models are AI models capable of generating new content and ideas, such as conversations, stories, images, videos, and music, and can perform computing tasks such as image recognition, natural language processing (NLP), and translation. Conventional systems and research that use such generative AI models to detect and respond to unethical utterances are limited to a single response (e.g., pointing out inappropriateness or expressing regret) to a user's unethical utterance, and thus fail to reflect the user's tendencies or preferences regarding response strategies. Accordingly, there is a need for a way to select an appropriate response strategy for a user's unethical utterance from among various response strategies by considering the user's tendencies or preferences.

FIG. 1 is a diagram provided to describe the configuration of a user-adaptive multiple response strategy selection system for unethical utterances according to an embodiment of the present invention. FIG. 2 is a diagram provided for a more detailed description of the processor illustrated in FIG. 1. FIG. 3 is a diagram illustrating a table summarizing the response strategies that can be selected by considering the type of utterance intent and the degree of harmfulness (harm level) in the user-adaptive multiple response strategy selection system for unethical utterances according to an embodiment of the present invention. FIG. 4 is a flowchart provided to explain a method for selecting multiple response strategies for user-adaptive unethical utterances according to an embodiment of the present invention. FIG. 5 is a flowchart provided to explain the process of providing a response utterance to a user through the user-adaptive multiple response strategy selection system for unethical utterances according to an embodiment of the present invention, analyzing the sentiment of the user's response utterance, and reflecting the sentiment analysis result when selecting a response strategy for a subsequent unethical utterance by the user.

The present invention will be described in more detail below with reference to the drawings. To clearly explain the invention, parts unrelated to the description have been omitted from the drawings, and in the drawings the width, length, thickness, etc. of the components may be exaggerated for convenience.

FIG. 1 is a diagram provided to describe the configuration of the user-adaptive multiple response strategy selection system for unethical utterances according to one embodiment of the present invention. The user-adaptive multiple response strategy selection system for unethical utterances according to the present embodiment (hereinafter collectively referred to as the "system") can select an appropriate response strategy from among various response strategies for the user's unethical utterance in a generative artificial intelligence model by considering the user's tendencies or preferences. To this end, the system may include an input unit (100), a processor (200), and a storage unit (300).

The input unit (100) is equipped with an input interface device that receives user input, such as a mouse or keyboard, and a communication module connected to a network, and can acquire user utterances input in text form. For example, the input unit (100) can acquire user utterances in text form through the input interface device, or acquire user utterances in text form by receiving them from an external device.

The storage unit (300) is provided to store programs and data necessary for the operation of the processor (200). For example, the storage unit (300) may include a user response strategy DB that stores multiple response strategies selectable according to the utterance intent and the degree of harmfulness, and a conversation record DB that stores, for each user, conversation records including the user's utterances, the history of response strategies selected to respond to the user's unethical utterances, and the response utterances.

The processor (200) detects whether unethical content (e.g., discrimination, profanity, unethicality, immorality, etc.) is included in user utterances input in text form to a conversational generative AI model such as a large language model (LLM) (such utterances hereinafter collectively referred to as 'unethical utterances'), and if an unethical utterance is detected, selects a specific response strategy among a plurality of response strategies by considering t