EP-3879841-B1 - REPORT EVALUATION DEVICE AND OPERATION METHOD THEREOF

EP 3879841 B1

Inventors

  • AHN, SANG IL
  • SHIN, Beomjun
  • SEO, SEOKJUN
  • BYUN, Hyeongmin
  • CHAE, Mingyun

Dates

Publication Date
2026-05-06
Application Date
2021-03-11

Claims (12)

  1. A report evaluation method of an electronic device provided as a report evaluation device (100), comprising: establishing a video call session between a first terminal (10) of a user and a second terminal (20) of a counterpart user; receiving, by the report evaluation device (100), a report from at least one client terminal selected from the first terminal (10) and the second terminal (20), wherein the report is generated when the user reports inappropriate content through the first terminal (10) or the counterpart user reports the inappropriate content through the second terminal (20), wherein the inappropriate content is included in the video, text or sound received during the video call session, and the report includes video information, text information, or audio information; determining, by the report evaluation device (100), a category of the received report, wherein the category is one of video, text or sound; identifying, by the report evaluation device (100), a learning model corresponding to the category; evaluating, by the report evaluation device (100), a reliability of the report through the learning model; determining a number of times the report needs to be reviewed based on the evaluated reliability; and transmitting the report to one or more external devices for reviewing the report based on the determined number of times.
  2. The report evaluation method of claim 1, further comprising: establishing a video call session between a plurality of client terminals (10, 20), wherein the report is received from at least one client terminal among the plurality of client terminals (10, 20) in receiving a report.
  3. The report evaluation method of claim 1, further comprising: evaluating the reliability of the received report according to a predetermined criterion independently of the learning model; and updating an associated model in response to the evaluation result.
  4. The report evaluation method of claim 1, wherein the report includes information about inappropriate video content, information about inappropriate text content, or information about inappropriate sound content.
  5. The report evaluation method of claim 1, wherein the learning model corresponds to one of a sound censoring algorithm, a video censoring algorithm, a text censoring algorithm, or a gesture censoring algorithm.
  6. A non-transitory computer-readable recording medium on which a program for performing the method according to claim 1 is recorded.
  7. A report evaluation device (100, 200), comprising: a report receiving part (110) configured to receive a report from at least one client terminal selected from a first terminal (10) of a user and a second terminal (20) of a counterpart user, when a video call session has been established between the first terminal (10) and the second terminal (20), wherein the report is generated when the user reports inappropriate content through the first terminal (10) or the counterpart user reports the inappropriate content through the second terminal (20), wherein the inappropriate content is included in the video, text or sound received during the video call session, and the report includes video information or audio information; a model storage part (220) configured to store at least one learning model; and a reliability evaluation part (120, 210) configured to determine a category of the received report, wherein the category is one of video, text or sound, to identify a learning model corresponding to the category among the at least one learning model, to evaluate a reliability of the report through the identified model, and to determine a number of times the report needs to be reviewed based on the evaluated reliability, wherein the reliability evaluation part (120, 210) transmits the report to one or more external devices for reviewing the report based on the determined number of times.
  8. The report evaluation device of claim 7, wherein the reliability evaluation part (120, 210) is further configured to receive the report from at least one client terminal among a plurality of client terminals that have established a video call session with each other.
  9. The report evaluation device of claim 7, wherein the reliability evaluation part (120, 210) is further configured to: evaluate the reliability of the received report according to a predetermined criterion independently of the learning model; and update an associated model in response to the evaluation result to store in the model storage part.
  10. The report evaluation device of claim 7, wherein the report includes information about inappropriate video content or information about inappropriate sound content.
  11. The report evaluation device of claim 7, wherein the at least one learning model corresponds to one of a sound censoring algorithm, a video censoring algorithm, a text censoring algorithm, or a gesture censoring algorithm.
  12. The report evaluation method of claim 1, wherein determining a category of the received report includes determining a category of the report based on any one of a type of content included in the report, a type of language corresponding to the report, or a request path for generating the report.
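The claimed flow (receive a report → determine its category → select the matching learning model → score the report's reliability → derive a review count) can be sketched as follows. This is a minimal illustrative sketch only: the function names, the keyword-based stand-in models, and the reliability thresholds are assumptions made for demonstration, since the claims do not specify any particular learning models or threshold values.

```python
# Illustrative sketch of the claimed report-evaluation flow.
# The stand-in "models" below simply match keywords; a real system
# would use trained censoring models as the claims describe.

def video_model(report):
    return 0.9 if "nudity" in report["payload"] else 0.2

def text_model(report):
    return 0.8 if "abuse" in report["payload"] else 0.1

def sound_model(report):
    return 0.7 if "scream" in report["payload"] else 0.1

# "identifying a learning model corresponding to the category"
CATEGORY_MODELS = {"video": video_model, "text": text_model, "sound": sound_model}

def review_count(reliability):
    # "determining a number of times the report needs to be reviewed
    #  based on the evaluated reliability" -- thresholds are assumptions.
    if reliability >= 0.8:
        return 1   # highly reliable report: one confirming review suffices
    if reliability >= 0.4:
        return 2
    return 3       # low-reliability report: more independent reviews

def evaluate_report(report):
    category = report["category"]        # one of "video", "text", "sound"
    model = CATEGORY_MODELS[category]    # model matching the category
    reliability = model(report)          # evaluate reliability via the model
    return {"reliability": reliability, "reviews": review_count(reliability)}

result = evaluate_report({"category": "video", "payload": "nudity detected"})
```

The intent captured here is that a more reliable report needs fewer confirming reviews before action is taken, while a dubious one is routed to more external reviewers; the claims leave the exact reliability-to-count mapping open.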

Description

[Technical Field]
The described embodiments relate to a report evaluation device capable of preventing exposure to inappropriate content during a video call, and an operation method thereof.

[Description of the Related Art]
With the development of communication technology and the miniaturization of electronic devices, personal terminals have become widely distributed to general consumers. In particular, portable personal terminals such as smartphones and smart tablets have recently become widespread. Most terminals include image-capture technology, so a user can take an image including various content using the terminal.

There are various types of services based on video calls. For example, a random video chat service connects the terminal of a user who has requested the service with the terminal of another user randomly selected from among the users of the service. When a user makes a video call with a counterpart, the user may be exposed to inappropriate video or audio from the counterpart; a user exposed to unwanted inappropriate video or audio may feel sexual shame or displeasure.

WO 2019/043379 A1 discloses a method and a system for verification scoring and automated fact checking, specifically for verifying data, information, and facts contained in media content found on the Internet. US 10 440 324 B1 discloses a method used in a communication service for identifying and altering undesirable portions of communication data, such as audio data and video data, from a communication session between user devices. The communication service may monitor the communication session to alter or remove undesirable audio data, such as a dog barking or a doorbell ringing, and/or video data, such as rude gestures or inappropriate facial expressions.
[Disclosure of the Invention]
[Technical Goals]
In view of the above, a report evaluation method of an electronic device is provided according to claim 1, and a report evaluation device is provided according to claim 7. According to the described example embodiments, a report evaluation device, and an operation method thereof, capable of preventing a user making a video call from being exposed to inappropriate video or audio from the counterpart may be provided. In addition, a report evaluation device and an operation method thereof capable of preventing the sexual shame or displeasure that a user making a video call may feel because of the counterpart's video may be provided. Moreover, a terminal capable of inducing a sound video call between users, and a method of operating the same, may be provided.

[Technical Solutions]
According to an aspect, there is provided a report evaluation method including receiving a report from at least one client terminal, determining a category of the received report, identifying a learning model corresponding to the category, evaluating a reliability of the report through the learning model, and generating and outputting information on the reliability. The report includes video information, text information, or audio information. Alternatively, the report evaluation method further includes establishing a video call session between a plurality of client terminals, and the report may be received from at least one client terminal among the plurality of client terminals in the receiving step.
Alternatively, the report evaluation method further includes evaluating the reliability of the received report according to a predetermined criterion independently of the learning model and updating an associated learning model in response to the evaluation result. Alternatively, the report may include information about inappropriate video content, information about inappropriate text content, or information about inappropriate sound content. Alternatively, the learning model may correspond to one of a sound censoring algorithm, a video censoring algorithm, a text censoring algorithm, or a gesture censoring algorithm. According to another aspect, there is provided a report evaluation device including a report receiving part configured to receive a report from at least one client terminal, a learning model storage part configured to store at least one learning model, and a reliability evaluation part configured to determine a category of the received report, to identify a learning model corresponding to the category among the at least one learning model, and to evaluate a reliability of the report through the learning model. The reliability evaluation part may generate and output information on the