
KR-20260063573-A - METHOD AND APPARATUS FOR FILTERING HARMFUL CONTENT

KR 20260063573 A

Abstract

The present invention relates to a harmful content filtering method and apparatus capable of detecting harmful content based on zero-shot learning. A harmful content filtering method according to an embodiment of the present invention may include the steps of: receiving video content; generating a caption from the video content; and determining whether the video content is harmful by using the generated caption as input to a language model.

Inventors

  • 모정훈
  • 박지명
  • 전재현
  • 이주연
  • 최재서

Assignees

  • Yonsei University Industry-Academic Cooperation Foundation (연세대학교 산학협력단)
  • Korea University Industry-Academic Cooperation Foundation (고려대학교 산학협력단)

Dates

Publication Date
2026-05-07
Application Date
2024-10-30

Claims (10)

  1. A harmful content filtering method comprising: receiving video content; generating a caption from the video content; and determining whether the video content is harmful by using the generated caption as input to a language model.
  2. The method of claim 1, wherein generating the caption comprises generating the caption based on visual information of the video content.
  3. The method of claim 1, wherein generating the caption comprises inputting the video content to a visual language model (VLM) to output the caption.
  4. The method of claim 3, wherein the language model comprises a large language model (LLM).
  5. The method of claim 4, wherein determining whether the video content is harmful further comprises using a prompt as an additional input to the language model.
  6. A harmful content filtering apparatus comprising: a content receiving unit configured to receive video content; a caption generation unit configured to generate a caption from the video content; and a harmfulness determination unit configured to determine whether the video content is harmful by using the generated caption as input to a language model.
  7. The apparatus of claim 6, wherein the caption generation unit generates the caption based on visual information of the video content.
  8. The apparatus of claim 6, wherein the caption generation unit inputs the video content to a visual language model (VLM) to output the caption.
  9. The apparatus of claim 8, wherein the language model comprises a large language model (LLM).
  10. The apparatus of claim 9, wherein the harmfulness determination unit uses a prompt as an additional input to the language model.
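
The claimed method (claims 1–5) can be sketched as a short pipeline. This is a minimal, hypothetical illustration only: `generate_caption` and `is_harmful` are stubs standing in for the VLM and the zero-shot LLM call, which the claims do not bind to any particular model or API.

```python
# Hypothetical sketch of claims 1-5: receive video, caption it with a VLM,
# then classify the caption with a prompted language model (zero-shot).
# Both model calls are replaced by stubs here.

def generate_caption(video_frames):
    # Stand-in for a VLM (claim 3): a real implementation would encode the
    # frames and decode a natural-language description.
    return "a person brandishing a weapon in a dark alley"

def is_harmful(caption, prompt="Does the following scene describe harmful content?"):
    # Stand-in for the zero-shot LLM call (claims 1, 4, 5): the prompt and
    # caption would be concatenated and sent to the model. This stub
    # keyword-matches instead so the sketch is runnable.
    _ = prompt + " " + caption  # additional prompt input per claim 5
    harmful_terms = ("weapon", "violence", "nudity")
    return any(term in caption for term in harmful_terms)

def filter_content(video_frames):
    caption = generate_caption(video_frames)          # step S220 (captioning)
    return {"caption": caption,
            "harmful": is_harmful(caption)}           # step S230 (determination)

result = filter_content(["frame0", "frame1"])
```

Because the classifier sees only text, any instruction-following language model can perform the harmfulness decision without task-specific training data, which is what the specification means by zero-shot detection.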

Description

Method and apparatus for filtering harmful content

The present invention relates to a harmful content filtering method and apparatus, and more specifically to a harmful content filtering method and apparatus capable of detecting harmful content based on zero-shot learning.

With the proliferation of smart devices, the age at which users begin using them is steadily decreasing. While smart devices offer many conveniences in daily life, they also serve as a medium through which adolescents can access harmful media very easily. Digital sexual crimes, such as uploading photos of one's own body or of others' bodies to social network services (SNS) such as Twitter and Facebook, are on the rise. In particular, since it has been found that 29 percent of digital sexual crimes are committed by adolescents, practical measures are needed to prevent adolescents from accessing or uploading harmful content through smart devices.

Meanwhile, according to the 2018 survey report on youth media use and harmful environments, the rate of use of adult videos and publications among elementary school students has increased steadily every year, yet respondents reported that education on preventing adult media use is not helpful as a solution to this problem. Although the Korea Communications Commission provides a service that blocks illegal and harmful information sites to prevent access to harmful content, its effectiveness is limited because it can easily be bypassed. Furthermore, conventional harmful-site blocking services cannot block sites that are not pre-registered, and thus have limitations in preventing the distribution of, and access to, harmful content.

FIG. 1 is a block diagram showing the configuration, classified by operation, of a harmful content filtering device according to an embodiment of the present invention. FIG. 2 is a flowchart illustrating a harmful content filtering method according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating steps S210 and S220 shown in FIG. 2. FIG. 4 is a diagram illustrating step S230 shown in FIG. 2. FIG. 5 is a diagram illustrating a computing environment including a computing device according to an embodiment of the present invention.

Hereinafter, a harmful content filtering method and apparatus according to an embodiment of the present invention will be described with reference to the drawings.

As used in this specification, singular expressions include plural expressions unless the context clearly indicates otherwise. Terms such as "composed of" or "comprising" should not be interpreted as necessarily including all of the components or steps described in the specification; some components or steps may be omitted, and additional components or steps may be included. The embodiments and the terms used herein are not intended to limit the technology described in this document to specific embodiments, and should be understood to include various modifications, equivalents, and/or substitutions of those embodiments. In the description of the various embodiments below, detailed descriptions of related known functions or configurations are omitted where they could unnecessarily obscure the essence of the invention. In the description of the drawings, similar reference numerals may be used for similar components. Expressions such as "A or B" or "at least one of A and/or B" may include all possible combinations of the items listed together.
Where a certain (e.g., first) component is described as "(functionally or communicatively) coupled" or "connected" to another (e.g., second) component, the certain component may be connected to the other component directly or through a further component (e.g., a third component).

Hereinafter, zero-shot learning refers to the ability of a model to perform a specific task without being provided with separate training data for that task.

FIG. 1 is a block diagram showing the configuration, classified by operation, of a harmful content filtering device according to an embodiment of the present invention. As shown, the harmful content filtering device (100) may include a content receiving unit (110), a caption generating unit (120), and a harmfulness determination unit (130). The harmful content filtering device (100) may
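
The three-unit structure of the device (100) described above can be sketched as a simple composition of classes. This is a hypothetical illustration only: the class and method names are invented for the sketch, and the captioner and determiner are stubs standing in for the VLM and the prompted LLM of claims 8–10.

```python
# Hypothetical sketch of the apparatus in FIG. 1: content receiving unit (110),
# caption generating unit (120), and harmfulness determination unit (130),
# composed into the filtering device (100). Model calls are stubbed.

class ContentReceiver:
    """Content receiving unit (110): accepts video content."""
    def receive(self, source):
        return list(source)  # treat the source as an iterable of frames

class CaptionGenerator:
    """Caption generating unit (120): VLM stand-in producing a text caption."""
    def generate(self, frames):
        # A real VLM would describe the frames; this stub returns a fixed caption.
        return "two people fighting with weapons"

class HarmfulnessDeterminer:
    """Harmfulness determination unit (130): zero-shot LLM stand-in."""
    PROMPT = "Does the following caption describe harmful content?"  # claim 10

    def determine(self, caption):
        # Stand-in for the LLM call: PROMPT + caption would be sent to the model.
        return any(term in caption for term in ("weapon", "violence", "nudity"))

class HarmfulContentFilter:
    """Harmful content filtering device (100) composed of the three units."""
    def __init__(self):
        self.receiver = ContentReceiver()
        self.captioner = CaptionGenerator()
        self.determiner = HarmfulnessDeterminer()

    def run(self, source):
        frames = self.receiver.receive(source)
        caption = self.captioner.generate(frames)
        return self.determiner.determine(caption)
```

Keeping the units separate mirrors the claim structure: the captioner can be swapped for any VLM and the determiner for any language model without changing the device's interface.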