US-12621408-B2 - Image processing method and apparatus, and electronic device, and computer readable medium

US 12621408 B2

Abstract

A method for image processing is provided. The method includes: acquiring a first image from a first terminal and a second image from a second terminal, wherein the first image comprises a first portrait, and the second image comprises a second portrait; performing image matting on the second image to obtain the second portrait; and placing the second portrait in the first image through an augmented reality (AR) technology to obtain a third image, wherein the third image is displayed on the first terminal.
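The core pipeline in the abstract (mat the remote portrait out of the second image, then composite it into the first image) can be sketched as a plain alpha composite. This is an illustrative sketch, not the patent's implementation: the matte is assumed to be given by some image-matting model, and `composite_portrait` is a hypothetical helper name.

```python
import numpy as np

def composite_portrait(first_image, second_image, matte):
    """Paste the matted second portrait over the first image.

    first_image, second_image: HxWx3 uint8 frames of the same size.
    matte: HxW float alpha mask in [0, 1], assumed to come from an
    image-matting model (the patent does not fix a specific algorithm).
    """
    alpha = matte[..., None]          # broadcast the mask over color channels
    third = alpha * second_image + (1.0 - alpha) * first_image
    return third.astype(np.uint8)     # the "third image" shown on the first terminal
```

In the claimed method the portrait is additionally scaled and positioned in a 3D space built from the first image before this composite happens; the sketch only shows the final blend.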

Inventors

  • He GUO

Assignees

  • BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.

Dates

Publication Date
2026-05-05
Application Date
2021-09-17
Priority Date
2020-09-30

Claims (18)

  1. A method for image processing, comprising: acquiring a first image from a first terminal and a second image from a second terminal, wherein the first image comprises a first portrait, and the second image comprises a second portrait; performing image matting on the second image to obtain the second portrait; acquiring a first distance from the second portrait to a time of flight (TOF) camera of the second terminal, and a height of the second portrait; determining, in a case that the first distance is not 0, a scaling ratio of the second portrait in a first three-dimensional space, based on the first distance, the height, and an acquired lens angle of a camera of the first terminal; in a case that the first distance is 0, acquiring a second distance from a position of the second portrait in the first three-dimensional space to the camera of the first terminal, and determining a scaling ratio of the second portrait in the first three-dimensional space based on the height, the second distance and the lens angle; and placing the second portrait in the first image through an augmented reality (AR) technology to obtain a third image, wherein the third image is displayed on the first terminal, wherein placing the second portrait in the first image through the augmented reality (AR) technology comprises: placing the second portrait in the first three-dimensional space constructed based on the first image through the augmented reality (AR) technology, and wherein placing the second portrait in the first three-dimensional space through the augmented reality (AR) technology comprises: placing the second portrait in the first three-dimensional space at the scaling ratio through the augmented reality (AR) technology.
  2. The method according to claim 1, wherein after acquiring the first image from the first terminal and the second image from the second terminal, the method further comprises: constructing the first three-dimensional space based on the first image.
  3. The method according to claim 2, wherein constructing the first three-dimensional space based on the first image comprises: processing the first image by using a simultaneous localization and mapping (SLAM) algorithm, to construct the first three-dimensional space.
  4. The method according to claim 2, wherein before placing the second portrait in the first three-dimensional space through the augmented reality (AR) technology, the method further comprises: detecting whether there is an office facility in the first three-dimensional space, and detecting a posture of the second portrait; and determining the position of the second portrait when the second portrait is placed in the first three-dimensional space, based on a position of a detected office facility and the posture of the second portrait, wherein placing the second portrait in the first three-dimensional space through the augmented reality (AR) technology comprises: placing the second portrait at the position in the first three-dimensional space through the augmented reality (AR) technology.
  5. The method according to claim 2, wherein before placing the second portrait in the first three-dimensional space through the augmented reality (AR) technology, the method further comprises: processing the second portrait by using a holographic projection algorithm, to obtain a 3D image of the second portrait, wherein placing the second portrait in the first three-dimensional space through the augmented reality (AR) technology comprises: placing the 3D image of the second portrait in the first three-dimensional space through the augmented reality (AR) technology.
  6. The method according to claim 1, wherein in a case that the second image is captured through the TOF camera of the second terminal, acquiring the height of the second portrait comprises: determining the height based on the first distance and an acquired lens angle of the TOF camera of the second terminal.
  7. The method according to claim 1, wherein in a case that the second image is captured through a camera other than the TOF camera of the second terminal, the first distance is 0 and the height is equal to a preset value.
  8. The method according to claim 1, further comprising: performing image matting on the first image to obtain the first portrait; and placing the first portrait in the second image through the augmented reality (AR) technology, to obtain a fourth image, wherein the fourth image is displayed on the second terminal.
  9. The method according to claim 8, wherein after acquiring the first image from the first terminal and the second image from the second terminal, the method further comprises: constructing a second three-dimensional space based on the second image; and placing the first portrait in the second image through the augmented reality (AR) technology comprises: placing the first portrait in the second three-dimensional space through the augmented reality (AR) technology.
  10. An apparatus for image processing, comprising: at least one processor; and at least one memory communicatively coupled to the at least one processor and storing instructions that upon execution by the at least one processor cause the apparatus to: acquire a first image from a first terminal and a second image from a second terminal, wherein the first image comprises a first portrait, and the second image comprises a second portrait; perform image matting on the second image to obtain the second portrait; acquire a first distance from the second portrait to a time of flight (TOF) camera of the second terminal, and a height of the second portrait; determine, in a case that the first distance is not 0, a scaling ratio of the second portrait in a first three-dimensional space, based on the first distance, the height, and an acquired lens angle of a camera of the first terminal; in a case that the first distance is 0, acquire a second distance from a position of the second portrait in the first three-dimensional space to the camera of the first terminal, and determine a scaling ratio of the second portrait in the first three-dimensional space based on the height, the second distance and the lens angle; place the second portrait in the first image through an augmented reality (AR) technology to obtain a third image, wherein the third image is displayed on the first terminal; place the second portrait in the first three-dimensional space constructed based on the first image through the augmented reality (AR) technology; and place the second portrait in the first three-dimensional space at the scaling ratio through the augmented reality (AR) technology.
  11. The apparatus of claim 10, the at least one memory further storing instructions that upon execution by the at least one processor cause the apparatus to: construct the first three-dimensional space based on the first image.
  12. The apparatus of claim 11, the at least one memory further storing instructions that upon execution by the at least one processor cause the apparatus to: process the first image by using a simultaneous localization and mapping (SLAM) algorithm, to construct the first three-dimensional space.
  13. The apparatus of claim 11, the at least one memory further storing instructions that upon execution by the at least one processor cause the apparatus to: detect whether there is an office facility in the first three-dimensional space, and detect a posture of the second portrait; and determine the position of the second portrait when the second portrait is placed in the first three-dimensional space, based on a position of a detected office facility and the posture of the second portrait, wherein placing the second portrait in the first three-dimensional space through the augmented reality (AR) technology comprises: placing the second portrait at the position in the first three-dimensional space through the augmented reality (AR) technology.
  14. The apparatus of claim 11, the at least one memory further storing instructions that upon execution by the at least one processor cause the apparatus to: process the second portrait by using a holographic projection algorithm, to obtain a 3D image of the second portrait, wherein placing the second portrait in the first three-dimensional space through the augmented reality (AR) technology comprises: placing the 3D image of the second portrait in the first three-dimensional space through the augmented reality (AR) technology.
  15. The apparatus of claim 10, the at least one memory further storing instructions that upon execution by the at least one processor cause the apparatus to: determine the height based on the first distance and an acquired lens angle of the TOF camera of the second terminal.
  16. The apparatus of claim 10, wherein in a case that the second image is captured through a camera other than the TOF camera of the second terminal, the first distance is 0 and the height is equal to a preset value.
  17. The apparatus of claim 10, the at least one memory further storing instructions that upon execution by the at least one processor cause the apparatus to: perform image matting on the first image to obtain the first portrait; and place the first portrait in the second image through the augmented reality (AR) technology, to obtain a fourth image, wherein the fourth image is displayed on the second terminal.
  18. A non-transitory computer-readable medium, bearing computer-readable instructions that upon execution on a computing device cause the computing device at least to: acquire a first image from a first terminal and a second image from a second terminal, wherein the first image comprises a first portrait, and the second image comprises a second portrait; perform image matting on the second image to obtain the second portrait; acquire a first distance from the second portrait to a time of flight (TOF) camera of the second terminal, and a height of the second portrait; determine, in a case that the first distance is not 0, a scaling ratio of the second portrait in a first three-dimensional space, based on the first distance, the height, and an acquired lens angle of a camera of the first terminal; in a case that the first distance is 0, acquire a second distance from a position of the second portrait in the first three-dimensional space to the camera of the first terminal, and determine a scaling ratio of the second portrait in the first three-dimensional space based on the height, the second distance and the lens angle; place the second portrait in the first image through an augmented reality (AR) technology to obtain a third image, wherein the third image is displayed on the first terminal; place the second portrait in the first three-dimensional space constructed based on the first image through the augmented reality (AR) technology; and place the second portrait in the first three-dimensional space at the scaling ratio through the augmented reality (AR) technology.
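Claims 1, 10 and 18 derive a scaling ratio for the portrait from the subject's distance, its height, and the lens angle of the receiving terminal's camera, but do not spell out the formula. One plausible reading, under a pinhole-camera assumption (my assumption, not the patent's text), is that a subject of height h at distance d in front of a camera with vertical field of view θ fills h / (2·d·tan(θ/2)) of the frame, and the scaling ratio is whatever factor brings that fraction to a desired on-screen size. The helper names and the `target_fraction` parameter are illustrative:

```python
import math

def apparent_fraction(height_m, distance_m, fov_deg):
    """Fraction of the frame height a subject fills, pinhole model.

    At distance d the camera's vertical field of view spans
    2 * d * tan(fov / 2) meters; the subject occupies height / that span.
    """
    visible_span = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return height_m / visible_span

def scaling_ratio(height_m, distance_m, fov_deg, target_fraction=0.5):
    """Scale factor that makes the portrait fill target_fraction of the frame."""
    return target_fraction / apparent_fraction(height_m, distance_m, fov_deg)
```

The two claim branches then differ only in which distance is plugged in: the TOF-measured first distance when it is available, or the second distance from the portrait's placed position in the 3D space to the first terminal's camera when the first distance is 0.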

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a National Phase application of PCT international patent application PCT/CN2021/118951, filed on Sep. 17, 2021, which claims priority to Chinese Patent Application No. 202011065674.1, titled "IMAGE PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE, AND COMPUTER READABLE MEDIUM", filed on Sep. 30, 2020 with the China National Intellectual Property Administration, both of which are incorporated herein by reference in their entireties.

FIELD

The present disclosure relates to the field of image processing, and in particular to a method and an apparatus for image processing, an electronic apparatus, and a computer-readable medium.

BACKGROUND

With the rapid development of Internet technology, remote interaction based on network technologies is widely used in more and more fields. Conventional methods for remote interaction include voice remote interaction and traditional video remote interaction. Voice remote interaction can only convey speech, without image transmission, and is therefore neither visual nor intuitive. Compared with voice remote interaction, traditional video remote interaction enables real-time transmission of images, a great breakthrough in terms of visibility. However, traditional video remote interaction is merely a simple video call, which cannot meet the user demand for realistic face-to-face interaction, resulting in poor user experience.

SUMMARY

The summary is provided to introduce concepts in a simplified form, which are described in detail in the following detailed description. The summary is not intended to identify key features or essential features of the claimed technical solution, nor is it intended to limit the scope of the claimed technical solution.

In a first aspect of the present disclosure, a method for image processing is provided. The method includes: acquiring a first image from a first terminal and a second image from a second terminal, wherein the first image includes a first portrait, and the second image includes a second portrait; performing image matting on the second image to obtain the second portrait; and placing the second portrait in the first image through an augmented reality (AR) technology to obtain a third image, wherein the third image is displayed on the first terminal.

In a second aspect of the present disclosure, an apparatus for image processing is provided. The apparatus includes: an acquisition module, configured to acquire a first image from a first terminal and a second image from a second terminal, wherein the first image includes a first portrait, and the second image includes a second portrait; a processing module, configured to perform image matting on the second image to obtain the second portrait; and a placement module, configured to place the second portrait in the first image through an augmented reality (AR) technology to obtain a third image, wherein the third image is displayed on the first terminal.

In a third aspect of the present disclosure, an electronic apparatus is provided. The electronic apparatus includes: a processor; and a memory configured to store machine-readable instructions. The instructions, when executed by the processor, cause the processor to implement the method for image processing according to the first aspect of the present disclosure.

In a fourth aspect of the present disclosure, a computer-readable medium is provided. The computer-readable medium stores a computer program. The computer program, when executed by a processor, implements the method for image processing according to the first aspect of the present disclosure.

Beneficial effects of the technical solutions provided in the embodiments of the present disclosure include at least the following aspects. A method and an apparatus for image processing, an electronic apparatus, and a medium are provided in the present disclosure, with which a portrait obtained by image matting on one image is placed into another image including a portrait through an augmented reality (AR) technology to obtain a new image, and the new image is displayed on a terminal. Thereby, the user demand for realistic face-to-face interaction is satisfied, and user experience is improved.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features, advantages and aspects of various embodiments of the present disclosure will become clearer when taken in conjunction with the accompanying drawings and with reference to the following detailed description. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that the units and elements are not necessarily drawn to scale. FIG. 1 is a schematic flowchart of a method for image processing according to an exemplary embodiment of the present disclosure. FIG. 2 is a schematic flowchart of a method for image processing according to another exemplary embodiment of the present disclosure.
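Claims 6 and 15 recover the portrait's real height from the TOF distance and the TOF camera's lens angle. Under the same pinhole assumption as above (the claims do not give the formula; this is one plausible reading, and the function and parameter names are illustrative), the height follows from how much of the frame the portrait occupies at the measured depth:

```python
import math

def portrait_height(distance_m, tof_fov_deg, pixel_height, image_height):
    """Estimate the subject's real height from a TOF depth reading.

    Pinhole-model sketch: at distance d, the TOF camera's vertical field
    of view spans 2 * d * tan(fov / 2) meters, and the portrait occupies
    pixel_height / image_height of that span.
    """
    visible_span = 2.0 * distance_m * math.tan(math.radians(tof_fov_deg) / 2.0)
    return visible_span * (pixel_height / image_height)
```

When no TOF camera is available (claims 7 and 16), this estimate is skipped entirely: the first distance is taken as 0 and a preset height is used instead.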