US-12626726-B2 - Editing device, image processing device, terminal device, editing method, image processing method, and program
Abstract
An editing device includes a first processor. The first processor is configured to display a first image on a first screen and a first video, as a background image of the first image, on the first screen, and edit an inside of the first screen in response to a provided instruction in a state where the first image and the first video are displayed on the first screen.
Inventors
- Hiroyuki Mizukami
Assignees
- FUJIFILM CORPORATION
Dates
- Publication Date
- 20260512
- Application Date
- 20231024
- Priority Date
- 20221026
Claims (20)
- 1 . An image processing device comprising: a first processor, wherein the first processor is configured to: display a first image on a first screen and a first video, as a background image of the first image, on the first screen; edit an inside of the first screen in response to a provided instruction in a state where the first image and the first video are displayed on the first screen; save an editing content obtained by editing the inside of the first screen; acquire a second image that enables a save destination of the editing content to be specified; assign the acquired second image to the first image to generate a third image; and output the generated third image from an editing device; and a second processor, wherein the second processor is configured to: capture a printed matter obtained by printing the third image output from the editing device to acquire a fourth image showing the printed matter; detect the second image from the fourth image; acquire the editing content from the save destination based on the second image; perform first display processing of displaying the fourth image and the editing content on a second screen; store history information in which the second image is associated with the editing content in a memory; acquire, in a case where the second image is detected from the fourth image in a state where the history information is stored in the memory, the editing content corresponding to the detected second image from the history information; and perform second display processing of displaying, on the second screen, the fourth image including the second image corresponding to the editing content acquired from the history information and the editing content acquired from the history information.
- 2 . The image processing device according to claim 1 , wherein the storing of the history information and the acquisition of the editing content from the history information are realized offline.
- 3 . The image processing device according to claim 1 , wherein the editing device adjusts, in a state where the first video is displayed on the first screen as the background image of the first image, image quality of the first video to edit the inside of the first screen, and the second display processing includes first processing of displaying a second video including the fourth image on the second screen, and second processing of applying the image quality adjusted by the editing device to a background image of the fourth image in the second video.
- 4 . The image processing device according to claim 3 , wherein the second video is a live view image.
- 5 . The image processing device according to claim 3 , wherein the adjustment of the image quality is realized by using a first filter, and the second processing is realized by applying a second filter corresponding to the first filter to the background image of the fourth image in the second video.
- 6 . The image processing device according to claim 5 , wherein the second processor is configured to: cause an application position and application size of the second filter to follow a display position and/or display size of the background image of the fourth image in the second video.
- 7 . The image processing device according to claim 1 , wherein the second processor is configured to: display the fourth image on the second screen in a live view mode; and display, in the second screen, the editing content to follow the fourth image.
- 8 . The image processing device according to claim 7 , wherein the second processor is configured to: cause a display position and/or display size of the editing content in the second screen to follow a change in a display position and/or display size of the fourth image in the second screen.
- 9 . The image processing device according to claim 1 , wherein the second processor is configured to: acquire an image obtained by reflecting the editing content on the fourth image as a still image for recording or a video for recording.
- 10 . The image processing device according to claim 1 , wherein the first display processing and/or the second display processing are realized by executing an application, and the second processor is configured to: perform, in a case where the application is not introduced, introduction processing of introducing the application or auxiliary processing of assisting the introduction of the application.
- 11 . An image processing device comprising: a first processor, wherein the first processor is configured to: display a first image on a first screen and a first video, as a background image of the first image, on the first screen; edit an inside of the first screen in response to a provided instruction in a state where the first image and the first video are displayed on the first screen; save an editing content obtained by editing the inside of the first screen; acquire a second image that enables a save destination of the editing content to be specified; assign the acquired second image to the first image to generate a third image; and output the generated third image from an editing device; and a second processor, wherein the second processor is configured to: capture a printed matter obtained by printing the third image output from the editing device to acquire a fourth image showing the printed matter; detect the second image from the fourth image; acquire the editing content from the save destination based on the second image; and perform third display processing of displaying the fourth image and the editing content on a second screen, the editing device adjusts image quality of the first video in a state where the first video is displayed on the first screen as the background image of the first image to edit the inside of the first screen, and the third display processing includes third processing of displaying a second video including the fourth image on the second screen, and fourth processing of applying the image quality adjusted by the editing device to a background image of the fourth image in the second video.
- 12 . The image processing device according to claim 11 , wherein the second video is a live view image.
- 13 . The image processing device according to claim 12 , wherein the adjustment of the image quality is realized by using a first filter, and the fourth processing is realized by applying a second filter corresponding to the first filter to the background image of the fourth image in the second video.
- 14 . The image processing device according to claim 13 , wherein the second processor is configured to: cause an application position and application size of the second filter to follow a display position and/or display size of the background image of the fourth image in the second video.
- 15 . The image processing device according to claim 11 , wherein the second processor is configured to: cause, in the second screen, the editing content to follow the fourth image.
- 16 . The image processing device according to claim 15 , wherein the second processor is configured to: cause a display position and/or display size of the editing content in the second screen to follow a change in a display position and/or display size of the fourth image in the second screen.
- 17 . The image processing device according to claim 11 , wherein the second processor is configured to: acquire an image obtained by reflecting the editing content on the fourth image as a still image for recording or a video for recording.
- 18 . The image processing device according to claim 11 , wherein the third display processing is realized by executing an application, and the second processor is configured to: perform, in a case where the application is not introduced, introduction processing of introducing the application or auxiliary processing of assisting the introduction of the application.
- 19 . A terminal device comprising: the image processing device according to claim 10 ; and a second communication interface that controls communication between the image processing device and an external device.
- 20 . An image processing method comprising: displaying a first image on a first screen and a first video, as a background image of the first image, on the first screen; editing an inside of the first screen in response to a provided instruction in a state where the first image and the first video are displayed on the first screen; saving an editing content obtained by editing the inside of the first screen; acquiring a second image that enables a save destination of the editing content to be specified; assigning the acquired second image to the first image to generate a third image; outputting the generated third image from an editing device; capturing a printed matter obtained by printing the third image output from the editing device to acquire a fourth image showing the printed matter; detecting the second image from the fourth image; acquiring the editing content from the save destination based on the second image; performing first display processing of displaying the fourth image and the editing content on a second screen; holding history information in which the second image is associated with the editing content; acquiring, in a case where the second image is detected from the fourth image in a state where the history information is held, the editing content corresponding to the detected second image from the history information; and performing second display processing of displaying, on the second screen, the fourth image including the second image corresponding to the editing content acquired from the history information and the editing content acquired from the history information.
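The round trip claimed above — save editing content under a code image, output the coded printed matter, then detect the code and fall back to locally stored history — can be sketched in pure Python. All names here (`EditingDevice`, `ImageProcessingSide`, the dict-based save destination) are hypothetical stand-ins, and a plain string token stands in for the QR-style second image:

```python
import uuid


class EditingDevice:
    """First processor (sketch): saves editing content under a code standing in
    for the 'second image' and outputs the coded 'third image'."""

    def __init__(self, save_store):
        self.save_store = save_store  # hypothetical save destination

    def output(self, first_image, editing_content):
        code = str(uuid.uuid4())  # stand-in for the generated second image
        self.save_store[code] = editing_content  # save the editing content
        return {"image": first_image, "code": code}  # the third image


class ImageProcessingSide:
    """Second processor (sketch): detects the code from the captured fourth
    image, acquires the editing content, and caches it as history (claims 1-2)."""

    def __init__(self, save_store):
        self.save_store = save_store
        self.history = {}  # second image -> editing content (the claimed memory)

    def display(self, fourth_image):
        code = fourth_image["code"]  # detection of the second image
        if code in self.history:  # offline path via stored history (claim 2)
            content = self.history[code]
        else:
            content = self.save_store[code]  # acquire from the save destination
            self.history[code] = content  # store the history information
        return fourth_image["image"], content


store = {}
device = EditingDevice(store)
printed = device.output("portrait.png", {"stamp": "heart", "filter": "warm"})
viewer = ImageProcessingSide(store)
image, content = viewer.display(printed)    # first display processing
image2, content2 = viewer.display(printed)  # second display processing
```

On the first display the content is fetched from the save destination and cached; on the second, the offline path of claim 2 serves it from the stored history without touching the save destination.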
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority under 35 U.S.C. 119 from Japanese Patent Application No. 2022-171798, filed on Oct. 26, 2022, the disclosure of which is incorporated by reference herein.

BACKGROUND

1. Technical Field

The technique of the present disclosure relates to an editing device, an image processing device, a terminal device, an editing method, an image processing method, and a program.

2. Related Art

JP6260840B discloses a content distribution service method using a printed matter. The method includes first to ninth steps. In the first step, a service app is started on a terminal A, and an image is selected. In the second step, a plurality of contents are edited on the image selected on an app screen. In the third step, the terminal A transmits editing information of the image and the contents to a server side. In the fourth step, the server saves the editing information of the image and the contents, and transmits saved information to the terminal A side. In the fifth step, the terminal A generates a predetermined recognition code in which the saved information is recorded and embeds the generated recognition code in the image. In the sixth step, the terminal A transmits the image in which the recognition code is embedded to a printing device side to print the image. In the seventh step, the service app is started on a terminal B, the recognition code is recognized on the printed matter distributed offline, and the saved information is detected. In the eighth step, the terminal B accesses the server based on the detected saved information, and the server then transmits the editing information of the image and the contents. In the ninth step, the image and the edited contents are expressed or started by the terminal B based on the transmitted editing information.
Further, in the ninth step, the contents are expressed or started in a superimposed manner on the image in a time series.

JP2017-011461A discloses an image providing system comprising an image composition device, a server, and a mobile terminal owned by a player. In this image providing system, the image composition device captures a captured image, including a still image and a video of the player, by imaging means, displays the obtained still image on a display screen of display means, composites an edited image such as a character or a graphic input by input means with the still image by editing means to form a composite image, and prints the formed composite image on a sticker mount by printing means.

The server uploads, in a downloadable manner, and saves the captured image and the composite image in association with specific information that specifies the captured image, including the still image and the video, and the composite image formed by the editing means. The mobile terminal has a function of transmitting, to the server, the specific information that specifies the desired captured image and composite image in order to download them.

The image composition device comprises marker generation means and transmission means. The marker generation means generates an augmented reality marker composed of a graphic code, that is, a one-dimensional or two-dimensional code, representing display information that specifies the orientation, position, size, and the like for compositing and displaying, on a real image by an augmented reality technique, the captured image obtained by the imaging means and the composite image formed by the editing means, and causes the printing means to print the generated marker on the sticker mount. The transmission means transmits, to the server, the captured image obtained by the imaging means and the composite image formed by the editing means for saving.

The mobile terminal comprises a camera, reading means, communication means, and composition means. The reading means reads the augmented reality marker printed on the sticker mount. The communication means transmits, to the server, the specific information corresponding to the captured image and the composite image to be downloaded in order to specify them, and calls and receives the specified captured image and composite image from the server. The composition means composites, based on the display information represented by the augmented reality marker read by the reading means, at least one of the still image, the video, or the composite image of the player, among the images received by the communication means, with the image captured by the camera, and displays the result on a screen of a monitor.

In the image providing system described in JP2017-011461A, the marker generation me
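The fifth and seventh steps of the related-art method above — recording saved information in a recognition code and recovering it from the printed matter — can be sketched under the assumption that a base64 text payload stands in for the rasterized QR-style code (the function names are hypothetical):

```python
import base64
import json


def make_recognition_code(saved_info):
    # Fifth step (sketch): record the saved information in a recognition code.
    # A urlsafe base64 JSON payload stands in for a real rasterized code image.
    return base64.urlsafe_b64encode(json.dumps(saved_info).encode()).decode()


def read_recognition_code(code):
    # Seventh step (sketch): recognize the code on the printed matter and
    # detect the saved information recorded in it.
    return json.loads(base64.urlsafe_b64decode(code.encode()).decode())


saved_info = {"server": "example.invalid", "record_id": "a1b2"}
code = make_recognition_code(saved_info)
recovered = read_recognition_code(code)
```

The round trip is lossless, so whatever save-destination identifier the terminal A records is exactly what the terminal B detects before contacting the server.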