US-12627880-B2 - Targeted image adjustment
Abstract
A system and method to perform targeted image adjustments start with a processor receiving a media content item and identifying an image adjustment parameter and an adjustment value based on the media content item. The processor generates an adjusted media content item using the image adjustment parameter and the adjustment value and causes an adjustment interface to be displayed by a display of a user device. The adjustment interface can comprise the adjusted media content item and a selectable item associated with the image adjustment parameter. The selectable item can include a plurality of settings. In response to receiving a selection of one of the settings of the selectable item, the processor generates a final media content item based on the selection of the one of the settings and causes the final media content item to be displayed by the display of the user device. Other embodiments are described herein.
Inventors
- Qiang Gao
- Tuo Wang
- Anbang Zhao
Assignees
- SNAP INC.
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2023-09-20
Claims (20)
- 1. A system comprising: a processor; and a memory having instructions stored thereon that, when executed by the processor, cause the system to: receive a media content item; identify, based on an analysis of the media content item using a neural network or a machine-learning system, an image adjustment parameter and an adjustment value associated with the image adjustment parameter to be applied to the media content item, wherein the neural network or the machine-learning system is trained using a plurality of test image adjustment parameters and a plurality of test adjustment values, wherein the test image adjustment parameters comprise brightness, tone, temperature, contrast, gamma, and sharpness; generate an adjusted media content item using the image adjustment parameter and the adjustment value; cause an adjustment interface to be displayed by a display of a user device, wherein the adjustment interface comprises the adjusted media content item and a selectable item associated with the image adjustment parameter, wherein the selectable item includes a plurality of settings to adjust the adjustment value; and in response to receiving a selection of one of the settings of the selectable item, generate a final media content item based on the selection of the one of the settings, and cause the final media content item to be displayed by the display of the user device.
- 2. The system of claim 1, wherein the media content item is a video or image captured using a camera.
- 3. The system of claim 1, wherein the media content item is a pre-capture video or image that is displayed in a viewfinder.
- 4. The system of claim 1, wherein the image adjustment parameter is brightness, tone, temperature, contrast, gamma, or sharpness.
- 5. The system of claim 4, wherein the neural network or the machine-learning system is trained using a plurality of test media content items.
- 6. The system of claim 5, wherein the neural network or the machine-learning system is further trained using the selection of the one of the settings.
- 7. The system of claim 1, wherein the selectable item is a slider or a plurality of selectable buttons or icons.
- 8. The system of claim 1, wherein the instructions further cause the system to: identify a plurality of image adjustment parameters and a plurality of adjustment values based on the media content item; and generate the adjusted media content item using the plurality of image adjustment parameters and the plurality of adjustment values.
- 9. The system of claim 8, wherein the adjustment interface further comprises a plurality of selectable items associated with the plurality of image adjustment parameters, and wherein the instructions further cause the system to: generate the final media content item based on a selection of a plurality of settings of the plurality of selectable items associated with the plurality of image adjustment parameters.
- 10. A method comprising: receiving, by a processor, a media content item; identifying, based on an analysis of the media content item using a neural network or a machine-learning system, an image adjustment parameter and an adjustment value associated with the image adjustment parameter to be applied to the media content item, wherein the neural network or the machine-learning system is trained using a plurality of test image adjustment parameters and a plurality of test adjustment values, wherein the test image adjustment parameters comprise brightness, tone, temperature, contrast, gamma, and sharpness; generating an adjusted media content item using the image adjustment parameter and the adjustment value; causing an adjustment interface to be displayed by a display of a user device, wherein the adjustment interface comprises the adjusted media content item and a selectable item associated with the image adjustment parameter, wherein the selectable item includes a plurality of settings to adjust the adjustment value; and in response to receiving a selection of one of the settings of the selectable item, generating a final media content item based on the selection of the one of the settings, and causing the final media content item to be displayed by the display of the user device.
- 11. The method of claim 10, wherein the media content item is a video or image captured using a camera.
- 12. The method of claim 10, wherein the media content item is a pre-capture video or image that is displayed in a viewfinder.
- 13. The method of claim 10, wherein the image adjustment parameter is brightness, tone, temperature, contrast, gamma, or sharpness.
- 14. The method of claim 13, wherein the neural network or the machine-learning system is trained using a plurality of test media content items.
- 15. The method of claim 14, wherein the neural network or the machine-learning system is further trained using the selection of the one of the settings.
- 16. The method of claim 10, wherein the selectable item is a slider or a plurality of selectable buttons or icons.
- 17. The method of claim 10, further comprising: identifying a plurality of image adjustment parameters and a plurality of image adjustment values based on the media content item; and generating the adjusted media content item using the plurality of image adjustment parameters and the plurality of image adjustment values.
- 18. The method of claim 17, wherein the adjustment interface further comprises a plurality of selectable items associated with the plurality of image adjustment parameters, and wherein the method further comprises: generating the final media content item based on a selection of a plurality of settings of the plurality of selectable items.
- 19. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that, when executed by a computer, cause the computer to: receive a media content item; identify, based on an analysis of the media content item using a neural network or a machine-learning system, an image adjustment parameter and an adjustment value associated with the image adjustment parameter to be applied to the media content item, wherein the neural network or the machine-learning system is trained using a plurality of test image adjustment parameters and a plurality of test adjustment values, wherein the test image adjustment parameters comprise brightness, tone, temperature, contrast, gamma, and sharpness; generate an adjusted media content item using the image adjustment parameter and the adjustment value; cause an adjustment interface to be displayed by a display of a user device, wherein the adjustment interface comprises the adjusted media content item and a selectable item associated with the image adjustment parameter, wherein the selectable item includes a plurality of settings to adjust the adjustment value; and in response to receiving a selection of one of the settings of the selectable item, generate a final media content item based on the selection of the one of the settings, and cause the final media content item to be displayed by the display of the user device.
- 20. The non-transitory computer-readable storage medium of claim 19, wherein the neural network or the machine-learning system is trained using a plurality of test media content items, wherein the image adjustment parameter is brightness, tone, temperature, contrast, gamma, or sharpness.
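The pipeline recited in the claims above (receive a media content item, identify an adjustment parameter and value, apply them, then let the user refine the value through a selectable item's settings) can be illustrated with a short sketch. The claims do not specify any adjustment math, so the formulas, the function name `apply_adjustment`, and the grayscale-pixel representation below are purely illustrative assumptions, not the patented implementation:

```python
# Illustrative sketch of the adjustment step from claim 1: apply one
# identified image adjustment parameter and its adjustment value to pixel
# data. The formulas are common textbook definitions chosen for
# illustration only; the patent does not define the adjustment math.

def apply_adjustment(pixels, parameter, value):
    """Apply one adjustment to a list of 8-bit grayscale pixel values.

    parameter: 'brightness' (additive offset), 'contrast' (scale about
    mid-gray), or 'gamma' (power-law correction). Other parameters named
    in the claims (tone, temperature, sharpness) are omitted for brevity.
    """
    def clamp(x):
        # Keep results in the valid 8-bit range [0, 255].
        return max(0, min(255, int(round(x))))

    if parameter == "brightness":      # shift every pixel by `value`
        return [clamp(p + value) for p in pixels]
    if parameter == "contrast":        # scale distance from mid-gray 128
        return [clamp((p - 128) * value + 128) for p in pixels]
    if parameter == "gamma":           # power-law on normalized values
        return [clamp(255 * (p / 255) ** value) for p in pixels]
    raise ValueError(f"unsupported parameter: {parameter}")

# The selectable item's settings can be modeled as candidate adjustment
# values the user picks among; each pick regenerates the preview.
preview = apply_adjustment([0, 128, 255], "brightness", 20)  # → [20, 148, 255]
```

In this reading, the neural network's role is only to propose the initial (`parameter`, `value`) pair, and the final media content item is produced by re-running the same application step with the user-selected setting.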
Description
CROSS-REFERENCE TO RELATED APPLICATION
This patent application claims the benefit of U.S. Provisional Patent Application No. 63/479,200, filed Jan. 10, 2023, entitled "TARGETED IMAGE ADJUSTMENT", which is incorporated by reference herein in its entirety.
BACKGROUND
Electronic messaging, particularly instant messaging, continues to grow globally in popularity. Users are able to instantly share with one another electronic media content items including text, audio, images, pictures, and videos. Current client or user devices, such as smartphones, are equipped with cameras for the user to quickly capture pictures and videos to be shared. However, these cameras still fail to work equitably for everyone in every situation or lighting condition.
The deficiencies of cameras stem from camera design at its inception. The "Shirley Cards," introduced in the 1940s, are the color reference cards used to perform skin-color balance in still-photography printing. Cameras were designed specifically to capture the skin tone of the White woman featured on the "Shirley Cards." Since the camera was not invented with people of all skin tones in mind, the design process failed to recognize the need for an extended dynamic range. Current cameras are still not appropriately designed to account for and optimize pictures and videos for all skin tones.
When capturing pictures in low light, current cameras search for light or a lightened part within the viewfinder before the shutter is released. If there is no lightened part, the camera focuses on a dark part within the viewfinder and is rendered inactive. In other words, the camera only knows how to calibrate itself against lightness to define an image. Similarly, innovative technology such as facial tracking is unable to recognize darker skin tones in some lighting conditions.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced. Some non-limiting examples are illustrated in the figures of the accompanying drawings, in which:
FIG. 1 is a diagrammatic representation of a networked environment in which the present disclosure may be deployed, according to some examples.
FIG. 2 is a diagrammatic representation of a messaging system, according to some examples, that has both client-side and server-side functionality.
FIG. 3 is a diagrammatic representation of a data structure as maintained in a database, according to some examples.
FIG. 4 is a diagrammatic representation of a message, according to some examples.
FIG. 5 illustrates a system in which the head-wearable apparatus can be implemented, according to some examples.
FIG. 6 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed to cause the machine to perform any one or more of the methodologies discussed herein, according to some examples.
FIG. 7 is a block diagram showing a software architecture within which examples may be implemented.
FIG. 8 illustrates a process of performing targeted image adjustments in accordance with one embodiment.
FIG. 9 illustrates an adjustment interface in accordance with one embodiment.
FIG. 10 illustrates an adjustment interface in accordance with one embodiment.
DETAILED DESCRIPTION
Embodiments of the present disclosure improve the functionality of camera systems as well as electronic messaging software and systems by generating targeted adjustments to media content items, including images (e.g., photos and videos) captured using the cameras, that account for different skin tones in every situation and lighting condition. This ensures that the cameras and the messaging system can equitably produce high-quality images for every user, thereby improving the camera experience for all users. Specifically, embodiments of the present disclosure describe a targeted image adjustment system that implements an algorithm that automatically identifies at least one image adjustment parameter and at least one adjustment value to be applied to a media content item in order to generate an adjusted media content item. The image adjustment parameters can include, for example, brightness, tone, temperature, contrast, gamma, sharpness, etc. The media content item can be a pre-capture image or video that is being displayed in the viewfinder of the user system and being pre-processed by the targeted image adjustment system, or can be an image or video that is captured using the camera of the user system and being post-processed by the targeted image adjustment system. The pre-capture image or video is an image or video that is not yet captured and stored by the camera, but is being displayed in the viewfinder as a preview for the