US-20260127788-A1 - SYSTEM AND METHOD FOR DIGITAL MAKEUP MIRROR
Abstract
A computer-implemented method for emulating a mirror using a camera video stream and a display screen to generate a digital mirror. The digital mirror is specifically configured for headshot applications, such as makeup and eyeglass try-on sessions. Provisions are made for correcting the appearance of the face on the screen. Specific implementations enable tracking the movement of the face or of specific facial features and applying virtual makeup, virtual glasses, other accessories, or other filters to the face. Recording sessions and auto-editing provide the user with easy access to the tutorial session and the products used during the session. Products may be ordered from the user's mobile device at any time.
Inventors
- Ofer Saban
- Nissi Vilcovsky
Assignees
- EyesMatch Ltd.
Dates
- Publication Date: 2026-05-07
- Application Date: 2025-09-23
- Priority Date: 2017-04-04
Claims (18)
- 1. (canceled)
- 2. A system for automatically performing measurements for fitting eyewear, comprising: a video capture module for receiving digital images of a user's face; an event module for estimating head orientation in the digital images; a coefficient calculation module for identifying a known-size reference in the digital images and deriving from the known-size reference a pixel coefficient enabling conversion of pixel distance in the digital images to physical distance; and an inter-pupillary distance (PD) module for using the pixel coefficient to generate a PD measurement from the digital images of the user's face.
- 3. The system of claim 2, wherein the known-size reference comprises iris diameter and the pixel coefficient is derived by measuring an iris diameter expressed in number of pixels and taking a ratio of the diameter expressed in number of pixels and an average human iris size expressed in millimeters to thereby correlate object size in the digital image to actual physical size.
- 4. The system of claim 2, further comprising a glasses module for identifying glasses in the digital images and using the pixel coefficient to generate at least one of lens height and lens width of the physical glasses.
- 5. The system of claim 4, further comprising an SH/OC module for using the pixel coefficient and the lens height to generate at least one of: segment height (SH) measurement and ocular center (OC) height from the digital images of the user's face.
- 6. The system of claim 4, wherein the glasses module identifies one of physical or virtual glasses in the digital images.
- 7. The system of claim 2, further comprising a prescription module for identifying a lens area of glasses within the digital images, calculating optical power at multiple locations within the lens area, and generating a prescription based on the optical power at the multiple locations.
- 8. The system of claim 2, wherein the PD module defines a line passing through a left iris and a right iris in the digital images; defines a nose center point on the line; and measures from the nose center point a left pupillary distance and a right pupillary distance.
- 9. The system of claim 2, further comprising a virtual try-on (VTO) module for superimposing an eyeglasses image over the user's face; overlaying a graphical image indicating at least one of an ocular center (OC) height and a segment height (SH) on the user's face; and providing a user interface for controlling the graphical image.
- 10. The system of claim 2, further comprising a registration module for providing an offset of the user's face orientation with respect to an ideal orientation.
- 11. The system of claim 10, wherein the registration module projects on a display screen a graphical target that indicates a proper position of the user's face for improved measurement accuracy.
- 12. The system of claim 2, further comprising: a display screen for displaying the digital images; a memory for storing a plurality of virtual articles; and a virtual try-on (VTO) module for fetching a virtual article from the memory and scaling the virtual article using the pixel coefficient to display the virtual article on the display screen.
- 13. The system of claim 12, wherein the virtual article comprises an eyeglasses frame, and wherein the VTO module further comprises an augmentation module for applying virtual lens characteristics to an interior part of the eyeglasses frame.
- 14. The system of claim 13, wherein the lens characteristics comprise one or more of lens thickness, UV coating, tint color, and tint opacity.
- 15. The system of claim 12, wherein the virtual article comprises an alpha channel of eyeglasses and an RGB channel of the eyeglasses.
- 16. The system of claim 2, wherein the event module further comprises accuracy thresholds assigned to vertical gaze and horizontal gaze errors obtained from the head orientation to determine whether a digital image is acceptable for the PD measurement.
- 17. The system of claim 2, wherein the known-size reference comprises iris diameter and the pixel coefficient is derived by measuring a right eye iris diameter expressed in number of pixels and a left eye iris diameter expressed in number of pixels, and comparing the difference between the right eye iris diameter and the left eye iris diameter to a threshold; wherein when the difference is below the threshold, taking a ratio of an average of the right eye iris diameter and the left eye iris diameter and an average human iris size expressed in millimeters to thereby correlate object size in the digital image to actual physical size.
- 18. The system of claim 2, further comprising: an adjusting module for overlaying landmarks on the digital images of the user's face and generating an adjustment interface enabling a user to fine-tune locations of the landmarks.
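The measurement pipeline described in claims 3, 8, and 17 can be sketched as follows. This is a minimal illustration only; the function names, the pixel-difference threshold, and the 11.7 mm value for the average human iris diameter are assumptions for demonstration and are not specified by the claims.

```python
# Assumed average human iris (corneal) diameter in millimeters; the claims
# refer only to "an average human iris size" without fixing a value.
AVG_IRIS_MM = 11.7

def pixel_coefficient(right_iris_px, left_iris_px, max_diff_px=2.0):
    """Derive a mm-per-pixel coefficient from measured iris diameters.

    Per claim 17, the two iris diameters are compared to a threshold and,
    when they agree, their average is ratioed against the average human
    iris size to correlate pixel distance with physical distance.
    """
    if abs(right_iris_px - left_iris_px) > max_diff_px:
        raise ValueError("iris diameters disagree; capture a new frame")
    avg_px = (right_iris_px + left_iris_px) / 2.0
    return AVG_IRIS_MM / avg_px  # millimeters per pixel

def pupillary_distances(left_pupil, right_pupil, nose_center_x, coeff):
    """Measure monocular PDs from a nose center point (claim 8).

    Pupil positions are (x, y) pixel coordinates on the line through the
    irises; distances are converted to millimeters via the coefficient.
    """
    left_pd_mm = abs(left_pupil[0] - nose_center_x) * coeff
    right_pd_mm = abs(right_pupil[0] - nose_center_x) * coeff
    return left_pd_mm, right_pd_mm
```

For example, an iris imaged at 58.5 pixels yields a coefficient of 0.2 mm per pixel, so a pupil 75 pixels from the nose center corresponds to a 15 mm monocular PD.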
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of co-pending U.S. patent application Ser. No. 18/581,350, filed on Feb. 19, 2024, which is a continuation of U.S. patent application Ser. No. 15/759,819, filed on Mar. 13, 2018, now U.S. Pat. No. 11,908,052, granted on Feb. 20, 2024, which is a 371 National Stage Entry of PCT/US2017/040133, filed on Jun. 29, 2017, which claims the benefit of, and priority to, U.S. Provisional Patent Application No. 62/430,311, filed on Dec. 5, 2016, U.S. Provisional Patent Application No. 62/356,475, filed on Jun. 29, 2016, and Canadian Patent Application No. 2,963,108, filed on Apr. 4, 2017. The entire disclosures of all of the above-listed applications are incorporated herein by reference.
BACKGROUND
1. Field
This disclosure relates to digital mirrors and, more specifically, to digital mirrors that are specifically configured for headshots, such as makeup sessions and eyeglass try-on sessions.
2. Related Art
The conventional mirror (i.e., a reflective surface) is the most common and reliable tool for an individual to explore actual self-appearance in real time. A few alternatives have been proposed that combine a camera and a screen to replace the conventional mirror. However, these techniques are not convincing and are not yet accepted as a reliable image of the individual, as if the individual were looking at himself in a conventional mirror. This is mainly because the image generated by a camera is very different from an image generated by a mirror. Applicants have previously disclosed novel technologies for converting and transforming a still image, or 2D or 3D video created by one or more cameras, with or without other sensors, into a mirror or video-conference experience. Examples of Applicants' embodiments are described in, e.g., U.S. Pat. Nos. 7,948,481 and 8,982,109. The embodiments disclosed therein can be implemented for any general use of a mirror.
Applicant followed with further disclosures relating to adapting the mirror to specific needs, such as, e.g., clothing stores. Examples of Applicants' embodiments are described in, e.g., U.S. Pat. Nos. 8,976,160 and 8,982,110. In many department and beauty stores, demonstration makeup sessions are provided to customers. The objective is that, if the customer likes the result, the customer will purchase some of the items used during the demonstration. However, once the session is over and the customer has left the store, the customer may not remember which products were used or how to apply them. Moreover, at times the customer may want to try several different products, e.g., to compare different lipstick colors, but would not want to apply and remove different makeup products successively.
SUMMARY
The following summary of the disclosure is included in order to provide a basic understanding of some aspects and features of the invention. This summary is not an extensive overview of the invention, and as such it is not intended to particularly identify key or critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description presented below. Disclosed embodiments include a transformation module which transforms the video stream received from the camera and generates a transformed stream which, when projected on the monitor screen, makes the image appear like a mirror's image. As can be experienced with devices having cameras mounted above the screen (e.g., video conferencing on a laptop), the image generated is not personal, as the user seems to be looking away from the camera. This is indeed the case, because the user is looking directly at the screen, but the camera is positioned above the screen.
Therefore, the transformation module transforms each frame (i.e., each image) such that it appears as if it was taken by a camera positioned behind the screen; that is, the image appears as if the user is looking directly at a camera positioned behind the screen, even though the image is taken by a camera positioned above or beside the screen. The translation module adjusts the presentation of the image on the screen so that the face appears centered on the screen, regardless of the height of the user. An eyesmatch unit transforms the image of the eyes of the user, such that it appears that the eyes are centered and looking directly at the screen, just as when looking at a mirror. Also, an augmented reality module enables the application of virtual makeup to the user's image projected on the monitor screen. According to disclosed embodiments, a system is provided for capturing, storing, and reorganizing a makeup session, whether real or virtual. A demonstration makeup session is done using any embodiment of the digital mirrors described herein. The demonstration makeup session is recorded at any desired length (usually 5-20 minutes).
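The flip-and-center behavior attributed above to the transformation and translation modules can be sketched as follows. This is a minimal illustration operating on a plain pixel grid; the function name, the wrap-around shift, and the coordinate conventions are assumptions for demonstration, not the patented method (which, per the summary, performs a full viewpoint transformation rather than a simple flip).

```python
def mirror_view(frame, face_center, screen_center):
    """Approximate a mirror image from a camera frame.

    Step 1: flip the frame left-right, since a camera image has the
    opposite handedness from a mirror reflection.
    Step 2: shift the frame so the detected face center lands at the
    screen center, regardless of the user's height.
    `frame` is a list of rows of pixel values; centers are (row, col).
    """
    h, w = len(frame), len(frame[0])
    flipped = [row[::-1] for row in frame]          # mirror parity
    fy, fx = face_center
    fx = w - 1 - fx                                 # face column after the flip
    dy, dx = screen_center[0] - fy, screen_center[1] - fx
    # Wrap-around shift keeps the sketch short; a real system would crop/pad.
    shifted = [flipped[(y - dy) % h] for y in range(h)]
    return [[row[(x - dx) % w] for x in range(w)] for row in shifted]
```

For instance, a face pixel detected at row 1, column 1 of a 4x4 frame ends up at the requested screen center (2, 2) after the flip and shift.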