KR-102961070-B1 - Software development kit for image processing

KR 102961070 B1

Abstract

The modular image processing SDK includes APIs for receiving API calls from third-party software running on a portable device, including a camera. The SDK logic receives and processes commands and parameters received from the APIs based on the API calls received from the third-party software. The annotation system performs image processing operations on the feed from the camera based on the image processing commands and parameters received by the annotation system from the SDK logic. Image processing is based at least partially on augmented reality content generator data (or AR content generator), user input, and sensor data.
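The component relationships in the abstract (third-party app calls the API, the SDK logic interprets the call, the annotation system processes the camera feed) can be sketched as follows. All class and method names here are illustrative, not taken from the patent:

```python
from dataclasses import dataclass


@dataclass
class ApiCall:
    """A call made by the third-party app into the SDK's public API."""
    command: str
    parameters: dict


class AnnotationSystem:
    """Performs image processing operations on frames from the camera feed."""

    def process_frame(self, frame, command, parameters,
                      user_input=None, sensor_data=None):
        # A real implementation would render AR content onto the frame;
        # this sketch only records what would be applied and on what basis.
        return {"frame": frame, "applied": command, "params": parameters,
                "user_input": user_input, "sensor_data": sensor_data}


class SdkLogic:
    """Receives commands/parameters from the API and drives the annotation system."""

    def __init__(self, annotation_system):
        self.annotation_system = annotation_system

    def handle(self, call, frame, user_input=None, sensor_data=None):
        return self.annotation_system.process_frame(
            frame, call.command, call.parameters, user_input, sensor_data)


# The third-party app interacts only with the API layer:
sdk = SdkLogic(AnnotationSystem())
result = sdk.handle(ApiCall("apply_ar_effect", {"lens_id": "demo"}), frame="frame-0")
```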

Inventors

  • Charlton, Ebony James
  • Mandia, Patrick
  • Mourkogiannis, Celia Nicole
  • Sokolov, Mykhailo

Assignees

  • Snap Inc.

Dates

Publication Date
2026-05-07
Application Date
2021-06-01
Priority Date
2021-05-03

Claims (20)

  1. A software development kit (SDK) comprising: an application programming interface (API) for receiving API calls from a third-party application running on a portable device, the portable device including a camera; SDK logic for receiving and processing commands and parameters received from the API based on the API calls received from the third-party application, the API calls including an augmented reality content identifier; and an annotation system for performing image processing operations for the third-party application on a feed from the camera based on image processing commands and parameters received by the annotation system from the SDK logic, wherein, based on the SDK being unable to retrieve the image processing commands and parameters from a local data store within the portable device, the SDK logic obtains the image processing commands and parameters from a server hosted by a provider of the SDK, and wherein the image processing commands and parameters are obtained from the server hosted by the provider of the SDK using an organization identifier identifying a developer or provider of the third-party application and using the augmented reality content identifier.
  2. The SDK of claim 1, wherein the annotation system operates on the feed from the camera based on augmented reality content generator data identified by the augmented reality content identifier.
  3. (deleted)
  4. The SDK of claim 2, wherein the third-party application receives third-party data for processing from a server hosted by the developer or provider of the third-party application.
  5. (deleted)
  6. (deleted)
  7. (deleted)
  8. (deleted)
  9. (deleted)
  10. The SDK of claim 1, wherein the image processing operations correspond to image processing operations available in a messaging application, and the third-party application is configured to perform the image processing operations independently of the messaging application.
  11. A system comprising: one or more processors of a machine; a camera; a display; and memory storing instructions including an SDK and a third-party software application, wherein the SDK comprises: an application programming interface (API) for receiving API calls from the third-party software application; SDK logic for receiving and processing commands and parameters received from the API based on the API calls received from the third-party software application, the API calls including an augmented reality content identifier; and an annotation system for performing image processing operations on a feed from the camera based on image processing commands and parameters received by the annotation system from the SDK logic, wherein, based on the SDK being unable to retrieve the image processing commands and parameters from a local data store of the system, the SDK logic obtains the image processing commands and parameters from a server hosted by a provider of the SDK, and wherein the image processing commands and parameters are obtained from the server hosted by the provider of the SDK using an organization identifier identifying a developer or provider of the third-party software application and using the augmented reality content identifier.
  12. The system of claim 11, wherein the SDK further comprises a set of augmented reality content generators including commands and parameters for applying augmented reality experiences to an image or video feed, and wherein, in use, the annotation system performs image processing operations based on a user selection of a particular augmented reality content generator.
  13. The system of claim 12, wherein the set of augmented reality content generators is stored locally in the memory of the system.
  14. (deleted)
  15. The system of claim 11, wherein the third-party software application receives third-party data for processing from a server hosted by the developer or provider of the third-party software application.
  16. (deleted)
  17. The system of claim 12, wherein the parameters of the augmented reality content generators include geographic and temporal limitations.
  18. (deleted)
  19. The system of claim 11, wherein the annotation system processes the feed from the camera based on a configuration of the system, specified object tracking models, user input, and location sensor data.
  20. The system of claim 11, wherein the image processing operations correspond to image processing operations available in a messaging application, and wherein the image processing operations are available through the SDK without launching the messaging application.
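Claims 1 and 11 both describe a fallback in which the SDK first consults a local data store and, on a miss, fetches the image processing commands and parameters from the SDK provider's server, keyed by an organization identifier and an augmented reality content identifier. A minimal sketch of that lookup flow, with a hypothetical in-memory stand-in for the provider's server:

```python
class SdkProviderServer:
    """Stand-in for the server hosted by the SDK provider."""

    def __init__(self, catalog):
        # catalog maps (organization_id, ar_content_id) -> commands/parameters
        self.catalog = catalog

    def fetch(self, organization_id, ar_content_id):
        return self.catalog[(organization_id, ar_content_id)]


class SdkLogic:
    def __init__(self, local_store, server, organization_id):
        self.local_store = local_store          # local data store on the device
        self.server = server
        self.organization_id = organization_id  # identifies the app's developer/provider

    def get_commands_and_parameters(self, ar_content_id):
        # Try the local data store first.
        if ar_content_id in self.local_store:
            return self.local_store[ar_content_id]
        # On a miss, obtain the commands and parameters from the SDK
        # provider's server using both the organization identifier and the
        # AR content identifier, then cache the result locally.
        result = self.server.fetch(self.organization_id, ar_content_id)
        self.local_store[ar_content_id] = result
        return result


server = SdkProviderServer({
    ("org-123", "lens-7"): {"command": "face_mask", "params": {"opacity": 0.8}},
})
sdk = SdkLogic(local_store={}, server=server, organization_id="org-123")
first = sdk.get_commands_and_parameters("lens-7")   # fetched from the server
second = sdk.get_commands_and_parameters("lens-7")  # now served from the local store
```

The second call never reaches the server, which matches the claims' emphasis on the server fetch happening only when local retrieval fails.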

Description

Software development kit for image processing

Cross-Reference to Related Applications

This application claims the benefit of U.S. Provisional Application No. 63/037,348, filed June 10, 2020, and U.S. Patent Application No. 17/302,424, filed May 3, 2021, the contents of each of which are incorporated herein by reference in their entirety.

With the increasing use of digital images, the affordability of portable computing devices, the availability of increased capacity in digital storage media, and improved bandwidth and accessibility of network connections, digital images and videos have become an integral part of daily life for an increasing number of people. Additionally, device users expect the experience of using apps on portable computing devices to become ever more sophisticated and media-rich.

In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. To facilitate identification of the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure in which that element is first introduced. Embodiments are illustrated by way of example, and not limitation, in the accompanying drawings.

FIG. 1 is a diagrammatic representation of a networked environment in which the present disclosure may be deployed, according to some examples. FIG. 2 is a diagrammatic representation of the architecture of the app illustrated in FIG. 1, and its relationship to the developer database and SDK server system of FIG. 1. FIG. 3 is a block diagram illustrating various modules of the annotation system of FIG. 2. FIG. 4 illustrates example user interfaces depicting a carousel for selecting AR content generator data and applying it to media content. FIG. 5 illustrates example user interfaces depicting optional features that may be provided in the user interfaces of FIG. 4. FIG. 6 illustrates example user interfaces that may result from further user actions in the user interfaces of FIG. 4 and FIG. 5. FIG. 7 illustrates a user interface that may be displayed when only one AR content generator is available. FIG. 8 is a flowchart illustrating example methods of navigating the user interfaces of FIGS. 4 through 7. FIG. 9 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed to cause the machine to perform any one or more of the methodologies discussed herein, according to some examples. FIG. 10 is a block diagram showing a software architecture within which examples may be implemented.

Users with diverse interests and in various locations can capture digital images of a variety of subjects and make the captured images available to others via networks such as the Internet. Enabling computing devices to perform image processing or image enhancement operations on various objects and/or features captured in a wide range of changing conditions (e.g., changes in image scale, noise, lighting, movement, or geometric distortion) can be challenging and computationally intensive. Additionally, third-party developers of apps intended for use on personal devices may wish to provide enhanced visual effects but may lack the know-how or budget to provide such effects in their apps. The original developers of systems and technologies that support enhanced visual effects (SDK providers) can enable the use of such effects in apps released by third-party app developers by providing a modular software development kit (SDK), as described in more detail below. As used herein, the terms "third-party developer," "app developer," and "developer" are not limited to actual developers as such, but include persons and entities that host, provide, or own relevant software, apps, SDKs, or services that may originally have been developed by others.
In some cases, the SDK provider also provides a messaging application that includes image modification capabilities as described herein. The SDK provides third-party access to such image modification capabilities to enable a third party to provide image modification features in their app independently of launching the SDK provider's messaging application. As discussed in this specification, the infrastructure supports the creation, viewing, and/or sharing of interactive or enhanced two-dimensional or three-dimensional media in apps released by app developers. The system also supports the creation, storage, and loading of external effects and asset data by third-party developers for use by apps running on client devices. As described herein, images, videos, or other media for enhancement may be captured from a live camera or retrieved from a local or remote data store. In one example, an image is rendered using the system to visualize spatial details/geometry of what the camera sees, in addition to traditional image textures. When a viewer i