
US-12620108-B2 - Privacy preservation system


Abstract

A system comprises a computer including one or more processors and memory. The memory includes instructions such that the one or more processors are programmed to receive at least one of Red-Green-Blue (RGB) image data or infrared (IR) image data from one or more cameras communicatively connected to the one or more processors, where the RGB image data and the IR image data represent an environment including at least one individual disposed along a support surface. The one or more processors receive annotated depth image data that corresponds to the RGB image data and the IR image data. The one or more processors train a neural network with at least one of the RGB image data and the IR image data corresponding to the annotated depth image data, where the neural network is trained to predict when the individual is exiting the support surface.
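
For illustration, the data flow the abstract describes can be sketched in Python as follows. This is a minimal sketch only; the Frame and Annotation types and all field names are hypothetical and are not taken from the patent.

    # Hypothetical sketch of the privacy-preserving data flow: depth frames
    # (which lack personally-identifiable detail) are annotated separately,
    # and the resulting labels supervise a network fed RGB or IR frames.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Frame:
        rgb: np.ndarray | None    # H x W x 3 color image (may contain PII)
        ir: np.ndarray | None     # H x W infrared image (may contain PII)
        depth: np.ndarray         # H x W depth image (annotated off-device)

    @dataclass
    class Annotation:
        exiting_support_surface: bool   # label derived from depth data only

    def training_pair(frame: Frame, ann: Annotation):
        """Pair a PII-bearing modality with the depth-derived label."""
        features = frame.rgb if frame.rgb is not None else frame.ir
        return features, ann.exiting_support_surface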

Inventors

  • Joshua M. Brown-Kramer
  • Steve Kiene
  • Benjamin D. Rush
  • Lucas A. Sabalka
  • Jacob Williams

Assignees

  • Ocuvera LLC

Dates

Publication Date
2026-05-05
Application Date
2023-09-18

Claims (20)

  1. A system comprising a computer including one or more processors and memory, the memory including instructions such that the one or more processors are programmed to: receive at least one of Red-Green-Blue (RGB) image data or infrared (IR) image data from one or more cameras communicatively connected to the one or more processors, wherein the RGB image data or the IR image data represents an environment including at least one individual disposed along a support surface; receive annotated depth image data that corresponds to the at least one of the RGB image data or the IR image data; and train a neural network with the received annotated depth image data and the received at least one of the RGB image data or the IR image data corresponding to the annotated depth image data, wherein the neural network is trained to predict when the at least one individual is exiting the support surface.
  2. The system as recited in claim 1, wherein the one or more processors are further programmed to train the neural network by: receiving training data and training labels, wherein the training data comprises depth image frames and the training labels comprise annotations pertaining to objects within the training data.
  3. The system as recited in claim 2, wherein the one or more processors are further programmed to train the neural network by: associating the training data with either the RGB image data or the IR image data based on one or more mapping techniques.
  4. The system as recited in claim 3, wherein the one or more processors are further programmed to train the neural network by: training the neural network for a predetermined number of epochs based on the training data and the training labels, wherein at least one of one or more weights and one or more parameters of the neural network are updated according to a loss function of the neural network.
  5. The system as recited in claim 4, wherein the one or more processors are further programmed to train the neural network by: comparing an output of the neural network with ground truth to determine a calculated loss.
  6. The system as recited in claim 5, wherein the one or more processors are further programmed to train the neural network by: in response to determining the calculated loss is greater than a predetermined loss threshold, updating at least one of the one or more weights and the one or more parameters of the neural network.
  7. The system as recited in claim 5, wherein the one or more processors are further programmed to train the neural network by: in response to determining the calculated loss is below or equal to a predetermined loss threshold, ceasing training the neural network.
  8. The system as recited in claim 1, wherein the one or more processors are further programmed to map pixels of the depth image data to pixels of the at least one of the RGB image data or the IR image data.
  9. The system as recited in claim 1, wherein the one or more processors are further programmed to update at least one weight or parameter of the neural network.
  10. The system as recited in claim 1, wherein the neural network comprises a convolutional neural network.
  11. The system as recited in claim 1, wherein the at least one of the RGB image data or the IR image data depicts personally-identifiable information (PII).
  12. The system as recited in claim 1, wherein the at least one individual is a patient in a medical environment.
  13. A method, comprising: receiving, by one or more processors, at least one of Red-Green-Blue (RGB) image data or infrared (IR) image data from one or more cameras communicatively connected to the one or more processors, wherein the RGB image data or the IR image data represents an environment including at least one individual disposed along a support surface; receiving, by the one or more processors, annotated depth image data that corresponds to the at least one of the RGB image data or the IR image data; and training, by the one or more processors, a neural network with the received annotated depth image data and the received at least one of the RGB image data or the IR image data corresponding to the annotated depth image data, wherein the neural network is trained to predict when the at least one individual is exiting the support surface.
  14. The method of claim 13, further comprising: receiving training data and training labels, wherein the training data comprises depth image frames and the training labels comprise annotations pertaining to objects within the training data.
  15. The method of claim 14, further comprising: associating the training data with either the RGB image data or the IR image data based on one or more mapping techniques.
  16. The method of claim 15, further comprising: training the neural network for a predetermined number of epochs based on the training data and the training labels, wherein at least one of one or more weights and one or more parameters of the neural network are updated according to a loss function of the neural network.
  17. The method of claim 16, further comprising: comparing an output of the neural network with ground truth to determine a calculated loss.
  18. The method of claim 17, further comprising: in response to determining the calculated loss is greater than a predetermined loss threshold, updating at least one of the one or more weights and the one or more parameters of the neural network.
  19. The method of claim 17, further comprising: in response to determining the calculated loss is below or equal to a predetermined loss threshold, ceasing training the neural network.
  20. A system comprising a first computer and a second computer, each including one or more processors and memory, the memory including instructions such that the one or more processors are programmed to: receive at least one of Red-Green-Blue (RGB) image data or infrared (IR) image data from one or more cameras communicatively connected to the one or more processors of the first computer, wherein the RGB image data or the IR image data represents an environment including at least one individual disposed along a support surface; receive depth image data that corresponds to the at least one of the RGB image data or the IR image data, wherein the received depth image data does not include personally-identifiable information (PII) of the at least one individual, the second computer configured to receive the depth image data; annotate the received depth image data, the second computer configured to annotate the received depth image data; receive the annotated depth image data, the first computer configured to receive the annotated depth image data from the second computer; and train a neural network based on the received annotated depth image data and the received at least one of the RGB image data or the IR image data, wherein the neural network is trained to predict when the at least one individual is exiting the support surface.
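
Claims 4 through 7 (and 16 through 19) together describe a conventional supervised training loop: a predetermined number of epochs, a loss calculated against ground truth, weight updates while the calculated loss exceeds a threshold, and cessation of training once it does not. Below is a minimal PyTorch-style sketch of such a loop, assuming a binary "exiting / not exiting" label; the model, data loader, learning rate, and threshold value are hypothetical, not taken from the patent.

    # Sketch of the claimed training procedure: fixed epoch budget, loss
    # compared against a predetermined threshold, parameter updates otherwise.
    import torch
    import torch.nn as nn

    def train(model: nn.Module, loader, epochs: int = 10,
              loss_threshold: float = 0.05) -> nn.Module:
        criterion = nn.BCEWithLogitsLoss()   # binary "exiting" prediction
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        for _ in range(epochs):              # predetermined number of epochs
            for images, labels in loader:    # RGB/IR frames, depth-derived labels
                logits = model(images)
                loss = criterion(logits, labels)  # compare output with ground truth
                if loss.item() <= loss_threshold:
                    return model             # loss at/below threshold: cease training
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()             # update weights/parameters
        return model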

Description

CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority to U.S. Provisional Application No. 63/376,309, filed Sep. 20, 2022, the contents of which are incorporated herein by reference in their entirety.

BACKGROUND

Cameras can capture images within the camera's field of view. Cameras may be configured to capture data representing color images, i.e., Red-Green-Blue (RGB) images, infrared (IR) images, and/or depth images. In some implementations, cameras capture depth frame data by transmitting near-infrared light over a portion of the camera's field of view and determining a time of flight (TOF) associated with the transmitted light. Some cameras also capture infrared images by detecting and measuring the infrared energy of objects within the camera's field of view.

SUMMARY

A system includes a computer including one or more processors and memory. The memory includes instructions such that the one or more processors are programmed to receive at least one of Red-Green-Blue (RGB) image data or infrared (IR) image data from one or more cameras communicatively connected to the one or more processors. The RGB image data and the IR image data represent an environment including at least one individual disposed along a support surface. The one or more processors receive annotated depth image data that corresponds to the RGB image data and the IR image data. The one or more processors train a neural network with at least one of the RGB image data and the IR image data corresponding to the annotated depth image data, where the neural network is trained to predict when the at least one individual is exiting the support surface.

In another aspect, the one or more processors are further programmed to train the neural network by receiving training data and training labels, wherein the training data comprises depth image frames and the training labels comprise annotations pertaining to objects within the training data. In yet another aspect, the one or more processors are further programmed to train the neural network by associating the training data with either the RGB image data or the IR image data based on one or more mapping techniques. In an aspect, the one or more processors are further programmed to train the neural network for a predetermined number of epochs based on the training data and the training labels, where at least one of one or more weights and one or more parameters of the neural network are updated according to a loss function of the neural network. In another aspect, the one or more processors are further programmed to train the neural network by comparing an output of the neural network with ground truth to determine a calculated loss. In yet another aspect, in response to determining the calculated loss is greater than a predetermined loss threshold, the one or more processors update at least one of the one or more weights and the one or more parameters of the neural network. In an aspect, in response to determining the calculated loss is below or equal to a predetermined loss threshold, the one or more processors cease training the neural network. In another aspect, the one or more processors are further programmed to map pixels of the depth image data to pixels of the at least one of the RGB image data or the IR image data. In yet another aspect, the one or more processors are further programmed to update at least one weight or parameter of the neural network.
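
The mapping aspect above, relating depth pixels to RGB or IR pixels, is commonly implemented by registering the two cameras. The following is a minimal sketch assuming a standard pinhole model; the patent does not commit to a particular mapping technique, and the intrinsics K_depth and K_rgb and the extrinsics R, t are hypothetical calibration inputs.

    # Map a depth pixel to the corresponding color/IR pixel: back-project
    # through the depth intrinsics, apply the depth-to-color extrinsic
    # transform, then re-project through the color intrinsics.
    import numpy as np

    def map_depth_pixel(u, v, depth_m, K_depth, K_rgb, R, t):
        p_depth = depth_m * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
        p_rgb = R @ p_depth + t          # 3-D point in the color camera frame
        uvw = K_rgb @ p_rgb              # project onto the color image plane
        return uvw[0] / uvw[2], uvw[1] / uvw[2]
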
In an aspect, the neural network comprises a convolutional neural network. In another aspect, the support surface is one of a chair and a bed. In yet another aspect, the at least one of the RGB image data or the IR image data depicts personally-identifiable information (PII). In an aspect, the at least one individual is a patient in a medical environment.

In another aspect, a method is disclosed and includes receiving, by one or more processors, at least one of Red-Green-Blue (RGB) image data or infrared (IR) image data from one or more cameras communicatively connected to the one or more processors, where the RGB image data and the IR image data represent an environment including at least one individual disposed along a support surface. The method includes receiving, by the one or more processors, annotated depth image data that corresponds to the RGB image data and the IR image data. The method includes training, by the one or more processors, a neural network with at least one of the RGB image data and the IR image data corresponding to the annotated depth image data, where the neural network is trained to predict when the individual is exiting the support surface. In another aspect, the method further includes receiving training data and training labels, wherein the training data comprises depth image frames and the training labels comprise annotations pertaining to objects within the training data. In yet another aspect, the method further includes associating the training data with either the RGB image data or the IR image data based on one or more mapping techniques.
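
Where the aspects note that the neural network may comprise a convolutional neural network, a minimal single-frame classifier could look like the following sketch; the layer sizes, channel counts, and output head are assumptions, not the patented architecture.

    # Hypothetical convolutional classifier for "exiting / not exiting".
    import torch.nn as nn

    class ExitPredictor(nn.Module):
        def __init__(self, in_channels: int = 3):   # 3 for RGB, 1 for IR
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)   # logit: exiting the support surface?

        def forward(self, x):
            return self.head(self.features(x).flatten(1))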