EP-4742193-A1 - SYSTEM AND METHOD FOR DETERMINATION OF 3D POSE IN A VEHICLE

EP 4742193 A1

Abstract

System for estimating 3D pose of a vehicle occupant, comprises a camera with a field of view of a cabin interior. One or more processors detect and classify a relative pose of a vehicle occupant (11,12,13,14) from the captured image. An absolute depth/location of a joint, e.g. a hip joint (16,17), of the occupant is determined from at least one known vehicle interior dimension relative to the camera. A hip plane of the occupant is determined, e.g. relative to a seating plane of the seat. The collected data is fused to estimate the pose in three-dimensional space based on the relative pose and the absolute depth. A seat occupancy algorithm can detect occupancy of a seat and adjust the seating plane based on adjustments made to the seat and also presence or absence of a child seat.

Inventors

  • KARAPETYAN, Ani
  • KOCHHAR, Anirudh
  • GEORGE, Amil
  • REHFELD, Timo

Assignees

  • Aptiv Technologies AG

Dates

Publication Date
2026-05-13
Application Date
2024-11-08

Claims (15)

  1. A system for estimating 3D pose of a vehicle occupant, comprising: an image sensor configured to capture at least one image of a vehicle cabin interior with a field of view that includes at least one occupant; at least one processor configured to: detect and classify a relative pose of an occupant from the captured two-dimensional image; compute an absolute depth/location of a joint of the occupant using at least one known vehicle interior dimension; estimate the classified pose in three-dimensional space based on the relative pose and the absolute depth.
  2. The system according to claim 1, wherein the processor generates an output based on the 3D pose for use by a vehicle safety device.
  3. The system according to claim 2, comprising an airbag deployment device, wherein the output configures parameters of the airbag deployment device.
  4. The system according to any preceding claim, wherein the processor comprises a fusion module for fusing the outputs of a neural network that performs the detection and classification of relative pose, and prior measured information of the at least one known vehicle interior dimension.
  5. The system according to any preceding claim, wherein computation of the absolute depth/location of the joint comprises determination of a seating plane.
  6. The system according to claim 5, wherein the joint is a hip joint and a hip plane is determined relative to the seating plane.
  7. The system according to claim 5 or 6, wherein the at least one processor is further configured to execute a seat occupancy algorithm for determining the presence or absence of an occupant in a seating position and/or whether the seating position has been adjusted, in which case the seating plane is updated.
  8. The system according to claim 7, wherein the seat occupancy algorithm comprises an initialization process beginning with a default seating plane and: when a seat is detected as empty, identifies known points in the cabin that will not be obscured by a vehicle occupant at the seating position to initialize the seating plane; or when a seat is detected as occupied, determines if seat adjustments have been made and, if so, updates the seating plane.
  9. The system according to claim 7 or 8, wherein the seat occupancy algorithm is configured to determine the presence of a child seat and: if no child seat is determined, sets calibration parameters for an adult seating plane; or if a child seat is determined, sets calibration parameters for a child seat seating plane.
  10. The system according to any preceding claim, further comprising a sensor for detecting an occupant in a seat and/or at least one seat adjustment device, wherein the processor is configured to log an adjustment by the seat adjustment device for assisting computation of the absolute depth/location of the joint of the occupant.
  11. A computer implemented method for estimating 3D pose of a vehicle occupant, comprising the steps of: capturing an image of an interior cabin of a vehicle, by a camera; utilizing a model to detect relative poses from the captured image, along with per-joint root/person-relative depth values, and to output a pose classification; and mapping the relative poses to absolute poses, by computing an absolute depth/location of a body joint of the occupant for each classified pose, based on a known dimension in the cabin relative to the camera.
  12. The computer implemented method of claim 11, wherein for a seating pose the body joint is a hip joint.
  13. The computer implemented method of claim 11, wherein for a non-seating pose, a root-depth estimation network is used.
  14. A non-transitory computer readable medium comprising instructions which, when executed by a processor, implement the method of any of claims 11 to 13.
  15. A vehicle comprising the system of any of claims 1 to 10.
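The mapping recited in claims 11 to 13 — lifting per-joint root/person-relative depth values to absolute 3D coordinates once one absolute root depth is known — can be sketched as follows. This is only an illustrative reading of the claims under a pinhole camera assumption; the function name, intrinsics and all numbers are hypothetical, not part of the claims:

```python
# Illustrative sketch (not the patented implementation): once an absolute
# root depth is known (e.g. from a known cabin dimension such as the
# seating plane), per-joint root-relative depths from a 2D pose network
# can be lifted to absolute 3D points via pinhole back-projection.
# Intrinsics (fx, fy, cx, cy) and all values below are hypothetical.

def lift_to_3d(joints_2d, rel_depths, root_depth, fx, fy, cx, cy):
    """Map 2D joint pixels plus root-relative depths to absolute 3D points.

    joints_2d : list of (u, v) pixel coordinates
    rel_depths: per-joint depth offsets relative to the root joint (metres)
    root_depth: absolute depth of the root joint (metres)
    """
    points_3d = []
    for (u, v), dz in zip(joints_2d, rel_depths):
        z = root_depth + dz      # absolute depth of this joint
        x = (u - cx) * z / fx    # back-project pixel column to metric X
        y = (v - cy) * z / fy    # back-project pixel row to metric Y
        points_3d.append((x, y, z))
    return points_3d
```

A joint lying at the image centre with zero relative depth then lands exactly at the root depth on the optical axis, while off-centre joints spread laterally in proportion to their depth.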

Description

Field

The present disclosure relates to a system, method and associated software for determining/estimating a human pose, i.e. in three-dimensional space, in a vehicle. The invention is particularly relevant for implementing safety functions and related improvements in a vehicle.

Background

Modern vehicles, e.g. fully or semi-autonomous driving cars and/or those with advanced driver assistance systems (ADAS), offer significant improvements in safety for occupants. Such vehicles are typically equipped with onboard cameras that are capable of capturing images of the vehicle's interior, e.g. as part of a driver monitoring system (DMS). These images can then be used, often in combination with other sensors, for different safety-related tasks. Such tasks may involve not only detecting occupants in the vehicle, but also categorizing people and their positions. Of course, a driver and passengers do not always sit still in a vehicle cabin and, instead, may adjust their pose or activity. Accordingly, tracking and determining occupant characteristics in three-dimensional space from a monocular camera image is challenging due to the inability to retrieve absolute depth information; further, it is not possible for a human annotator to properly categorize and annotate 3D information on a 2D image for training models. Yet, absolute 3D pose estimation of an occupant is desirable and necessary for several downstream tasks such as dynamic airbag deployment, seating pose classification, body size estimation, gesture recognition, etc.

Summary

In view of the above considerations, there is a need for improving/enabling in-cabin 3D human pose estimation. At the least, the invention should provide an alternative to available pose estimation methods in the automotive field. According to a first aspect, an in-cabin 3D pose estimation system is provided according to claim 1, e.g. including interpreting a 2D image, processed by a pose detection algorithm and based on a seating pose assumption. In one form, the invention is embodied by fusing the outputs of a neural network, a seat occupancy algorithm, prior information about the camera and cabin, and sensor signals in the vehicle. Such a fusion ultimately enables the system to estimate absolute 3D poses.

Broadly, the system and associated methodology are adapted to: capture/receive at least one image/frame of a vehicle cabin interior, e.g. from a camera in the cabin with a field of view (FOV) that includes at least one occupant; detect, e.g. including per-joint root/person-relative depth values, and classify a relative pose of an occupant from the captured image/frame; compute an absolute depth/location of a single joint (e.g. known point) for each classified pose using at least one known vehicle interior dimension; and estimate (e.g. in a fusion module) the pose in three-dimensional space based on the relative pose and the absolute depth. The estimate may be output for the purpose of setting parameters for a safety function of the vehicle, e.g. disabling or modifying airbag deployment. In this way, 3D pose is estimated more accurately, resulting in reliable implementation of safety features.

In embodiments, computation of the absolute depth/location of the single joint comprises determination of a seating plane. The single joint may be a hip joint and a hip plane may be at or parallel to the seating plane. In this way, with a hip joint as a reference point for depth, determined relative to known dimensions in the vehicle within the FOV of the camera, accurate information about the 3D pose is estimated. In other words, the invention may utilize known, e.g. prior measured, information about the cabin, camera and vehicle seat sensors to determine a hip plane for each occupant. In embodiments, the system comprises at least one sensor for detecting an occupant at a seating position.
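The hip-plane depth computation described above amounts to simple ray-plane geometry: the camera ray through the detected hip pixel is intersected with the known seating/hip plane to recover an absolute depth. A minimal sketch, assuming a pinhole camera model; the intrinsics, plane representation and all numbers are hypothetical, not taken from the patent:

```python
# Illustrative sketch: recover the absolute depth of a hip joint by
# intersecting the camera ray through the hip pixel with a known plane
# n . X = d (e.g. the seating/hip plane, measured in advance relative to
# the camera). Pinhole model assumed; all parameters are hypothetical.

def hip_depth_from_seating_plane(u, v, fx, fy, cx, cy, plane_n, plane_d):
    """Return the depth (camera-frame z, metres) of the hip joint.

    (u, v)          : hip pixel from the 2D pose detector
    fx, fy, cx, cy  : pinhole camera intrinsics
    plane_n, plane_d: seating/hip plane n . X = d in camera coordinates
    """
    # Direction of the ray through pixel (u, v), normalized so z = 1
    ray = ((u - cx) / fx, (v - cy) / fy, 1.0)
    denom = sum(n * r for n, r in zip(plane_n, ray))
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the seating plane")
    t = plane_d / denom   # ray parameter at the plane intersection
    return t * ray[2]     # depth (z component) of the intersection
```

For a plane parallel to the image plane at z = 1.5 m (normal (0, 0, 1), offset 1.5), every pixel maps to a depth of 1.5 m; a tilted or horizontal plane yields a depth that varies with the pixel location, which is exactly why the hip pixel must be located first.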
In embodiments, an initialization process may be undertaken to identify known points in the cabin that will not be obscured by a vehicle occupant at the seating position. In this way, any adjustments in the seat height can be factored into 3D pose estimation. Initialization may commence with a default seating plane that can be updated with the detected seating plane if the seat position is empty upon activation of the system. If the seat is occupied upon initialization, any seat adjustments may be detected (e.g. via sensors in the seat) for the purpose of updating the seating plane. Calibration parameters may be set based on the updated seating plane. A child seat (empty or not) may be detected (e.g. by image recognition) from the captured image, causing a recalibration of the seating plane. In this way, the seating plane will be adjusted to sit above the default/initial seating plane if a child seat is present.

The system is embodied by a methodology according to claim 11. For example, where a first step requires a model to detect poses from a two-dimensional image, e.g. a software model trained to detect/track join
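The initialization and calibration logic sketched above (default plane, empty-seat measurement from unobscured cabin points, sensor-driven updates, child-seat recalibration) might be organized as follows. This is a hedged sketch only: the scalar plane-height representation, the offset value and all names are assumptions for illustration, not taken from the patent:

```python
# Illustrative sketch of the seat occupancy initialization described in
# the summary (and claims 8-9). The seating plane is reduced to a scalar
# height for simplicity; the child-seat offset is a made-up constant.

CHILD_SEAT_OFFSET = 0.15  # hypothetical raise of the hip plane (metres)

def initialize_seating_plane(seat_occupied, child_seat_detected,
                             seat_adjusted, default_plane,
                             plane_from_landmarks, plane_from_sensors):
    """Return (seating_plane, calibration_mode) at system start-up.

    plane_from_landmarks: callable measuring the plane from known cabin
                          points (only valid while the seat is empty)
    plane_from_sensors  : callable returning the plane implied by logged
                          seat-adjustment signals
    """
    if not seat_occupied:
        # Empty seat: known cabin points are unobscured, measure directly.
        plane = plane_from_landmarks()
    elif seat_adjusted:
        # Occupied and moved: update the plane from seat-adjustment sensors.
        plane = plane_from_sensors()
    else:
        # Occupied, no adjustment detected: keep the default plane.
        plane = default_plane
    mode = "child_seat" if child_seat_detected else "adult"
    if child_seat_detected:
        # A child seat raises the effective hip plane above the seat plane.
        plane = plane + CHILD_SEAT_OFFSET
    return plane, mode
```

Calibration parameters for the downstream depth computation would then be chosen per mode, e.g. an adult seating plane versus a raised child-seat plane.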