US-12621428-B2 - Fail safe surround view
Abstract
A technique including capturing, by one or more cameras of a set of cameras disposed about a vehicle, one or more images, wherein a surround view system of the vehicle is configured to render a surround view image using a first hardware accelerator based on the one or more images, determining that the first hardware accelerator is unavailable, and rendering the surround view image using a second hardware accelerator based on the captured one or more images.
Inventors
- Shashank Dabral
- Aishwarya Dubey
- Gowtham Abhilash TAMMANA
Assignees
- TEXAS INSTRUMENTS INCORPORATED
Dates
- Publication Date: 2026-05-05
- Application Date: 2024-08-21
Claims (20)
- 1 . A method, comprising: receiving, by a device, image data from a set of cameras, wherein the set of cameras includes a first camera and a remainder of the set of cameras; determining, by the device, whether or not there is a failure associated with the first camera; and based on determining that there is a failure associated with the first camera, generating a composite image based on the image data received from the remainder of the set of cameras, but not the first camera.
- 2 . The method of claim 1 , further comprising: detecting the failure associated with the first camera based on receiving a signal indicative of the failure, not receiving a signal indicative of lack of the failure, receiving repeated image data from the first camera, or a combination thereof.
- 3 . The method of claim 1 , wherein generating the composite image comprises: determining a blend line representative of a boundary between a first area visibly covered by the first camera and a second area visibly covered by the remainder of the set of cameras; adjusting the blend line so as to adjust the first area; and generating the composite image based on the adjusted blend line.
- 4 . The method of claim 1 , further comprising: determining whether there is a failure associated with a first image processing component of the device; and based on determining that there is a failure associated with the first image processing component, generating the composite image using a second image processing component of the device.
- 5 . The method of claim 4 , wherein the first image processing component is a first hardware accelerator, and the second image processing component is a second hardware accelerator.
- 6 . The method of claim 4 , wherein the first image processing component is a graphics processing unit (GPU), and the second image processing component is a DSP.
- 7 . The method of claim 4 , wherein the first image processing component operates on a first operating system (OS), and the second image processing component operates on a second OS or operates without an OS.
- 8 . The method of claim 4 , further comprising: detecting the failure associated with the first image processing component based on determining whether or not a frame is generated by the first image processing component.
- 9 . The method of claim 8 , wherein determining whether or not a frame is generated comprises determining whether or not the frame is generated based on information in a register received from the first image processing component.
- 10 . The method of claim 9 , wherein: the information comprises frame information, camera state information, or a combination thereof; the frame information comprises a time stamp associated with a frame, a frame number associated with a frame, or a combination thereof; and the camera state information comprises a status of the first camera.
- 11 . The method of claim 1 , wherein the set of cameras is disposed about a vehicle, and wherein the composite image is a surround view image of the vehicle.
- 12 . A system, comprising: a first image processing component configured to: receive image data from a set of cameras, wherein the set of cameras includes a first camera and a remainder of the set of cameras; determine whether or not there is a failure associated with the first camera; and based on a determination that there is a failure associated with the first camera, generate a composite image based on the image data received from the remainder of the set of cameras, but not the first camera.
- 13 . The system of claim 12 , wherein the first image processing component is further configured to: detect the failure associated with the first camera based on receipt of a signal indicative of the failure, not receipt of a signal indicative of lack of the failure, receipt of repeated frames of same image data from the first camera, or a combination thereof.
- 14 . The system of claim 12 , wherein to generate the composite image, the first image processing component is configured to: determine a blend line representative of a boundary between a first area visibly covered by the first camera and a second area visibly covered by the remainder of the set of cameras; adjust the blend line so as to adjust the first area; and generate the composite image based on the adjusted blend line.
- 15 . The system of claim 12 , further comprising: a second image processing component configured to: determine whether there is a failure associated with the first image processing component; and based on a determination that there is a failure associated with the first image processing component, generate the composite image.
- 16 . The system of claim 15 , wherein the first image processing component is a graphics processing unit (GPU), and the second image processing component is a DSP.
- 17 . The system of claim 15 , wherein the first image processing component operates on a first operating system (OS), and the second image processing component operates on a second OS or operates without an OS.
- 18 . The system of claim 15 , wherein the second image processing component is further configured to: detect the failure associated with the first image processing component based on a determination whether or not a frame is generated by the first image processing component within a specified period of time.
- 19 . The system of claim 18 , wherein the second image processing component is configured to determine whether or not a frame is generated by the first image processing component based on information associated with the frame in a register received from the first image processing component.
- 20 . The system of claim 12 , wherein the set of cameras is disposed about a vehicle, and wherein the composite image is a surround view image of the vehicle.
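Claims 1-3 describe detecting a failed camera and regenerating the composite by shifting the blend lines so that the remaining cameras' areas expand to cover the failed camera's sector. The sketch below illustrates that logic only; all names, the frame-repeat heuristic, and the 1-D angular-boundary model are illustrative assumptions, not the patented implementation:

```python
import hashlib

def camera_failed(frames, heartbeat_ok):
    """Detect a camera failure per claim 2: a missing signal indicating
    health, or repeated identical image data (a frozen sensor).
    Illustrative heuristic only."""
    if not heartbeat_ok:
        return True
    # Three or more byte-identical frames suggest the camera is stuck.
    digests = {hashlib.sha256(f).hexdigest() for f in frames}
    return len(frames) >= 3 and len(digests) == 1

def adjust_blend_lines(blend_lines, failed_idx):
    """Per claim 3, adjust the blend lines bounding the failed camera's
    area so its neighbors cover it. Here the surround view is modeled as
    a ring of angular boundaries (degrees); the failed camera's sector is
    collapsed to its midpoint (assumes the sector does not wrap past 360)."""
    lines = list(blend_lines)
    j = (failed_idx + 1) % len(lines)
    mid = (lines[failed_idx] + lines[j]) / 2
    lines[failed_idx] = mid
    lines[j] = mid
    return lines
```

For example, with boundaries at 0, 90, 180, and 270 degrees, losing the camera covering 90-180 collapses that sector to 135 degrees, so the two adjacent cameras split its coverage.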
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of and claims priority to U.S. application Ser. No. 17/538,661, filed Nov. 30, 2021, which is hereby incorporated herein by reference in its entirety.

BACKGROUND

Increasingly, vehicles, such as cars, airplanes, robots, etc., are being equipped with multiple external cameras to provide to the operator of the vehicle external views of the area surrounding the vehicle. These external views are commonly used to help maneuver the vehicle, such as when backing up or parking a car. These external views also serve as a safety system, as they can help vehicle operators detect and avoid objects, people, animals, etc., that may not otherwise be visible. Multiple camera views may be stitched together to form an external surround view around the vehicle. Because generating these multi-camera views requires multiple cameras, failure of one or more cameras and/or camera support systems can hinder operation of such systems. Additionally, the software for stitching and rendering the camera views may be running in the context of a high-level operating system (HLOS), and software issues may also cause that software to become non-operational. Therefore, it is desirable to have an improved technique for fail-safe operation of a surround view system.

SUMMARY

This disclosure relates to a technique including capturing, by one or more cameras of a set of cameras disposed about a vehicle, one or more images, wherein a surround view system of the vehicle is configured to render a surround view image using a first hardware accelerator based on the one or more images. The technique further includes determining that the first hardware accelerator is unavailable and rendering the surround view image using a second hardware accelerator based on the captured one or more images.
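The summary's fallback flow, rendering on a primary hardware accelerator and switching to a secondary one when the first is found unavailable, can be sketched as follows. The class and function names are illustrative placeholders, not TI's API or the patented implementation:

```python
class Accelerator:
    """Illustrative stand-in for a rendering backend such as a GPU or DSP."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def render_surround_view(self, images):
        # An unavailable accelerator (e.g., hung HLOS, hardware fault)
        # is modeled here as raising an error.
        if not self.healthy:
            raise RuntimeError(f"{self.name} unavailable")
        return f"surround view from {len(images)} images via {self.name}"

def render_with_failover(images, primary, fallback):
    """Try the primary accelerator; on failure, re-render on the fallback.
    Per the disclosure, the fallback may be a different component (e.g., a
    DSP) running a different OS or no OS at all."""
    try:
        return primary.render_surround_view(images)
    except RuntimeError:
        return fallback.render_surround_view(images)
```

In a real system the "unavailable" determination would come from a watchdog (e.g., no new frame produced within a specified period, as in claim 18) rather than an exception, but the control flow is the same.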
Another aspect of the present disclosure relates to an electronic device including an image signal processor configured to receive one or more images captured by one or more cameras, of a set of cameras, disposed about a vehicle. The electronic device further includes a first processor configured to execute instructions to cause the first processor to render a surround view image around the vehicle using a first hardware accelerator based on the received one or more images. The electronic device also includes a second processor configured to execute instructions to cause the second processor to determine that the first hardware accelerator is unavailable and cause the surround view image to be rendered using a second hardware accelerator based on the one or more images captured.

Another aspect of the present disclosure relates to a non-transitory program storage device including instructions stored thereon to cause a first processor to render a surround view image around a vehicle using a first hardware accelerator based on one or more images captured by one or more cameras, of a set of cameras, disposed about the vehicle. The instructions further cause a second processor to determine that the first hardware accelerator is unavailable and cause the surround view image to be rendered using a second hardware accelerator.

BRIEF DESCRIPTION OF THE DRAWINGS

For a detailed description of various examples, reference will now be made to the accompanying drawings in which:

FIGS. 1A and 1B are diagrams illustrating a technique for producing a 3D surround view, in accordance with aspects of the present disclosure.

FIG. 2 is a block diagram illustrating a device for producing a surround view, in accordance with aspects of the present disclosure.

FIG. 3 is a block diagram illustrating a technique for fail-safe surround view, in accordance with aspects of the present disclosure.

FIG. 4 is a block diagram illustrating a device for producing a fail-safe surround view, in accordance with aspects of the present disclosure.

FIG. 5 is an organizational flowchart illustrating a technique for detecting an HLOS failure, in accordance with aspects of the present disclosure.

FIG. 6A is a drawing illustrating a non-compensated (e.g., regular) image from a surround view system with a failed camera, in accordance with aspects of the present disclosure.

FIG. 6B is a drawing illustrating a compensated image from a surround view system with a failed camera, in accordance with aspects of the present disclosure.

FIG. 7 is a flow diagram illustrating a technique for generating a fail-safe surround view, in accordance with aspects of the present disclosure.

The same reference number is used in the drawings for the same or similar (either by function and/or structure) features.

DETAILED DESCRIPTION

FIG. 1A is a diagram illustrating a technique for producing a 3D surround view, in accordance with aspects of the present disclosure. The process for producing a 3D surround view produces a composite image from a viewpoint that appears to be located directly above a vehicle looking straight down. In essence, a virtual top view of the neighborhood aroun