US-12618685-B2 - Sentiment-based navigation

US12618685B2

Abstract

Sentiment-based navigation is provided herein. A method can include extracting features of sensor data captured by a sensor associated with a vehicle, wherein the sensor data is representative of a subject selected from a group of subjects comprising an occupant of the vehicle and an environment in which the vehicle is located, resulting in extracted features. The method can further include determining sentiment data representative of an emotional condition of the occupant of the vehicle based on an analysis of the extracted features, and generating a navigation route for the vehicle from an origin point to a destination point based on the sentiment data.
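As a rough illustration of the three-step pipeline the abstract describes (feature extraction, sentiment determination, route generation), the following Python sketch may help. The feature names, thresholds, sentiment labels, and route options are invented for illustration; none of these specifics appear in the patent.

```python
from dataclasses import dataclass

# Hypothetical sentiment labels; the patent does not enumerate specific emotional conditions.
CALM, STRESSED = "calm", "stressed"

@dataclass
class SensorData:
    audio_level_db: float   # cabin audio loudness captured by an in-vehicle sensor
    speech_rate_wpm: float  # occupant speech rate, words per minute

def extract_features(sample: SensorData) -> dict:
    """Extract simple numeric features from raw sensor data."""
    return {"loudness": sample.audio_level_db, "speech_rate": sample.speech_rate_wpm}

def determine_sentiment(features: dict) -> str:
    """Map extracted features to a coarse emotional condition (illustrative thresholds)."""
    if features["loudness"] > 70 or features["speech_rate"] > 180:
        return STRESSED
    return CALM

def generate_route(origin: str, destination: str, sentiment: str) -> list:
    """Choose between hypothetical route options based on the occupant's sentiment."""
    fast_route = [origin, "highway", destination]
    scenic_route = [origin, "park_road", "lakeside", destination]
    return scenic_route if sentiment == STRESSED else fast_route

features = extract_features(SensorData(audio_level_db=75.0, speech_rate_wpm=190.0))
route = generate_route("home", "office", determine_sentiment(features))
```

A real system would replace the threshold logic with trained models over richer audio and video features, but the data flow from sensor data to sentiment data to navigation route matches the abstract.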

Inventors

  • Howard Lang
  • Joseph Soryal

Assignees

  • AT&T INTELLECTUAL PROPERTY I, L.P.

Dates

Publication Date
2026-05-05
Application Date
2024-06-26

Claims (20)

  1. A method, comprising: extracting, by a first system comprising a processor, features of sensor data captured by a sensor associated with an autonomous vehicle, wherein the sensor data is representative of a subject selected from a group of subjects comprising an occupant of the autonomous vehicle and an environment in which the autonomous vehicle is located, resulting in extracted features; determining, by the first system, sentiment data representative of an emotional condition of the occupant of the autonomous vehicle based on an analysis of the extracted features; generating, by the first system based on image processing data derived from satellite imaging of an area associated with the autonomous vehicle, first partial decision data, wherein the image processing data is distributed among the first system and a second system that is not the first system; receiving, by the first system from the second system, second partial decision data, the second partial decision data being determined by the second system based on the image processing data; and automatically navigating, by the autonomous vehicle, according to a navigation route, wherein the navigation route is generated based on the sentiment data, the first partial decision data, and the second partial decision data, and wherein the navigation route is generated further based on information relating to presence of line markings on respective roadways associated with the navigation route.
  2. The method of claim 1, further comprising: transmitting, by the first system to a remote server that is distinct from the first system, the sentiment data; and receiving, by the first system from the remote server, route recommendation data generated by the remote server based on the sentiment data; wherein the navigation route is generated further based on the route recommendation data.
  3. The method of claim 1, wherein the sensor comprises an audio sensor, and wherein the sensor data comprises audio data captured by the audio sensor.
  4. The method of claim 3, wherein the audio data comprises data representative of speech originating from the occupant of the autonomous vehicle, and wherein the extracting the features of the sensor data comprises determining a property of the speech, the property being selected from a group of properties comprising voice tone and speech content.
  5. The method of claim 3, wherein the extracting the features of the sensor data comprises detecting an audio event present in the audio data.
  6. The method of claim 3, wherein the extracting the features of the sensor data comprises comparing an amount of audio activity present in the audio data to a defined baseline amount of audio activity, and wherein the determining the sentiment data comprises: determining an estimated level of distraction associated with the autonomous vehicle based on the comparing; and determining the sentiment data based on the estimated level of distraction.
  7. The method of claim 1, wherein the sensor comprises a video sensor, and wherein the sensor data comprises video data captured by the video sensor.
  8. The method of claim 7, wherein the video data comprises a depiction of the occupant of the autonomous vehicle, wherein the extracting the features of the sensor data comprises extracting a property of the video data from the depiction of the occupant in the video data, the property being selected from a group of properties comprising movement of the occupant and posture of the occupant, and wherein the determining the sentiment data comprises determining the sentiment data based on the property of the video data.
  9. The method of claim 1, further comprising: obtaining, by the first system, device data from a mobile device associated with the occupant of the autonomous vehicle, and wherein the determining the sentiment data comprises determining the sentiment data further based on the device data.
  10. The method of claim 1, wherein the line markings are utilized by the autonomous vehicle for directional guidance.
  11. A system of an autonomous vehicle, comprising: a processor; and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, the operations comprising: extracting features of data captured by a sensor associated with the autonomous vehicle, resulting in extracted data features, wherein the data captured by the sensor is representative of a subject selected from a group of subjects comprising an occupant of the autonomous vehicle and an environment in which the autonomous vehicle is located; generating condition data representative of an emotional condition of the occupant of the autonomous vehicle by analyzing the extracted data features; receiving, via a satellite communication system, image processing data derived from satellite image data for an area in which the autonomous vehicle is located, the image processing data being distributed among systems comprising the system and a remote system that is distinct from the system; generating, based on the image processing data, first partial decision data; receiving, from the remote system, second partial decision data determined by the remote system based on the image processing data; and automatically navigating the autonomous vehicle according to a navigation route, wherein the navigation route is created based on the condition data, the first partial decision data, and the second partial decision data, and wherein the navigation route is created further based on information relating to a dimension of the autonomous vehicle.
  12. The system of claim 11, wherein the operations further comprise: transmitting the condition data to a server that is distinct from the system; and receiving, from the server, route recommendation data generated by the server based on the condition data, wherein the creating the navigation route is further based on the route recommendation data.
  13. The system of claim 11, wherein the sensor comprises an audio sensor, and wherein the data captured by the sensor comprises audio data.
  14. The system of claim 13, wherein the audio data comprises speech data representative of speech by the occupant of the autonomous vehicle.
  15. The system of claim 14, wherein the extracting the features comprises extracting a feature of the speech that is selected from a group of features comprising voice tone and speech content.
  16. The system of claim 13, wherein the extracting the features comprises comparing an amount of audio activity present in the audio data to a defined baseline amount of audio activity, and wherein the operations further comprise: estimating a level of distraction associated with the autonomous vehicle based on the comparing; and generating the condition data based on the level of distraction.
  17. A non-transitory machine-readable medium, comprising executable instructions that, when executed by a processor of a first system of an autonomous vehicle, facilitate performance of operations, the operations comprising: determining data features that are representative of sensor data captured by a sensor associated with the autonomous vehicle, the sensor data including data representative of a subject selected from a group of subjects comprising an occupant of the autonomous vehicle and an environment in which the autonomous vehicle is located; generating sentiment data representative of an emotional condition of the occupant of the autonomous vehicle according to the data features; determining, based on image processing data corresponding to satellite image data depicting an area in which the autonomous vehicle is located, first partial decision data, wherein the image processing data is distributed among systems comprising the first system and a second system that is not the first system; receiving, from the second system, second partial decision data, the second partial decision data being determined by the second system based on the image processing data; and automatically navigating the autonomous vehicle according to route data, wherein the route data is representative of a navigation route for the autonomous vehicle and is based on the sentiment data, the first partial decision data, and the second partial decision data, and wherein the route data representative of the navigation route is further based on information relating to a speed capability of the autonomous vehicle.
  18. The non-transitory machine-readable medium of claim 17, wherein the operations further comprise: transmitting the sentiment data to a remote server via a communication network; and receiving, from the remote server, route recommendation data generated by the remote server based on the sentiment data, wherein the route data is further based on the route recommendation data.
  19. The non-transitory machine-readable medium of claim 17, wherein the sensor comprises an audio sensor, and wherein the sensor data comprises audio data captured by the audio sensor.
  20. The non-transitory machine-readable medium of claim 19, wherein the audio data comprises speech data representative of speech originating from the occupant of the autonomous vehicle, and wherein the operations further comprise: extracting, as a data feature of the data features, a property of the speech that is selected from a group of properties comprising voice tone and speech content.
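Claims 6 and 16 describe comparing measured audio activity to a defined baseline, estimating a distraction level from that comparison, and deriving sentiment data from the estimated level. One way such a comparison could look in Python is sketched below; the relative-excess formula, the threshold, and the sentiment labels are assumptions for illustration, as the claims do not specify a formula.

```python
def estimate_distraction(audio_activity: float, baseline: float) -> float:
    """Estimate a distraction level as the relative excess of measured audio
    activity over a defined baseline amount (illustrative formula only)."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    return max(0.0, (audio_activity - baseline) / baseline)

def sentiment_from_distraction(level: float, threshold: float = 0.5) -> str:
    """Map the estimated distraction level to coarse sentiment data
    (hypothetical labels; the claims leave the mapping unspecified)."""
    return "agitated" if level > threshold else "neutral"

# Audio activity 50% above the baseline yields a distraction level of 0.5.
level = estimate_distraction(audio_activity=90.0, baseline=60.0)
sentiment = sentiment_from_distraction(level)
```

Clamping at zero reflects that activity at or below the baseline indicates no excess distraction; a deployed system would likely smooth the measurement over time rather than act on a single sample.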

Description

CROSS-REFERENCE TO RELATED APPLICATION(S)

This application is a continuation of U.S. patent application Ser. No. 17/307,465 filed May 4, 2021. All sections of the aforementioned application(s) and/or patent(s) are incorporated herein by reference in their entirety.

TECHNICAL FIELD

The present disclosure relates to vehicle navigation systems, and, in particular, to techniques for creating or modifying vehicle navigation data.

BACKGROUND

Vehicle navigation applications, such as those employed by in-vehicle navigation systems, mobile device applications, or the like, can be utilized in combination with position location technologies such as the Global Positioning System (GPS) to guide a vehicle along a navigation route to a specified destination. Existing navigation systems can take into account the condition and characteristics of the vehicle (e.g., size and/or performance of the vehicle, remaining fuel, tire condition, etc.), as well as external factors such as traffic, road, and/or weather conditions along potential routes, in determining a navigation route to be followed. As vehicle technology advances, e.g., in the area of autonomous and/or semi-autonomous vehicles, vehicle navigation systems will become increasingly desirable in ensuring proper vehicle operation.

DESCRIPTION OF DRAWINGS

FIG. 1 is a block diagram of a system that facilitates sentiment-based navigation in accordance with various aspects described herein.
FIG. 2 is a block diagram that depicts the functionality of the navigation device of FIG. 1 in further detail in accordance with various aspects described herein.
FIG. 3 is a diagram that depicts data types that can be utilized by the navigation device of FIG. 1 in accordance with various aspects described herein.
FIG. 4 is a block diagram of a system that facilitates data collection for a vehicle navigation system in accordance with various aspects described herein.
FIG. 5 is a block diagram of a system that facilitates extraction of sensor data features for sentiment-based navigation in accordance with various aspects described herein.
FIG. 6 is a block diagram of a system that facilitates integration of user devices with a vehicle navigation system in accordance with various aspects described herein.
FIG. 7 is a diagram that depicts data sources that can be utilized by a vehicle navigation system for sentiment-based navigation in accordance with various aspects described herein.
FIG. 8 is a block diagram of a system that facilitates distributed route processing in a vehicle navigation system in accordance with various aspects described herein.
FIG. 9 is a diagram that depicts an example network environment in which various embodiments described herein can function.
FIG. 10 is a flow diagram of a method that facilitates sentiment-based navigation in accordance with various aspects described herein.
FIG. 11 depicts an example computing environment in which various embodiments described herein can function.

DETAILED DESCRIPTION

Various specific details of the disclosed embodiments are provided in the description below. One skilled in the art will recognize, however, that the techniques described herein can in some cases be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring certain aspects.

In an aspect, a method as described herein can include extracting, by a system including a processor, features of sensor data captured by a sensor associated with a vehicle, where the sensor data is representative of an occupant of the vehicle or an environment in which the vehicle is located, resulting in extracted features. The method can further include determining, by the system, sentiment data representative of an emotional condition of the occupant of the vehicle based on an analysis of the extracted features. The method can additionally include generating, by the system, a navigation route for the vehicle from an origin point to a destination point based on the sentiment data.

In another aspect, a system as described herein can include a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations can include extracting features of data captured by a sensor associated with a vehicle, resulting in extracted data features, where the data captured by the sensor is representative of an occupant of the vehicle or an environment in which the vehicle is located; generating condition data representative of an emotional condition of the occupant of the vehicle by analyzing the extracted data features; and creating a navigation route for the vehicle from an origin point to a destination point based on the condition data.

In a further aspect, a non-transitory machine-readable medium as described herein can include executable instructions that, w