US-12625586-B2 - Methods for classifying touch inputs on a capacitive interface

Abstract

Embodiments of the present application provide methods for classifying a detected touch input on a capacitive interface. A sequence of capacitive images representing capacitive sensor data associated with touch inputs detected during a continuous touch gesture is obtained. The capacitive images are sequentially processed using a machine learning model. A touch region of the capacitive image that corresponds to an area of the capacitive interface associated with a current touch input for the capacitive image is identified. A set of features of the identified touch region is extracted using the machine learning model. A state of touch of the capacitive image is determined based on the extracted set of features and one or more previous capacitive images associated with previously detected touch inputs during the continuous touch gesture.

Inventors

  • Manpreet Singh TAKKAR
  • Wei Zhou
  • Shung Hei Lynn
  • Shuping Lu

Assignees

  • HUAWEI TECHNOLOGIES CO., LTD.

Dates

Publication Date
2026-05-12
Application Date
2024-06-13

Claims (20)

  1. A method for classifying a detected touch input on a capacitive interface, the method comprising: obtaining a sequence of capacitive images representing capacitive sensor data associated with touch inputs detected during a continuous touch gesture; and sequentially processing the capacitive images using a machine learning model that includes a trained convolutional neural network layer, wherein for each capacitive image in the sequence, processing the capacitive image using the machine learning model comprises: determining a sub-image of the capacitive image containing a touch region that corresponds to an area of the capacitive interface associated with a current touch input for the capacitive image; performing feature extraction of the sub-image using the trained convolutional neural network layer; and determining a state of touch of the capacitive image based on extracted features of the sub-image and one or more previous capacitive images associated with the continuous touch gesture.
  2. The method of claim 1, wherein the machine learning model comprises a recurrent neural network layer and wherein determining the state of touch of the capacitive image comprises providing the extracted features and the one or more previous capacitive images as input to the recurrent neural network layer.
  3. The method of claim 1, wherein determining the sub-image of the capacitive image comprises determining, based on the capacitive sensor data, a region of the capacitive image that is associated with signal strength indicating the detected touch input, wherein performing the feature extraction on the sub-image comprises providing the sub-image as input to the machine learning model.
  4. The method of claim 3, further comprising: processing the sub-image using a high-pass filter to obtain a filtered sub-image; and determining a normalized sub-image based on the filtered sub-image, wherein the normalized sub-image is provided as input to the machine learning model.
  5. The method of claim 1, wherein the state of touch indicates an intended amount of applied force associated with the current touch input.
  6. The method of claim 1, wherein the continuous touch gesture comprises a force-based press gesture on the capacitive interface.
  7. The method of claim 1, wherein the machine learning model uses one or more defined thresholds in determining the state of touch of the capacitive image.
  8. The method of claim 7, further comprising: obtaining sample touch data including one or more of: first touch data associated with at least one first touch input of a first type; second touch data associated with at least one second touch input of a second type; third touch data associated with at least one third touch input that transitions from a touch input of the first type to a touch input of the second type; or fourth touch data associated with at least one fourth touch input that transitions from a touch input of the second type to a touch input of the first type, wherein the one or more defined thresholds are determined based on the obtained sample touch data.
  9. The method of claim 7, further comprising: determining a distribution of outputs of a neural network of the machine learning model based on touch input data collected from multiple users; and determining a plurality of sensitivity levels based on the distribution, each sensitivity level being associated with a respective threshold value of output of the neural network, wherein the one or more defined thresholds correspond to user selections of sensitivity levels.
  10. An apparatus comprising: at least one processor; and a non-transitory computer readable storage medium storing programming, the programming including instructions that, when executed by the at least one processor, cause the apparatus to perform operations for classifying a detected touch input on a capacitive interface, the operations comprising: obtaining a sequence of capacitive images representing capacitive sensor data associated with touch inputs detected during a continuous touch gesture; and sequentially processing the capacitive images using a machine learning model that includes a trained convolutional neural network layer, wherein for each capacitive image in the sequence, processing the capacitive image using the machine learning model comprises: determining a sub-image of the capacitive image containing a touch region that corresponds to an area of the capacitive interface associated with a current touch input for the capacitive image; performing feature extraction of the sub-image using the trained convolutional neural network layer; and determining a state of touch of the capacitive image based on extracted features of the sub-image and one or more previous capacitive images associated with the continuous touch gesture.
  11. The apparatus of claim 10, wherein the machine learning model comprises a recurrent neural network layer and wherein determining the state of touch of the capacitive image comprises providing the extracted features and the one or more previous capacitive images as input to the recurrent neural network layer.
  12. The apparatus of claim 10, wherein determining the sub-image of the capacitive image comprises determining, based on the capacitive sensor data, a region of the capacitive image that is associated with signal strength indicating the detected touch input, wherein performing the feature extraction on the sub-image comprises providing the sub-image as input to the machine learning model.
  13. The apparatus of claim 12, the operations further comprising: processing the sub-image using a high-pass filter to obtain a filtered sub-image; and determining a normalized sub-image based on the filtered sub-image, wherein the normalized sub-image is provided as input to the machine learning model.
  14. The apparatus of claim 10, wherein the state of touch indicates an intended amount of applied force associated with the current touch input.
  15. The apparatus of claim 10, wherein the continuous touch gesture comprises a force-based press gesture on the capacitive interface.
  16. A non-transitory computer-readable medium having instructions stored thereon that, when executed by an apparatus, cause the apparatus to perform operations for classifying a detected touch input on a capacitive interface, the operations comprising: obtaining a sequence of capacitive images representing capacitive sensor data associated with touch inputs detected during a continuous touch gesture; and sequentially processing the capacitive images using a machine learning model that includes a trained convolutional neural network layer, wherein for each capacitive image in the sequence, processing the capacitive image using the machine learning model comprises: determining a sub-image of the capacitive image containing a touch region that corresponds to an area of the capacitive interface associated with a current touch input for the capacitive image; performing feature extraction of the sub-image using the trained convolutional neural network layer; and determining a state of touch of the capacitive image based on extracted features of the sub-image and one or more previous capacitive images associated with the continuous touch gesture.
  17. The non-transitory computer-readable medium of claim 16, wherein the machine learning model comprises a recurrent neural network layer and wherein determining the state of touch of the capacitive image comprises providing the extracted features and the one or more previous capacitive images as input to the recurrent neural network layer.
  18. The non-transitory computer-readable medium of claim 16, wherein determining the sub-image of the capacitive image comprises determining, based on the capacitive sensor data, a region of the capacitive image that is associated with signal strength indicating the detected touch input, wherein performing the feature extraction on the sub-image comprises providing the sub-image as input to the machine learning model.
  19. The non-transitory computer-readable medium of claim 18, the operations further comprising: processing the sub-image using a high-pass filter to obtain a filtered sub-image; and determining a normalized sub-image based on the filtered sub-image, wherein the normalized sub-image is provided as input to the machine learning model.
  20. The non-transitory computer-readable medium of claim 16, wherein the state of touch indicates an intended amount of applied force associated with the current touch input.
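Claims 7–9 describe comparing the model's output against one or more defined thresholds, with sensitivity levels derived from a distribution of network outputs collected from multiple users. A minimal sketch of that thresholding scheme, assuming a scalar "press score" output in [0, 1] and evenly spaced percentiles (all function names, parameter values, and the simulated data below are illustrative, not from the disclosure):

```python
import numpy as np

def sensitivity_thresholds(outputs, num_levels=3):
    """Derive one threshold per sensitivity level from the distribution of
    network outputs (as in claim 9). The percentile spacing is an assumption."""
    percentiles = np.linspace(25, 75, num_levels)
    return np.percentile(outputs, percentiles)

def classify(score, threshold):
    """Compare a single model output against the user-selected threshold
    (as in claim 7). Lower thresholds correspond to higher sensitivity."""
    return "heavy press" if score >= threshold else "normal touch"

# Simulated multi-user output distribution, standing in for collected data.
rng = np.random.default_rng(0)
scores = rng.beta(2, 5, size=10_000)
thresholds = sensitivity_thresholds(scores)
```

A user selecting a sensitivity level would then pick one entry of `thresholds`; the same `classify` comparison is applied per frame regardless of the level chosen.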

Description

TECHNICAL FIELD

The present application relates to human interface devices and, in particular, to methods for classifying touch inputs on a capacitive interface.

BACKGROUND

Touchscreens are ubiquitous in modern electronic devices. The most common touchscreen technology used today is capacitive sensing. The input panel of a capacitive touchscreen consists of an insulator that is coated with a transparent conductor, such as indium tin oxide. When an input device (e.g., a user's finger, a conductive stylus, etc.) touches or is brought near the surface of a capacitive touchscreen, the local electrostatic field is distorted. The resultant change in capacitance can be measured and used to detect the touch input and determine its location on the touchscreen. The touch location data may then be sent to a controller (e.g., a CMOS digital signal processor) for processing.

Touch sensing technology that retains the richness of touch behavior information for a capacitive interface may enable a greater scope of user interaction. In general, it is desirable to increase the dimensionality of touch interaction data beyond traditional touch input parameters, such as two-dimensional touch location, direction of a touch gesture, duration of touch, and the like.

SUMMARY

In an aspect, the present application describes a computer-implemented method for classifying a detected touch input on a capacitive interface.
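The background describes measuring capacitance changes to determine a touch location. One common way to do this (a sketch of the general technique, not necessarily the disclosure's method; the electrode pitch value is an assumption) is a capacitance-weighted centroid over the electrode grid:

```python
import numpy as np

def touch_location(cap_deltas, pitch_mm=4.0):
    """Estimate the 2D touch location as the capacitance-weighted centroid
    of the sensor grid. `cap_deltas` holds per-electrode capacitance
    changes; `pitch_mm` is an assumed electrode spacing. Returns None
    when there is no measurable signal."""
    total = cap_deltas.sum()
    if total <= 0:
        return None
    rows, cols = np.indices(cap_deltas.shape)
    r = (rows * cap_deltas).sum() / total
    c = (cols * cap_deltas).sum() / total
    return (r * pitch_mm, c * pitch_mm)
```

Because the centroid interpolates between electrodes, the reported location can be finer-grained than the physical electrode pitch.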
The method may include: obtaining a sequence of capacitive images representing capacitive sensor data associated with a continuous touch gesture; and sequentially processing the capacitive images using a machine learning model, wherein for each capacitive image, processing the capacitive image using the machine learning model may include: identifying a touch region of the capacitive image that corresponds to an area of the capacitive interface associated with a current touch input for the capacitive image; extracting a first set of features of the identified touch region using the machine learning model; and determining a state of touch of the capacitive image based on the extracted first set of features and one or more previous capacitive images associated with the continuous touch gesture.

The methods described herein may enable classifying the state of touch continuously during a touch instance. In particular, the state of touch may be continuously predicted even when it alternates between multiple different states (e.g., normal touch, heavy press). The state of touch may be predicted based on capacitive image data that is readily available on electronic devices having a capacitive sensor, without resorting to complex hardware for measuring various touch input parameter values (e.g., pressure, force, etc.).

The disclosed methods may also facilitate compensating for missing information when a part of a finger used for a touch input lands outside of a touchscreen. Specifically, partial finger readings may be artificially completed to make them consistent with complete finger touches detected on the touchscreen. For example, readings for a finger touch input detected near an edge of the touchscreen (such that the finger partially lands outside of the touchscreen) may be “completed” based on measured intensity values associated with a known complete finger touch.
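The touch-region identification step can be sketched as thresholding the capacitive image on signal strength and cropping a fixed-size window around the peak response. The threshold value, crop size, and edge padding below are illustrative assumptions; edge padding is only a crude stand-in for the artificial completion of partial finger readings described above, not the disclosure's completion method:

```python
import numpy as np

def extract_sub_image(cap_image, signal_threshold=30.0, crop=8):
    """Locate the touch region (cells whose capacitance delta exceeds a
    strength threshold) and return a crop x crop sub-image centred on the
    peak response. Returns None when no touch is detected. The threshold
    and crop size are assumed values."""
    mask = cap_image > signal_threshold
    if not mask.any():
        return None
    peak_r, peak_c = np.unravel_index(np.argmax(cap_image), cap_image.shape)
    half = crop // 2
    # Pad so that touches near the panel edge still yield a full-size crop.
    padded = np.pad(cap_image, half, mode="edge")
    return padded[peak_r:peak_r + crop, peak_c:peak_c + crop]
```

The fixed-size crop gives the downstream model a consistent input shape whether the touch lands in the middle of the panel or at its edge.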
The artificially completed finger readings may then be used in the (continuous) estimation of the state of touch of the touch input.

In some implementations, the machine learning model may include a convolutional neural network layer and a recurrent neural network layer. In some implementations, the first set of features may be extracted using the convolutional neural network layer, and determining the state of touch of the capacitive image may include providing the extracted first set of features and the one or more previous capacitive images as input to the recurrent neural network layer.

In some implementations, identifying the touch region may include determining, based on the capacitive sensor data, a region of the capacitive image that is associated with signal strength indicating the detected touch input. In some implementations, the method may further include determining a sub-image of the capacitive image that contains the touch region, and extracting the first set of features may include providing the sub-image as input to the machine learning model. In some implementations, the method may further include: processing the sub-image using a high-pass filter to obtain a filtered sub-image; and determining a normalized sub-image based on the filtered sub-image, wherein the normalized sub-image may be provided as input to the machine learning model.

In some implementations, the state of touch may indicate an intended amount of applied force associated with the current touch input. In some implementations, the state of touch may comprise one of: a heavy press; or a normal touch. In some implementations, the continuous touch gesture may comprise a force-based press gesture on the capacitive interface.
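The high-pass filtering and normalization of the sub-image can be sketched as follows. The disclosure does not fix a specific filter, so the box-blur-based high-pass and min-max normalization here are assumed choices for illustration:

```python
import numpy as np

def high_pass(sub_image, kernel=3):
    """Approximate high-pass filter: subtract a local box-blurred mean so
    that slow spatial variation (e.g., baseline drift) is removed and the
    sharp touch response is emphasised. Kernel size is an assumption."""
    pad = kernel // 2
    padded = np.pad(sub_image, pad, mode="edge")
    blurred = np.zeros_like(sub_image, dtype=float)
    for dr in range(kernel):
        for dc in range(kernel):
            blurred += padded[dr:dr + sub_image.shape[0],
                              dc:dc + sub_image.shape[1]]
    blurred /= kernel * kernel
    return sub_image - blurred

def normalize(filtered):
    """Min-max normalize to [0, 1]; constant images map to all zeros."""
    lo, hi = filtered.min(), filtered.max()
    if hi == lo:
        return np.zeros_like(filtered, dtype=float)
    return (filtered - lo) / (hi - lo)
```

Normalizing after filtering keeps the model input in a fixed range regardless of per-device sensor gain, which is one plausible motivation for the claimed ordering.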