
US-20260126951-A1 - Neurostimulation Systems and Methods


Abstract

The present application discloses and describes neurostimulation systems and methods that include, among other features, (i) neural stimulation through audio with dynamic modulation characteristics, (ii) audio content serving and creation based on modulation characteristics, (iii) extending audio tracks while avoiding audio discontinuities, and (iv) non-auditory neurostimulation and methods, including non-auditory neurostimulation for anesthesia recovery.

Inventors

  • Daniel Clark
  • Kevin J. P. Woods

Assignees

  • BRAINFM, INC.

Dates

Publication Date
May 7, 2026
Application Date
Oct. 6, 2025

Claims (20)

  1. Tangible, non-transitory computer-readable media comprising program instructions, wherein the program instructions, when executed by one or more processors, cause a computing system to perform functions comprising: segmenting an audio track into a plurality of audio segments, wherein the audio track has an original playback duration; determining similarities between individual audio segments of the plurality of audio segments; while the audio track is playing, extending playback of the audio track beyond the original playback duration in an ongoing manner by playing audio segments in a non-sequential order based at least in part on one or more similarities between audio segments, wherein extending playback of the audio track beyond the original playback duration comprises: while a first audio segment of the audio track is playing, crossfading from the first audio segment to a second audio segment of the audio track based at least in part on one or more similarities between the first audio segment and the second audio segment.
  2. The tangible, non-transitory computer-readable media of claim 1, wherein the second audio segment occurs one of (i) earlier in the audio track than the first audio segment or (ii) later in the audio track than the first audio segment.
  3. The tangible, non-transitory computer-readable media of claim 1, wherein the functions further comprise: extracting multi-dimensional features from the audio track, wherein the multi-dimensional features comprise one or more of a spectrogram, a cochleagram, an amplitude modulation spectrum, or mel-frequency cepstral coefficients, and wherein segmenting the audio track comprises segmenting based at least in part on the extracted multi-dimensional features.
  4. The tangible, non-transitory computer-readable media of claim 1, wherein segmenting the audio track into a plurality of audio segments comprises: determining a segment size based on one or more temporal aspects of the audio track, wherein determining the segment size comprises: determining a beat structure of the audio track; and selecting the segment size to correspond to a predetermined number of beats.
  5. The tangible, non-transitory computer-readable media of claim 1, wherein crossfading from the first audio segment to the second audio segment comprises: crossfading over a crossfade period between about 10 milliseconds and about 500 milliseconds.
  6. The tangible, non-transitory computer-readable media of claim 1, wherein the functions further comprise: continuing to extend playback of the audio track beyond the original playback duration in an ongoing manner by repeatedly playing audio segments in a non-sequential order based at least in part on one or more similarities between audio segments.
  7. The tangible, non-transitory computer-readable media of claim 1, wherein determining similarities between individual audio segments of the plurality of audio segments comprises: extracting multi-dimensional features from individual audio segments; for individual audio segments, comparing a feature vector corresponding to the individual audio segment with feature vectors corresponding to other audio segments in the plurality of audio segments; generating a self-similarity matrix based on the comparisons that represents similarities between the individual audio segments; and selecting the first audio segment and the second audio segment for crossfading based on values in the self-similarity matrix.
  8. The tangible, non-transitory computer-readable media of claim 1, wherein extending playback of the audio track beyond the original playback duration further comprises: modulating one or more frequency bands of the audio track according to a stimulation protocol, wherein the stimulation protocol specifies one or more of modulation rate, phase, depth, or waveform shape.
  9. The tangible, non-transitory computer-readable media of claim 1, wherein the audio track is a first audio track in a group of two or more audio tracks comprising the first audio track and a second audio track, and wherein extending playback of the audio track beyond the original playback duration comprises: crossfading from an audio segment of the first audio track to an audio segment of the second audio track based at least in part on one or more similarities between the audio segment of the first audio track and the audio segment of the second audio track.
  10. The tangible, non-transitory computer-readable media of claim 1, wherein extending playback of the audio track beyond the original playback duration comprises: continuing to play audio segments in a non-sequential order until receiving one of (i) an indication that a pre-configured playback duration has elapsed or (ii) a command to stop playback.
  11. A method performed by a computing system, wherein the method comprises: segmenting an audio track into a plurality of audio segments, wherein the audio track has an original playback duration; determining similarities between individual audio segments of the plurality of audio segments; while the audio track is playing, extending playback of the audio track beyond the original playback duration in an ongoing manner by playing audio segments in a non-sequential order based at least in part on one or more similarities between audio segments, wherein extending playback of the audio track beyond the original playback duration comprises: while a first audio segment of the audio track is playing, crossfading from the first audio segment to a second audio segment of the audio track based at least in part on one or more similarities between the first audio segment and the second audio segment.
  12. The method of claim 11, wherein the second audio segment occurs one of (i) earlier in the audio track than the first audio segment or (ii) later in the audio track than the first audio segment.
  13. The method of claim 11, wherein the method further comprises: extracting multi-dimensional features from the audio track, wherein the multi-dimensional features comprise one or more of a spectrogram, a cochleagram, an amplitude modulation spectrum, or mel-frequency cepstral coefficients, and wherein segmenting the audio track comprises segmenting based at least in part on the extracted multi-dimensional features.
  14. The method of claim 11, wherein segmenting the audio track into a plurality of audio segments comprises: determining a segment size based on one or more temporal aspects of the audio track, wherein determining the segment size comprises: determining a beat structure of the audio track; and selecting the segment size to correspond to a predetermined number of beats.
  15. The method of claim 11, wherein crossfading from the first audio segment to the second audio segment comprises: crossfading over a crossfade period between about 10 milliseconds and about 500 milliseconds.
  16. The method of claim 11, wherein the method further comprises: continuing to extend playback of the audio track beyond the original playback duration in an ongoing manner by repeatedly playing audio segments in a non-sequential order based at least in part on one or more similarities between audio segments.
  17. The method of claim 11, wherein determining similarities between individual audio segments of the plurality of audio segments comprises: extracting multi-dimensional features from individual audio segments; for individual audio segments, comparing a feature vector corresponding to the individual audio segment with feature vectors corresponding to other audio segments in the plurality of audio segments; generating a self-similarity matrix based on the comparisons that represents similarities between the individual audio segments; and selecting the first audio segment and the second audio segment for crossfading based on values in the self-similarity matrix.
  18. The method of claim 11, wherein extending playback of the audio track beyond the original playback duration further comprises: modulating one or more frequency bands of the audio track according to a stimulation protocol, wherein the stimulation protocol specifies one or more of modulation rate, phase, depth, or waveform shape.
  19. The method of claim 11, wherein the audio track is a first audio track in a group of two or more audio tracks comprising the first audio track and a second audio track, and wherein extending playback of the audio track beyond the original playback duration comprises: crossfading from an audio segment of the first audio track to an audio segment of the second audio track based at least in part on one or more similarities between the audio segment of the first audio track and the audio segment of the second audio track.
  20. The method of claim 11, wherein extending playback of the audio track beyond the original playback duration comprises: continuing to play audio segments in a non-sequential order until receiving one of (i) an indication that a pre-configured playback duration has elapsed or (ii) a command to stop playback.
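The pipeline the independent claims recite (segment by beats, extract features, build a self-similarity matrix, then extend playback by crossfading between similar segments) can be sketched as a toy NumPy implementation. This is a minimal illustration, not the patented implementation: the function names, the magnitude-spectrum feature (a stand-in for the spectrogram/cochleagram/MFCC features of claims 3/13), the cosine-similarity measure, and the 0.9 similarity threshold are all assumptions chosen for the sketch.

```python
import numpy as np

def segment_track(samples, sr, bpm=120.0, beats_per_segment=4):
    """Split a track into equal segments sized to a whole number of beats
    (the beat-structure sizing of claims 4/14)."""
    seg_len = int(sr * 60.0 / bpm * beats_per_segment)
    n = len(samples) // seg_len
    return samples[: n * seg_len].reshape(n, seg_len)

def feature_vectors(segments):
    """Per-segment magnitude spectra, normalized to unit length, as a toy
    stand-in for the multi-dimensional features of claims 3/13."""
    spec = np.abs(np.fft.rfft(segments, axis=1))
    return spec / (np.linalg.norm(spec, axis=1, keepdims=True) + 1e-12)

def self_similarity(feats):
    """Cosine self-similarity matrix over segment feature vectors
    (claims 7/17)."""
    return feats @ feats.T

def next_segment(sim, current, rng, threshold=0.9):
    """Pick a jump target: any *other* segment whose similarity to the
    current one clears the threshold; otherwise continue sequentially,
    wrapping around so playback extends indefinitely (claims 6/16)."""
    candidates = np.flatnonzero(sim[current] >= threshold)
    candidates = candidates[candidates != current]
    if candidates.size:
        return int(rng.choice(candidates))
    return (current + 1) % sim.shape[0]

def crossfade(a, b, sr, fade_ms=50):
    """Equal-power crossfade from segment a into segment b over a period
    inside the ~10-500 ms range of claims 5/15."""
    n = int(sr * fade_ms / 1000)
    t = np.linspace(0.0, np.pi / 2.0, n)
    mixed = a[-n:] * np.cos(t) + b[:n] * np.sin(t)
    return np.concatenate([a[:-n], mixed, b[n:]])
```

Run in a loop (current segment, look up a similar target, crossfade into it), this never reaches an end-of-track discontinuity, which is the "extending in an ongoing manner" behavior of claims 1 and 11; the stop conditions of claims 10/20 would simply break the loop.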

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 17/856,617, titled “Neurostimulation Systems and Methods,” filed Jul. 1, 2022, and issued as U.S. Pat. No. 12,436,729 on Oct. 7, 2025. U.S. application Ser. No. 17/856,617 is a continuation-in-part of: (i) U.S. application Ser. No. 17/366,896, titled “Neural Stimulation Through Audio With Dynamic Modulation Characteristics,” filed on Jul. 2, 2021, and currently pending; (ii) U.S. application Ser. No. 17/505,453, titled “Audio Content Serving And Creation Based On Modulation Characteristics,” filed on Oct. 19, 2021, and currently pending; (iii) U.S. application Ser. No. 17/556,583, titled “Extending Audio Tracks While Avoiding Audio Discontinuities,” filed on Dec. 20, 2021, and currently pending; and (iv) U.S. application Ser. No. 17/804,407, titled “Neurostimulation And Methods For Anesthesia Recovery,” filed May 27, 2022, and currently pending, which claims priority to U.S. Prov. App. 63/268,168, titled “Perioperative Functional Audio for Anxiety and Cognitive Recovery From Anesthesia,” filed on Feb. 17, 2022. The entire contents of U.S. application Ser. Nos. 17/856,617; 17/366,896; 17/505,453; 17/556,583; 17/804,407; and 63/268,168 are incorporated by reference herein.

This application also incorporates by reference the entire contents of: (i) U.S. application Ser. No. 11/251,051, titled “Method for incorporating brain wave entrainment into sound production,” filed on Oct. 14, 2005, and issued on Mar. 9, 2010, as U.S. Pat. No. 7,674,224; (ii) U.S. application Ser. No. 15/857,065, titled “Method to increase quality of sleep with acoustic intervention,” filed Dec. 28, 2017, and issued on May 19, 2020, as U.S. Pat. No. 10,653,857; and (iii) U.S. application Ser. No. 16/276,961, titled “Noninvasive neural stimulation through audio,” filed Feb. 15, 2019, and issued on Dec. 21, 2021, as U.S. Pat. No. 11,205,414.
OVERVIEW

For decades, neuroscientists have observed wave-like activity in the brain called neural oscillations. Various aspects of these oscillations have been related to mental states including alertness, attention, relaxation, and sleep. The ability to effectively induce and modify such mental states by noninvasive brain stimulation through one or more modalities (e.g., audio and non-audio) is desirable. The present disclosure relates to neurostimulation systems and methods that include, among other features, (i) neural stimulation through audio with dynamic modulation characteristics, (ii) audio content serving and creation based on modulation characteristics, (iii) extending audio tracks while avoiding audio discontinuities, and (iv) non-auditory neurostimulation and methods, including non-auditory neurostimulation methods for anesthesia recovery. Accordingly, some aspects of the present disclosure relate to neural stimulation, particularly noninvasive neural stimulation using audio and several features and techniques related thereto. Further aspects of the present disclosure relate to noninvasive neural stimulation using one or more auditory and non-auditory sensory modalities, such that multi-modal entrainment may be used to increase the benefit of neurological stimulation. Additionally, this disclosure describes a novel use of sensory neuromodulation for recovery from anesthesia.

BRIEF DESCRIPTION OF THE DRAWINGS

Other objects and advantages of the present disclosure will become apparent to those skilled in the art upon reading the following detailed description of exemplary embodiments and appended claims, in conjunction with the accompanying drawings, in which like reference numerals have been used to designate like elements, and in which:

FIG. 1A depicts a flow diagram of an illustrative method according to an example embodiment of the present disclosure;
FIG. 1B depicts a flow diagram of an illustrative method according to an example embodiment of the present disclosure;
FIG. 2 depicts a process flowchart according to an example embodiment of the present disclosure;
FIG. 3 depicts a process flowchart according to an example embodiment of the present disclosure;
FIG. 4 depicts a flow diagram of an illustrative method according to an example embodiment of the present disclosure;
FIG. 5 depicts a waveform of an audio track overlaid with its analyzed modulation depth trajectory according to an example embodiment of the present disclosure;
FIG. 6 depicts a process diagram of an illustrative method according to an example embodiment of the present disclosure;
FIG. 7A depicts a process diagram of an illustrative method according to an example embodiment of the present disclosure;
FIG. 7B depicts a process diagram of an illustrative method according to an example embodiment of the present disclosure;
FIG. 8 depicts a flow diagram of an illustrative method of extending an audio track, according to some embodiments of the present disclosure
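The "stimulation protocol" modulation that the overview alludes to and that claims 8/18 recite (a specified modulation rate, phase, depth, and waveform shape imposed on the audio) can be illustrated with a minimal, hypothetical sketch. The sinusoidal envelope, the 16 Hz default rate, and the function name are all assumptions for illustration; the actual systems described here operate per frequency band and support other waveform shapes.

```python
import numpy as np

def modulate(samples, sr, rate_hz=16.0, depth=0.5, phase=0.0):
    """Impose an amplitude-modulation envelope on an audio signal.

    rate_hz, depth (0..1), and phase correspond to the stimulation-protocol
    parameters of claims 8/18; the envelope dips from 1.0 down to 1 - depth.
    """
    t = np.arange(len(samples)) / sr
    env = 1.0 - depth * 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * t + phase))
    return samples * env
```

A band-limited version would first split the signal into frequency bands (e.g., with a filter bank), modulate only the selected bands, and sum the result back together.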