US-20260124540-A1 - TECHNIQUES FOR ASSISTED GAMEPLAY USING GEOMETRIC FEATURES

Abstract

The techniques described herein provide a system for enabling assisted gameplay in a computer game using real-time detection of predefined scene features and mapping of the detected features to recommended actions. For example, the system may generate a scanning query (e.g., a segment cast) toward a target area within a virtual scene, determine a geometric feature based on the scanning query, determine a scene feature based on the geometric feature, determine an action associated with the scene feature, and control an avatar based on the action. Examples of scene features that may have mappings to recommended actions include obstacles within a predicted trajectory of the avatar and transitions in the ground level of the virtual scene.
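For illustration, the following is a minimal sketch of the feature-to-action mapping the abstract describes. It is not from the patent itself; the feature labels, the mapping, and the `avatar.perform` call are all hypothetical:

```python
from enum import Enum, auto

class SceneFeature(Enum):
    """Predefined scene features of the kind named in the abstract (hypothetical labels)."""
    HEAD_LEVEL_OBSTACLE = auto()  # obstacle within the avatar's predicted trajectory
    GROUND_RISE = auto()          # upward transition in the ground level (e.g., ramp, stairs)
    GROUND_DROP = auto()          # downward transition in the ground level

# Hypothetical mapping from detected scene features to recommended actions.
RECOMMENDED_ACTION = {
    SceneFeature.HEAD_LEVEL_OBSTACLE: "duck",  # posture change to avoid a collision
    SceneFeature.GROUND_RISE: "lean_forward",  # orientation adjustment going uphill
    SceneFeature.GROUND_DROP: "lean_back",     # orientation adjustment going downhill
}

def control_avatar(avatar, detected_feature):
    """Look up and apply the action mapped to a detected scene feature."""
    action = RECOMMENDED_ACTION.get(detected_feature)
    if action is not None:
        avatar.perform(action)  # `perform` is an assumed avatar API, not from the patent
```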

Inventors

  • Joakim Hagdahl
  • Daniel Herdman

Assignees

  • ELECTRONIC ARTS INC.

Dates

Publication Date
May 7, 2026
Application Date
December 30, 2025

Claims (20)

  1. A system, comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: generating a first scanning query toward a first target area within a virtual scene, wherein generating the first scanning query comprises casting a plane having two or more dimensions; determining, based on the first scanning query, a first geometric feature associated with the first target area; determining, based on the first geometric feature, that the first target area comprises a first predefined scene feature; and based on determining that the first target area comprises the first predefined scene feature, controlling an avatar in the virtual scene based at least in part on a first action associated with the first predefined scene feature.
  2. The system of claim 1, wherein: the first scanning query comprises a segment cast associated with a plurality of dimensions, and the first target area is determined based on the plurality of dimensions and in relation to a vantage point associated with the virtual scene.
  3. The system of claim 2, wherein the segment cast is a two-dimensional segment cast with a time dimension and a half axis dimension.
  4. The system of claim 2, wherein the segment cast is a three-dimensional segment cast with a time dimension, a half axis dimension, and a height extrusion axis dimension.
  5. The system of claim 1, the operations further comprising: determining a first line segment based on the plane, wherein the first line segment is associated with a collision between the plane and an object in the virtual scene; and determining the first geometric feature based on the first line segment.
  6. The system of claim 1, wherein: the first predefined scene feature comprises at least one of a first obstacle within a predicted trajectory of the avatar or a first transition in a ground level of the virtual scene, the first target area comprises at least a portion of a line of sight of the avatar when the line of sight is substantially parallel to the ground level, the first predefined scene feature comprises the first obstacle; and controlling the avatar based on the first action comprises automatically causing the avatar to transition to a posture that is configured to avoid collision between the avatar and the first obstacle.
  7. The system of claim 1, the operations further comprising: generating a second scanning query toward a second target area within the virtual scene; determining, based on the second scanning query, a second geometric feature associated with the second target area; determining, based on the second geometric feature, that the second target area comprises a second predefined scene feature; and based on determining that the second target area comprises the second predefined scene feature, controlling an avatar in the virtual scene based at least in part on a second action associated with the second predefined scene feature.
  8. The system of claim 7, wherein: the first predefined scene feature comprises at least one of a first obstacle within a predicted trajectory of the avatar or a first transition in a ground level of the virtual scene, the second target area is determined based on a region that comprises at least a portion of the ground level, the second predefined scene feature comprises a first transition in a ground level of the virtual scene, and controlling the avatar based on the second action comprises automatically adjusting an orientation of the avatar to adjust an effect of the first transition on a direction of movement associated with the avatar.
  9. The system of claim 7, wherein: the first scanning query is generated by a first execution thread, and the second scanning query is generated by a second execution thread that is executed in parallel with the first execution thread.
  10. The system of claim 7, wherein the second predefined scene feature comprises at least one of a staircase, a hill, or a downhill.
  11. The system of claim 1, wherein the first predefined scene feature comprises at least one of a first obstacle within a predicted trajectory of an avatar in the virtual scene or a first transition in a ground level of the virtual scene.
  12. A computer-implemented method comprising: generating, by a processor, a first scanning query toward a first target area within a virtual scene, wherein generating the first scanning query comprises casting a plane having two or more dimensions; determining, by the processor and based on the first scanning query, a first geometric feature associated with the first target area; determining, by the processor and based on the first geometric feature, that the first target area comprises a first predefined scene feature; and based on determining that the first target area comprises the first predefined scene feature, controlling an avatar in the virtual scene based at least in part on a first action associated with the first predefined scene feature.
  13. The computer-implemented method of claim 12, wherein: the first scanning query comprises a segment cast associated with a plurality of dimensions, and the first target area is determined based on the plurality of dimensions and in relation to a vantage point associated with the virtual scene.
  14. The computer-implemented method of claim 13, wherein the segment cast is a two-dimensional segment cast with a time dimension and a half axis dimension.
  15. The computer-implemented method of claim 13, wherein the segment cast is a three-dimensional segment cast with a time dimension, a half axis dimension, and a height extrusion axis dimension.
  16. The computer-implemented method of claim 12, further comprising: determining a first line segment based on the plane, wherein the first line segment is associated with a collision between the plane and an object in the virtual scene; and determining the first geometric feature based on the first line segment.
  17. One or more non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: generating a first scanning query toward a first target area within a virtual scene, wherein generating the first scanning query comprises casting a plane having two or more dimensions; determining, based on the first scanning query, a first geometric feature associated with the first target area; determining, based on the first geometric feature, that the first target area comprises a first predefined scene feature, wherein the first predefined scene feature comprises at least one of a first obstacle within a predicted trajectory of an avatar in the virtual scene or a first transition in a ground level of the virtual scene; and based on determining that the first target area comprises the first predefined scene feature, controlling the avatar in the virtual scene based at least in part on a first action associated with the first predefined scene feature.
  18. The one or more non-transitory computer-readable media of claim 17, wherein: the first scanning query comprises a segment cast associated with a plurality of dimensions, and the first target area is determined based on the plurality of dimensions and in relation to a vantage point associated with the virtual scene.
  19. The one or more non-transitory computer-readable media of claim 18, wherein the segment cast is a three-dimensional segment cast with a time dimension, a half axis dimension, and a height extrusion axis dimension.
  20. The one or more non-transitory computer-readable media of claim 17, the operations further comprising: determining a first line segment based on the plane, wherein the first line segment is associated with a collision between the plane and an object in the virtual scene; and determining the first geometric feature based on the first line segment.

Description

RELATED APPLICATION

This application is a continuation of U.S. patent application Ser. No. 18/194,328, filed on Mar. 31, 2023, entitled “TECHNIQUES FOR ASSISTED GAMEPLAY USING GEOMETRIC FEATURES” by Joakim Hagdahl, et al., the contents of which are incorporated by reference herein.

BACKGROUND

Computer games have become increasingly popular over the past few decades, with millions of players worldwide enjoying a variety of games across different platforms. As the complexity and realism of computer games have increased, so have the challenges faced by players in navigating and interacting with virtual environments. Players often encounter obstacles and hazards that require quick reflexes and accurate judgment to overcome, leading to frustration and dissatisfaction. There is a need for systems that efficiently and effectively provide real-time assistance to players, enabling them to make better decisions and achieve their in-game objectives more efficiently.

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items.

FIG. 1 illustrates a schematic diagram of an example environment with game system(s) and game client device(s).

FIG. 2 is a flowchart diagram of an example process for controlling a player avatar based on received scene data for a virtual scene.

FIGS. 3A-3B provide an operational example of detecting a scene feature that corresponds to a head-level obstacle.

FIG. 4 provides an operational example of detecting a scene feature that corresponds to a ramp.

FIG. 5 provides an operational example of detecting a scene feature that corresponds to a stairstep.

FIG. 6 provides an operational example of detecting a scene feature that corresponds to a skating bowl.

FIG. 7 illustrates a block diagram of example game system(s) that may provide assisted gameplay in accordance with examples of the disclosure.

DETAILED DESCRIPTION

Example embodiments of this disclosure describe methods, apparatuses, computer-readable media, and system(s) for enabling assisted gameplay for a computer game. More particularly, example methods, apparatuses, computer-readable media, and system(s) according to this disclosure may allow real-time detection of predefined scene features, mapping of the detected scene features to recommended actions, and controlling player avatars based on the recommended actions. For example, an example system (e.g., a game system or a game client device) can generate a scanning query (e.g., a segment cast) toward a target area within a virtual scene, determine a geometric feature based on the scanning query, determine a scene feature based on the geometric feature, determine an action associated with the scene feature, and control an avatar based on the action. A geometric feature may include a shape of an object, a concave transition in a ground level of the virtual scene, a convex transition in the ground level, and a step-wise transition in the ground level. While the present disclosure provides examples of geometric features and example embodiments that implement techniques for assisted gameplay using geometric features, the examples are provided for illustrative purposes only and do not define or narrow claim scope.
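As an illustration of the transition types just named, a ground-level profile sampled along the avatar's heading might be classified as below. This sketch is not part of the disclosure; the sampling scheme, the threshold, and the function name are assumptions:

```python
def classify_ground_transition(heights, step_threshold=0.5):
    """Classify a ground-height profile as a concave, convex, or step-wise
    transition, or flat if no transition is detected.

    heights: ground heights sampled at equal spacing along the avatar's heading.
    step_threshold: height jump (scene units) treated as a step-wise transition.
    """
    # A large jump between adjacent samples suggests a step-wise transition
    # (e.g., a stairstep).
    for a, b in zip(heights, heights[1:]):
        if abs(b - a) >= step_threshold:
            return "step-wise"

    # Second differences approximate curvature: a positive sum indicates a
    # concave profile (valley or bowl), a negative sum a convex one (crest).
    curvature = [heights[i - 1] - 2 * heights[i] + heights[i + 1]
                 for i in range(1, len(heights) - 1)]
    total = sum(curvature)
    if abs(total) < 1e-6:
        return "flat"
    return "concave" if total > 0 else "convex"
```

For example, `classify_ground_transition([2, 1, 0, 1, 2])` returns `"concave"` (a bowl-like dip), while `[0, 0, 1, 1]` with the default threshold returns `"step-wise"`.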
Examples of scene features that may have mappings to recommended actions include obstacles in a region within a predicted trajectory of the avatar and transitions in the ground level of the virtual scene.

In some cases, the techniques described herein relate to using a scanning query to determine a geometric feature in a virtual scene. A scanning query may be any computer graphics operation configured to determine at least one geometric feature associated with an object in a target area of the virtual scene. Examples of scanning queries include a ray cast, a query that includes a collection of ray casts, and a segment cast. A ray cast may represent a ray in the virtual scene cast from an initial point in the virtual scene as a straight line with a particular direction. Once cast, the ray cast may return the coordinates associated with the first intersection of the ray cast with an object in the virtual scene. In some cases, because a ray cast includes a single line and can thus represent a single intersection point, the ray cast is not well suited to determining geometric features in the virtual scene.

The example system may use at least one of a collection of ray casts or a segment cast to address the shortcomings associated with detecting geometric features using a ray cast. In some cases, the example system may cast a collection of rays, each returning a different intersection point. Because a collection of ray casts returns more intersection points than a single ray cast, the output of the collection is likely to generate a more complete representation of the geometric features in the target area.
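As the passage notes, a single ray cast yields only one intersection point. A minimal 2-D sketch of the collection-of-ray-casts idea follows; it is illustrative only, with all names hypothetical and obstacles modeled as line segments:

```python
import math

def ray_segment_hit(origin, direction, seg_a, seg_b):
    """Return the intersection point of a ray with a line segment, or None.

    origin, direction: ray start (x, y) and direction (dx, dy).
    seg_a, seg_b: endpoints of the obstacle segment.
    """
    ox, oy = origin
    dx, dy = direction
    ax, ay = seg_a
    bx, by = seg_b
    sx, sy = bx - ax, by - ay              # segment direction
    denom = dx * sy - dy * sx              # zero when ray and segment are parallel
    if abs(denom) < 1e-9:
        return None
    t = ((ax - ox) * sy - (ay - oy) * sx) / denom  # distance along the ray
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom  # position along the segment
    if t >= 0 and 0 <= u <= 1:
        return (ox + t * dx, oy + t * dy)
    return None

def cast_ray_fan(origin, center_angle, spread, count, obstacles):
    """Cast `count` rays (count >= 2) fanned around `center_angle` and collect
    the nearest hit point per ray, recovering geometry a single ray would miss."""
    hits = []
    for i in range(count):
        angle = center_angle + spread * (i / (count - 1) - 0.5)
        direction = (math.cos(angle), math.sin(angle))
        nearest = None
        for seg_a, seg_b in obstacles:
            p = ray_segment_hit(origin, direction, seg_a, seg_b)
            if p is not None:
                d = math.hypot(p[0] - origin[0], p[1] - origin[1])
                if nearest is None or d < nearest[0]:
                    nearest = (d, p)
        hits.append(nearest[1] if nearest else None)
    return hits
```

Fitting a line or estimating curvature over the returned hit points then recovers the local shape of the obstacle or ground, which a single intersection point cannot describe.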