CN-121996057-A - User interaction method, device, equipment, storage medium and product

CN121996057A

Abstract

The application discloses a user interaction method, device, equipment, storage medium, and product. The method comprises: obtaining a plurality of first video files corresponding one-to-one to a plurality of travel routes; playing a second video file on a display terminal when a selection instruction sent by a first user is received, wherein the second video file is selected from the plurality of first video files through the selection instruction; obtaining, by utilizing the currently played video frame in the second video file, user information of a second user within the visual range of the first user, wherein the second user is a user simultaneously playing a video file other than the second video file among the plurality of first video files; and displaying a virtual avatar of the second user on the display terminal according to the user information, so that the first user interacts with the second user through the virtual avatar. The application can display other users in the picture while a user is using VR technology, thereby realizing interaction between users and the picture.

Inventors

  • Guo Yangyong
  • Liao Changjun
  • Qiu Changdong
  • Liu Wen
  • Li Da

Assignees

  • China Mobile (Hangzhou) Information Technology Co., Ltd. (中移(杭州)信息技术有限公司)
  • China Mobile Communications Group Co., Ltd. (中国移动通信集团有限公司)

Dates

Publication Date
2026-05-08
Application Date
2024-11-05

Claims (12)

  1. A method of user interaction, the method comprising: acquiring a plurality of first video files corresponding one-to-one to a plurality of travel routes; playing a second video file on a display terminal when a selection instruction sent by a first user is received, wherein the second video file is a video file selected from the plurality of first video files through the selection instruction; determining a second user within the visual range of the first user by utilizing the currently played video frame in the second video file, wherein the second user is a user playing any video file in the plurality of first video files simultaneously with the first user; and displaying a virtual avatar of the second user on the display terminal so that the first user interacts with the second user through the virtual avatar.
  2. The user interaction method of claim 1, wherein the video frame comprises a first virtual longitude and a first virtual latitude of the first user; the determining, by using the currently played video frame in the second video file, a second user within the visual range of the first user comprises: acquiring a third user which plays any video file in the plurality of first video files simultaneously with the first user; and determining a third user located within the visual range as the second user by using a second virtual longitude and a second virtual latitude of the third user together with the first virtual longitude and the first virtual latitude of the first user; wherein the visual range is represented by a first formula: A = (|x - x0| < m) ∩ (|y - y0| < m), where A is the visual range, x is the second virtual longitude, x0 is the first virtual longitude, y is the second virtual latitude, y0 is the first virtual latitude, and m is a preset viewing distance of the first user.
  3. The user interaction method of claim 1, wherein the displaying the virtual avatar of the second user on the display terminal comprises: acquiring a first coordinate of the second user in a three-dimensional coordinate system according to the video frame and user information of the second user, wherein the first coordinate is a three-dimensional pixel coordinate and the three-dimensional coordinate system is constructed based on the second video file; mapping the first coordinate to a second coordinate in a playing picture of the display terminal, wherein the second coordinate is a two-dimensional pixel coordinate; acquiring a pixel distance between the second user and the first user according to the user information; and displaying the virtual avatar at the second coordinate by using the pixel distance, wherein the display terminal comprises the second coordinate.
  4. The user interaction method of claim 3, wherein the user information comprises a third virtual longitude, a third virtual latitude, and a third virtual altitude, and the video frame comprises a first virtual longitude, a first virtual latitude, a first virtual altitude, and an advancing direction of the first user; the acquiring the first coordinate of the second user in the three-dimensional coordinate system according to the user information comprises: acquiring the first coordinate of the second user in the three-dimensional coordinate system according to a second formula, wherein x is the third virtual longitude, y is the third virtual latitude, h is the third virtual altitude, x0 is the first virtual longitude, y0 is the first virtual latitude, h0 is the first virtual altitude, γ is the angle between the first virtual longitude and the advancing direction, and (u0, v0, z0) is the first coordinate.
  5. The user interaction method of claim 3, wherein the mapping the first coordinate to a second coordinate in a playing picture of the display terminal comprises: acquiring a first visual-angle rotation angle and a second visual-angle rotation angle of the first user, wherein the first visual-angle rotation angle is a left-right rotation angle of the first user in the three-dimensional pixel coordinate system, and the second visual-angle rotation angle is an up-down rotation angle of the first user in the three-dimensional pixel coordinate system; rotating the first coordinate by the first visual-angle rotation angle to obtain a third coordinate; rotating the third coordinate by the second visual-angle rotation angle to obtain a fourth coordinate; and mapping the fourth coordinate to the second coordinate in the playing picture by using a preset focal length and a preset height, wherein the focal length is the distance between the playing picture and the camera device that recorded the first video file, and the height is the height between the camera device and the ground plane.
  6. The user interaction method of claim 5, wherein the mapping the fourth coordinate to the second coordinate in the playing picture comprises: mapping the fourth coordinate to the second coordinate in the playing picture by using a third formula, wherein (u2, v2, z2) is the fourth coordinate, (u, v) is the second coordinate, D is the focal length, and zc is the height.
  7. The user interaction method of claim 3, wherein the displaying the virtual avatar at the second coordinate by using the pixel distance comprises: adjusting the pixel size of the virtual avatar at the second coordinate by using the pixel distance, and adjusting the display effect of the virtual avatar at the second coordinate by using the traveling direction and speed of the second user.
  8. The user interaction method of claim 1, wherein the acquiring a plurality of first video files corresponding one-to-one to a plurality of travel routes comprises: recording any travel route by using an imaging device, and recording the longitude, latitude, altitude, and travel direction corresponding to each frame in the recording process to obtain the first video file.
  9. A user interaction device, the device comprising: a first acquisition module, configured to acquire a plurality of first video files corresponding one-to-one to a plurality of travel routes; a playing module, configured to play a second video file on a display terminal when a selection instruction sent by a first user is received, wherein the second video file is a video file selected from the plurality of first video files through the selection instruction; a second acquisition module, configured to determine a second user within the visual range of the first user by utilizing the currently played video frame in the second video file, wherein the second user is a user playing any video file in the plurality of first video files simultaneously with the first user; and a display module, configured to display a virtual avatar of the second user on the display terminal so that the first user interacts with the second user through the virtual avatar.
  10. A terminal device, comprising a processor and a memory storing computer program instructions, wherein the processor, when executing the computer program instructions, implements the user interaction method of any one of claims 1-8.
  11. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the user interaction method of any one of claims 1-8.
  12. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, implements the user interaction method of any one of claims 1-8.
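The square visual-range test of claim 2, A = (|x - x0| < m) ∩ (|y - y0| < m), can be sketched in Python as follows; the function and variable names are illustrative and not taken from the patent:

```python
def in_visual_range(x: float, y: float, x0: float, y0: float, m: float) -> bool:
    """Return True when a user at (x, y) (second virtual longitude/latitude)
    lies inside the square viewing window of half-width m centred on the
    first user's position (x0, y0) — the first formula of claim 2."""
    return abs(x - x0) < m and abs(y - y0) < m


def visible_users(first_user, third_users, m):
    """Filter the third users (claim 2) down to those within the visual
    range, i.e. the second users of claim 1."""
    x0, y0 = first_user
    return [(x, y) for (x, y) in third_users if in_visual_range(x, y, x0, y0, m)]
```

Because both coordinate differences are tested independently, the visual range is a square (not a circle) of side 2m around the first user.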

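Claims 5 and 6 describe mapping the first coordinate onto the playing picture via two view-angle rotations followed by a projection that uses the focal length D and the camera height zc. The patent's exact third formula is not reproduced in this text, so the sketch below substitutes a standard pinhole projection under that assumption; all names are illustrative:

```python
import math


def rotate_yaw(p, angle):
    # First visual-angle rotation (left-right) about the vertical axis.
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * z, y, s * x + c * z)


def rotate_pitch(p, angle):
    # Second visual-angle rotation (up-down) about the horizontal axis.
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (x, c * y - s * z, s * y + c * z)


def project(p, focal_d, zc):
    # Assumed pinhole projection onto the play frame at focal length D,
    # offset by the camera height zc above the ground plane.
    u2, v2, z2 = p
    return (focal_d * u2 / z2, focal_d * (v2 - zc) / z2)


def map_to_screen(first_coord, yaw, pitch, focal_d, zc):
    """First coordinate -> third (yaw) -> fourth (pitch) -> second (projection),
    following the order of steps in claim 5."""
    return project(rotate_pitch(rotate_yaw(first_coord, yaw), pitch), focal_d, zc)
```

With zero rotation angles the mapping reduces to the bare projection, which makes the role of D and zc easy to check in isolation.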
Description

User interaction method, device, equipment, storage medium and product

Technical Field

The application belongs to the technical field of computer text processing, and particularly relates to a user interaction method, device, equipment, storage medium, and product.

Background

Experience modes that play video pictures at the speed of a user's treadmill or spinning bike are widely applied in various running and riding fitness scenarios. With the rise of the metaverse concept in recent years, hardware devices such as VR glasses are regarded as a key portal for metaverse interaction and can bring users a brand-new sensory experience between the virtual and the real, so immersive running and riding fitness applications based on VR glasses have emerged one after another. The videos currently played in synchronization with treadmills and spinning bikes mainly comprise two types: live-action footage and 3D-modelled rendered pictures. In the current mainstream technology, the production cost of high-definition videos and 360-degree panoramic videos recorded with a camera is relatively low, and combining them with 3D glasses can immerse the user in a real environment; however, the playing process is related only to the user's riding or running speed, and during riding and running the user usually watches a single video alone and cannot interact much with the picture.

Disclosure of Invention

The embodiments of the application provide a user interaction method, device, equipment, storage medium, and product, which are used to solve the problem that a user cannot interact with the picture while using VR technology.
In a first aspect, an embodiment of the present application provides a user interaction method, the method comprising: acquiring a plurality of first video files corresponding one-to-one to a plurality of travel routes; playing a second video file on a display terminal when a selection instruction sent by a first user is received, wherein the second video file is a video file selected from the plurality of first video files through the selection instruction; determining a second user within the visual range of the first user by utilizing the currently played video frame in the second video file, wherein the second user is a user playing any video file in the plurality of first video files simultaneously with the first user; and displaying a virtual avatar of the second user on the display terminal so that the first user interacts with the second user through the virtual avatar.

In a second aspect, an embodiment of the present application provides a user interaction device, comprising: a first acquisition module, configured to acquire a plurality of first video files corresponding one-to-one to a plurality of travel routes; a playing module, configured to play a second video file on a display terminal when a selection instruction sent by a first user is received, wherein the second video file is a video file selected from the plurality of first video files through the selection instruction; a second acquisition module, configured to determine a second user within the visual range of the first user by utilizing the currently played video frame in the second video file, wherein the second user is a user playing any video file in the plurality of first video files simultaneously with the first user; and a display module, configured to display a virtual avatar of the second user on the display terminal so that the first user interacts with the second user through the virtual avatar.
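The route-acquisition step above (detailed in claim 8) records longitude, latitude, altitude, and travel direction for every frame of a first video file. A minimal sketch of such a per-frame metadata track follows; the data layout is an illustrative assumption, not the patent's actual format:

```python
from dataclasses import dataclass


@dataclass
class FrameMeta:
    """Per-frame positioning metadata recorded alongside a first video file
    (claim 8): position, altitude, and advancing direction."""
    frame_index: int
    longitude: float
    latitude: float
    altitude: float
    heading_deg: float  # travel direction in degrees


def record_route(samples):
    """samples: iterable of (longitude, latitude, altitude, heading) tuples,
    one per recorded video frame; returns the metadata track that
    accompanies the first video file."""
    return [FrameMeta(i, lon, lat, alt, hdg)
            for i, (lon, lat, alt, hdg) in enumerate(samples)]
```

Each currently played frame then carries the first user's virtual position, which is what the visual-range test and the coordinate mapping of the later claims consume.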
In a third aspect, an embodiment of the present application provides a terminal device, comprising a processor and a memory storing computer program instructions, wherein the processor, when executing the computer program instructions, implements the user interaction method of the first aspect.

In a fourth aspect, embodiments of the present application provide a computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the user interaction method of the first aspect.

In a fifth aspect, embodiments of the present application provide a computer program product, instructions in which, when executed by a processor of an electronic device, cause the electronic device to perform the user interaction method of the first aspect.

According to the user interaction method provided by the embodiments of the application, video files matched to a plurality of travel routes are acquired, providing the user with a rich and varied choice of visual content. When a first user selects a specific video file to play, the selection can be recognized and responded to, and the selected content is displayed on the display terminal immediately, enhancing the user experience. Other user (i.e., second user) informa