US-12625592-B2 - 3D document editing system
Abstract
A 3D document editing system and graphical user interface (GUI) that includes a virtual reality and/or augmented reality device and an input device (e.g., keyboard) that implements sensing technology for detecting gestures by a user. Using the system, portions of a document can be placed at or moved to various Z-depths in a 3D virtual space provided by the VR device to provide 3D effects in the document. The sensing technology may allow the user to make gestures while entering text via a keypad, thus allowing the user to specify 3D effects in the document while typing. The system may also monitor entries made using the keypad, apply rules to the entries to detect particular types of entries such as URLs, and automatically shift the detected types of entries forward or backward on the Z axis relative to the rest of the content in the document.
Inventors
- Seung Wook Kim
Assignees
- APPLE INC.
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2020-10-23
Claims (17)
- 1. A system, comprising: an electronic device configured to: display a document including text in a 3D virtual space for viewing by a user; monitor text input for a string of characters from an input device configured to receive the text input for the string from a user, wherein said monitor applies a rule, of one or more rules, to recognize, based on identification of the string as containing a rule string, the string as a particular type of text input, and wherein the rule specifies two or more selected from: a start for movement of the string of the particular type from the input device, a length of movement to move the string of the particular type, and a direction of movement to move the string of the particular type; detect, based on said monitor, that the text input comprises the particular type of text input specified by the rule; and move, based upon said detect that the text input from the input device comprises the particular type of text input specified by the rule, at least a portion of the document that includes the text input in accordance with the two or more selected from the start for movement, the length of movement, and the direction of movement specified by the rule.
- 2. The system of claim 1, wherein to move at least the portion of the document, the electronic device is further configured to: add 3D effects to at least the portion of the document based at least in part on the one or more rules.
- 3. The system of claim 1, wherein the electronic device is further configured to: determine whether to move a text area or a text portion of the document that includes the text input based on the one or more rules.
- 4. The system of claim 3, wherein the text area is one of a paragraph, a column, a section, a text field, or a text box.
- 5. The system of claim 3, wherein the text portion is one of a word, a sentence, a title, a heading, or a uniform resource locator (URL).
- 6. The system of claim 1, further comprising the input device, wherein the input device comprises a keyboard or a keypad, and wherein the electronic device comprises an augmented reality device, a virtual reality device, or a mixed reality device.
- 7. A device, comprising: a controller; and a projector configured to display a document including text in a 3D virtual space for viewing by a user under control of the controller, wherein the controller is configured to: monitor text input for a string of characters from an input device, wherein said monitor applies a rule, of one or more rules, to recognize, based on identification of the string as containing a rule string, the string as a particular type of text input, and wherein the rule of the one or more rules specifies two or more selected from: a start for movement of the string of the particular type from the input device, a length of movement to move the string of the particular type, and a direction of movement to move the string of the particular type; detect, based on said monitor, that the text input comprises the particular type of text input specified by the rule; and move, based upon said detect that the text input comprises the particular type of text input specified by the rule, at least a portion of the document that includes the text input in accordance with the two or more selected from the start for movement, the length of movement, and the direction of movement specified by the rule.
- 8. The device of claim 7, wherein to move at least the portion of the document, the controller is further configured to: add 3D effects to at least the portion of the document based at least in part on the one or more rules.
- 9. The device of claim 7, wherein the controller is further configured to: determine whether to move a text area or a text portion of the document that includes the text input based on the one or more rules.
- 10. The device of claim 9, wherein the text area is one of a paragraph, a column, a section, a text field, or a text box.
- 11. The device of claim 9, wherein the text portion is one of a word, a sentence, a title, a heading, or a uniform resource locator (URL).
- 12. The device of claim 7, further comprising the input device, wherein the input device comprises a keyboard or a keypad.
- 13. A method, comprising: performing, by an electronic device: displaying a document including text in a 3D virtual space for viewing by a user; monitoring received text input for a string of characters via an input device, wherein said monitoring applies a rule, of one or more rules, to recognize, based on identification of the string as containing a rule string, the string as a particular type of text input, and wherein the rule specifies two or more selected from: a start for movement of the string of the particular type from the input device, a length of movement to move the string of the particular type, and a direction of movement to move the string of the particular type; detecting, based on said monitoring, that the text input comprises the particular type of text input specified by the rule; and moving, based upon said detecting that the text input from the user comprises the particular type of text input specified by the rule, at least a portion of the document that includes the text input in accordance with the two or more selected from the start for movement, the length of movement, and the direction of movement specified by the rule.
- 14. The method of claim 13, further comprising: adding 3D effects to at least the portion of the document based at least in part on the one or more rules.
- 15. The method of claim 13, further comprising: determining whether to move a text area or a text portion of the document that includes the text input based on the one or more rules.
- 16. The method of claim 15, wherein the text area is one of a paragraph, a column, a section, a text field, or a text box.
- 17. The method of claim 15, wherein the text portion is one of a word, a sentence, a title, a heading, or a uniform resource locator (URL).
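The rule recited in claims 1, 7, and 13 pairs a "rule string" that identifies a particular type of text input with two or more of a start, length, and direction of movement. A minimal sketch of that structure follows; the class names, the URL regex, and the numeric values are illustrative assumptions, not part of the patent:

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    """One entry-type rule: a trigger pattern plus the movement it specifies."""
    pattern: re.Pattern   # identifies the "rule string" (e.g., a URL)
    start: str            # when movement begins (hypothetical trigger name)
    length: float         # Z-distance to move, in scene units
    direction: int        # +1 = forward on the Z axis, -1 = backward

# Hypothetical rule: URLs shift forward on the Z axis.
URL_RULE = Rule(
    pattern=re.compile(r"https?://\S+"),
    start="on_whitespace",
    length=0.05,
    direction=+1,
)

def apply_rules(text, rules):
    """Return (matched string, signed Z offset) for each rule match in text."""
    moves = []
    for rule in rules:
        for match in rule.pattern.finditer(text):
            moves.append((match.group(), rule.direction * rule.length))
    return moves
```

Monitoring the keypad input stream with `apply_rules` would flag an entry such as `https://example.com` for a forward shift of 0.05 units, matching the detect-and-move steps of the claims.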
Description
BACKGROUND

This application is a continuation of U.S. patent application Ser. No. 15/271,196, filed on Sep. 20, 2016, which is hereby incorporated by reference herein in its entirety.

Conventional graphical user interfaces (GUIs) for text generation and editing systems work in a two-dimensional (2D) space (e.g., a 2D screen or page on a screen). Highlighting areas or portions of text using these GUIs typically involves adding some effect in 2D, such as bold or italic text, underlining, or coloring.

Virtual reality (VR) allows users to experience and/or interact with an immersive artificial three-dimensional (3D) environment. For example, VR systems may display stereoscopic scenes to users in order to create an illusion of depth, and a computer may adjust the scene content in real-time to provide the illusion of the user interacting within the scene. Similarly, augmented reality (AR) and mixed reality (MR) combine computer-generated information with views of the real world to augment, or add content to, a user's view of their environment. The simulated environments of VR and/or the enhanced content of AR/MR may thus be utilized to provide an interactive user experience for multiple applications, such as interacting with virtual training environments, gaming, remotely controlling drones or other mechanical systems, viewing digital media content, interacting with the internet, or the like.

Conventional VR, AR, and MR systems may allow content consumers to view and interact with content in a 3D environment. Conventional VR systems may provide tools and applications that allow VR content creators to create and edit 3D objects, and may provide a text generation and editing system with a conventional 2D GUI that allows content creators to generate text content that can be attached to 3D objects.
However, these conventional VR systems typically do not provide text generation and editing systems with GUIs that allow content creators to generate and edit text with 3D effects in a VR 3D environment.

SUMMARY

Various embodiments of methods and apparatus are described for generating and editing documents with three-dimensional (3D) effects for text content in a 3D virtual view space. Embodiments of 3D document editing systems, methods, and graphical user interfaces (GUIs) are described that may include a virtual reality (VR) device, such as a VR headset, helmet, goggles, or glasses, for displaying documents in a 3D virtual space, and an input device (e.g., a keyboard) for entering and editing text or other content in the documents that includes sensing technology for detecting gestures by the user. The VR device and input device may be coupled via a wired or wireless (e.g., Bluetooth) connection. The VR device may be configured to display a 3D text generation and editing GUI in a virtual space that includes a virtual screen for entering or editing text in documents via a keypad of the input device.

Unlike conventional 2D graphical user interfaces, using embodiments of the 3D document editing system, a text area or text field of a document can be placed at or moved to various Z-depths in the 3D virtual space. The input device (e.g., keyboard) may include sensing technology, for example a motion, touch, and/or pressure sensing region or area on a keyboard, for detecting a user's gestures, for example motions of the user's thumbs when on or near the sensing region. The VR device may detect gestures made by the user via the sensing technology, and in response may move selected content in a document (e.g., words, paragraphs, sections, columns, sentences, text boxes, uniform resource locators (URLs) or other active text, etc.) forward or backward on a Z axis in the 3D virtual space relative to the rest of the document according to the detected gestures.
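The placement of document content at various Z-depths described above can be sketched as a depth attribute on each piece of content; the class and function names here are illustrative assumptions rather than the patent's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class DocumentNode:
    """A piece of document content (word, paragraph, text box, etc.)
    placed in the 3D virtual space."""
    content: str
    z: float = 0.0   # depth relative to the document plane; 0 = in-plane

def move_on_z(node, offset):
    """Shift content forward (positive offset) or backward (negative
    offset) on the Z axis relative to the rest of the document."""
    node.z += offset

# Bring a URL forward relative to the surrounding page content.
node = DocumentNode("https://example.com")
move_on_z(node, +0.05)
```

In a full system the renderer would read each node's `z` value when compositing the stereoscopic views, so forward-shifted content appears to float above the page.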
The sensing technology of the keyboard may be configured to allow the user to make the gestures while entering text via the keypad, thus allowing the user to apply 3D effects to text while typing. In some embodiments, the gestures may include a gesture (e.g., moving both thumbs down on a sensing region of a keyboard) to move an area of the document (e.g., a paragraph, section, column, text field, text box, etc.) forward on the Z axis in 3D space relative to the document. In some embodiments, the gestures may include a gesture (e.g., moving both thumbs up on the sensing region) to move an area of the document backward on the Z axis in 3D space relative to the document. In some embodiments, the gestures may include a gesture (e.g., moving one thumb down on the sensing region) to move a portion of text in the document (e.g., a uniform resource locator (URL), sentence, word, title or heading, etc.) forward on the Z axis in 3D space relative to other content of the document. In some embodiments, the gestures may include a gesture (e.g., moving one thumb up on the sensing region) to move a portion of text in the document backward relative to other content of the document.
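The example gesture-to-action pairings above can be summarized as a lookup table; the gesture names, the step size, and the dictionary layout below are hypothetical, chosen only to mirror the thumb gestures given as examples in the text:

```python
# Hypothetical mapping from a sensed thumb gesture to an editing action:
# (scope to move, direction on the Z axis).
GESTURE_ACTIONS = {
    ("both_thumbs", "down"): ("area", +1),     # area forward on Z
    ("both_thumbs", "up"):   ("area", -1),     # area backward on Z
    ("one_thumb", "down"):   ("portion", +1),  # text portion forward
    ("one_thumb", "up"):     ("portion", -1),  # text portion backward
}

def interpret(gesture, step=0.05):
    """Translate a detected gesture into (target scope, signed Z offset).

    `step` is an assumed per-gesture movement distance in scene units.
    """
    scope, sign = GESTURE_ACTIONS[gesture]
    return scope, sign * step
```

A VR device receiving gesture events from the keyboard's sensing region could call `interpret` on each event and then apply the returned offset to the currently active text area or text portion.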