US-20260127327-A1 - Quantum Computing Fractal Compression And Decentralized Computing For Predictive Physics Simulation

US20260127327A1

Abstract

The disclosure relates to methods for predictive physics compression (PPC) for efficient simulations and calculations, leveraging techniques described as fractal compression. Unlike traditional calculus-based methods relying on differential equations and related numerical methods, embodiments of the disclosure are directed towards representation and calculation of physical systems as hierarchical frames within a universal grid, where each frame encapsulates properties such as position and velocity. The simulations and calculations progress in discrete time steps, recursively traversing the frame hierarchy. Each tick involves actions such as incorporating the previous frame relation's displacement, detecting collisions, applying stochastic movement and attractive force transforms, and limiting displacement based on defined constraints. Embodiments of the disclosure may be implemented on a single device or distributed across a network, with a coordination server distributing frames to client devices for parallel processing.

Inventors

  • Essam Abadir

Assignees

  • Essam Abadir

Dates

Publication Date
2026-05-07
Application Date
2024-11-06

Claims (14)

  1. A method for simulating a physical system on a computing device having at least one processor, the method comprising: receiving a representation of the physical system, the representation comprising a set of frames, wherein each frame comprises: a position within a universal grid; and at least one leaf frame residing at a lowest level in the hierarchy of the simulation; advancing the simulation on a universal tick-by-tick basis; on each universal tick, traversing the hierarchy of frames level by level and calculating frames by performing the following operations: incorporating information on displacement of the previous frame relation, if any, in the traversal hierarchy into the current frame, such that the current frame is displaced by the same amount on the universal grid as the previous frame relation prior to the current frame calculating its own additional displacement; detecting collisions between frames at each level and, for each collided frame pair: ensuring their leaf quanta in the hierarchy are not in the same position in the universal grid, and mediating leaf quanta collisions by moving one or the other leaf quantum to a non-collided position on the universal grid; applying a stochastic movement transform to displace each frame within the universal grid; applying an attractive force transform to displace the quanta children of each frame towards a point in their parent frame; limiting the displacement attributable to the sum of the frame's stochastic movement transform and attractive force transform to a maximum magnitude of absolute distance measured on the universal grid, and accumulating any unmoved displacement into an accumulator variable impacting future move transforms or likelihoods of displacement; calculating the new position of the frame on the universal grid based on the applied transforms and displacement limit; re-calculating aggregate properties of the frame based on the properties of its predecessor frame relations in the traversal order; exposing the recalculated properties of the frame to the next level of the hierarchy in the traversal order; and updating the representation of the physical system with the new position and other properties of the frame.
  2. The method of claim 1, wherein the stochastic movement transform is based on a random matrix.
  3. The method of claim 1, wherein the attractive force transform displaces the frame towards the center of its parent frame by a fixed amount per universal time step.
  4. The method of claim 1, wherein the maximum relative displacement limit restricts the displacement of a frame relative to its parent frame per universal time step.
  5. The method of claim 1, wherein the detecting collisions operation identifies collisions by comparing positions of the child frames of a given frame in the hierarchy using different means, such as binary searches or quadtrees.
  6. The method of claim 1, wherein creating a parent frame is performed if a detected collision occurs, associating the collided child frames with the new parent frame one level higher in the frame hierarchy.
  7. The method of claim 1, further comprising distributing the calculations of each frame amongst computing threads.
  8. The method of claim 1, further comprising distributing the calculations of each frame amongst GPUs.
  9. A network device, comprising: a transceiver to send and receive data over a network; and a processor that is operative on the received data to perform actions, including: distributing a plurality of frames, representing portions of a physical system simulation (the “Experiment”), to a plurality of different calculation nodes A . . . M (e.g., client devices, peer-to-peer configured or otherwise); wherein each calculation node A . . . M is configured to select, or be instructed to select, at least one frame j from the plurality of frames, and to perform at least one block in one pass of a Fractal Physics Computation Language (FPCL) algorithm (process 500) upon the selected at least one frame j; receiving at least one resulting output result for frame j from a calculation node A . . . M, the at least one resulting output result generated by the calculation node performing at least one block of one pass of the FPCL algorithm on the at least one selected frame j; if at least one resulting output result indicates that a termination threshold is unsatisfied, then sending the resulting output to another one of the calculation nodes to perform at least one block of one pass of the FPCL algorithm upon the resulting output frame; and if the received at least one resulting output frame indicates that the termination threshold is satisfied, then sending the resulting output frame to each of the other calculation nodes to replace the corresponding selected frame j in the Experiment.
  10. A method of performing fractal-based physics simulation in a distributed computing environment, the method comprising: transmitting, to each of a plurality of calculation nodes A . . . M, a plurality of frames representing portions of a physical system (the “Experiment”); receiving, at a network device, at least one frame generated by performing at least one pass of an FPCL algorithm (process 500) upon the plurality of frames at one of the plurality of calculation nodes; and analyzing, at the network device, the received frame to determine whether it satisfies a termination threshold.
  11. The system of claim 8, wherein each of the calculation nodes A . . . M applies a different set of FPCL parameters from the others.
  12. The method of claim 9, wherein each of the calculation nodes A . . . M applies a different set of FPCL parameters from the others.
  13. The method of claim 9, further comprising: recursively applying the FPCL algorithm to the received frame until the termination threshold is satisfied.
  14. A system for fractal-based physics simulation, comprising: a plurality of calculation nodes A . . . M, each configured to execute an FPCL algorithm (process 500) on a portion of a physical system simulation; and a network device configured to distribute portions of the physical system simulation to the calculation nodes and to collect and analyze resulting output frames from the calculation nodes.
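The per-tick operations recited in claim 1 can be sketched in code. This is a minimal, illustrative sketch only: the grid size, displacement cap, step distribution, and single-unit attraction and collision moves are assumptions for readability, not parameters taken from the disclosure.

```python
import random

# Minimal sketch of one "universal tick" from claim 1. GRID, MAX_STEP,
# and the step distribution are illustrative assumptions; the attractive
# force transform and collision mediation are simplified to one-unit moves.

GRID = 1000       # side length of the universal grid (assumed)
MAX_STEP = 2      # maximum displacement magnitude per tick (assumed)

class Frame:
    def __init__(self, pos, children=None):
        self.pos = list(pos)             # position on the universal grid
        self.children = children or []   # empty list => leaf quantum
        self.accumulator = [0, 0]        # unmoved displacement, carried forward

def tick(frame, parent_displacement=(0, 0)):
    """Advance a frame and its subtree by one universal tick."""
    # 1. Incorporate the previous frame relation's displacement.
    dx, dy = parent_displacement
    frame.pos[0] = (frame.pos[0] + dx) % GRID
    frame.pos[1] = (frame.pos[1] + dy) % GRID

    # 2. Stochastic movement transform, plus any banked remainder.
    sx = random.choice([-1, 0, 1]) + frame.accumulator[0]
    sy = random.choice([-1, 0, 1]) + frame.accumulator[1]

    # 3. Limit the displacement; accumulate the unmoved remainder.
    mx = max(-MAX_STEP, min(MAX_STEP, sx))
    my = max(-MAX_STEP, min(MAX_STEP, sy))
    frame.accumulator = [sx - mx, sy - my]
    frame.pos[0] = (frame.pos[0] + mx) % GRID
    frame.pos[1] = (frame.pos[1] + my) % GRID

    for child in frame.children:
        # 4. Attractive force transform: pull each child one grid unit
        #    toward its parent frame's position.
        for i in (0, 1):
            if child.pos[i] < frame.pos[i]:
                child.pos[i] += 1
            elif child.pos[i] > frame.pos[i]:
                child.pos[i] -= 1
        # 5. Recurse: children inherit this frame's total displacement.
        tick(child, (dx + mx, dy + my))

    # 6. Mediate collisions: no two children may share a grid cell.
    seen = set()
    for child in frame.children:
        while tuple(child.pos) in seen:
            child.pos[0] = (child.pos[0] + 1) % GRID
        seen.add(tuple(child.pos))
```

Because every operation is a bounded integer update on the grid, one tick of a subtree costs time linear in the number of frames it contains, which is the computational motivation the background section develops.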
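The distribution loop of the network claims (9, 10, 13) can likewise be sketched. In this hypothetical sketch, each FPCL "block" or "pass" (process 500) is stood in for by a pure function per calculation node, and the round-robin schedule, `terminated` predicate, and round cap are assumptions, not details from the disclosure.

```python
from typing import Callable, Dict, List

# Hypothetical sketch of the coordination server's loop: distribute
# frames, recirculate unsatisfied results to other nodes, commit
# satisfied results. Node behavior is a stand-in for the FPCL algorithm.

def coordinate(frames: List[Dict],
               nodes: List[Callable[[Dict], Dict]],
               terminated: Callable[[Dict], bool],
               max_rounds: int = 100) -> List[Dict]:
    """Distribute frames to calculation nodes and recirculate each
    result until it satisfies the termination threshold."""
    results = []
    for j, frame in enumerate(frames):
        out = nodes[j % len(nodes)](frame)   # first pass on frame j
        rounds = 1
        while not terminated(out) and rounds < max_rounds:
            # Threshold unsatisfied: forward the output to another
            # node for another pass of the algorithm (claim 13).
            out = nodes[(j + rounds) % len(nodes)](out)
            rounds += 1
        results.append(out)                  # satisfied: commit frame j
    return results
```

For example, with nodes that each increment a counter and a threshold of 3, `coordinate([{"v": 0}], [inc, inc], lambda f: f["v"] >= 3)` circulates the frame between nodes until the counter reaches 3.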

Description

TECHNICAL FIELD

The present invention relates generally to physics simulations and, more particularly but not exclusively, to employing fractal approaches for efficient computational algorithms and computing architectures for the calculation of physical phenomena.

BACKGROUND

Physics simulations involve predicting the future state, or time evolution, of a physical system. One can think of physics simulations as creating “pictures” of the future, similar to how computer-generated graphics depict the swirling of smoke over a fire or the way shattered glass produces tiny reflections in its shards. As covered in the popular treatment Chaos: Making a New Science by James Gleick, “[some scientists] say the twentieth century will be remembered for three things: Relativity, Quantum Mechanics, and Chaos,” and envisioning physics as pictures of the future is a technique largely attributable to innovations by Edward Lorenz and his weather simulations. Unfortunately, to this day, our ability to achieve breakthroughs like cold fusion, superconductors, and even cancer cures is fundamentally rate-limited by our ability to compute these three great areas as one thing and, therefore, by the speed at which we can create these “pictures” of how quantum particles, atoms, molecules, and celestial bodies interact. Reducing or “compressing” calculations while limiting errors is crucial to enabling such innovations. Despite many advances in computation and our understanding of physics, simulations of fusion reactions, as just one example, require supercomputers with millions of CPU or GPU cores, and even then our ability to accurately simulate real-world phenomena remains limited. Quantum Mechanics and General Relativity are based on continuous concepts of space-time, where infinite amounts of computing power are needed to perfectly solve even the simplest systems.
This is because, in the mathematical formulations of these theories, there are forces with infinite reach, where every particle can simultaneously influence every other particle. Computing the future states of a real-world physical system based on a continuous space-time requires the continuous mathematics of calculus and non-linear differential equations. This challenge was initially outlined by Henri Poincaré in his work on the Three-Body Problem, which gave birth to the field of Chaos Theory. We now know that the Three-Body Problem, and the more general N-Body Problem, are not solvable in closed form in a continuous space-time model, and perfectly predicting the future state of such systems is computationally intractable. Since there is currently no such thing as a “continuous supercomputer,” we instead use digital computers to simulate physical systems. In computer science, the number of operations required to perform a calculation is characterized by the “computational complexity” of an algorithm using Big O notation. For example, if a particle physics simulation with N particles is constrained such that each particle interacts with only a fixed number of other particles in each time step, then the computational complexity is O(N), which is manageable with modern computing resources. However, if each particle influences the calculation of every other particle simultaneously (without being limited to pairwise interactions), the problem becomes exponentially complex, with computational complexity on the order of O(N^N). As an illustrative example, for an O(N) problem where N=1 billion particles, it is feasible to model on modern multi-core processors that perform billions of calculations per second per core. On the other hand, if N is only 100 and the problem is O(N^N), then the number of calculations required is 100^100 (a 1 followed by 200 zeros), which is beyond the capability not only of current computers, but of any digital computer that could ever be invented.
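The arithmetic behind these complexity figures can be checked in a few lines. The function names and the fixed-neighbor count k=8 below are illustrative assumptions; the particle counts come from the text.

```python
# Toy check of the operation counts discussed above.

def bounded_ops(n, k=8):
    """O(N): each particle interacts with at most k fixed neighbors
    per time step (k=8 is an assumed illustrative bound)."""
    return n * k

def all_to_all_ops(n):
    """O(N^N): every particle simultaneously influences the calculation
    of every other particle."""
    return n ** n

# One billion particles with bounded interactions stays in the billions:
print(bounded_ops(1_000_000_000))        # 8000000000

# One hundred particles, all-to-all: 100^100, a 1 followed by 200 zeros.
print(len(str(all_to_all_ops(100))))     # 201 digits
```

The contrast is the point: the O(N) count at N = 10^9 is within reach of commodity hardware, while the O(N^N) count at N = 100 already exceeds any conceivable digital computer.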
Rather than give up on using computers, physicists often employ numerical methods that introduce approximations to make N-Body problems computationally tractable. By making simplifying assumptions—such as limiting interactions to pairwise effects and neglecting higher-order simultaneous influences—they reduce the computational complexity, albeit at the cost of introducing errors. For instance, if we limit the interactions to pairwise effects without considering simultaneous changes causing recalculations, the complexity becomes O(N^2). This is computable for smaller systems with, say, 10,000 particles, but becomes unmanageable as N grows larger. Further approximations can reduce the complexity to O(N log N) or even O(N), but in a continuous space-time model, these approximations introduce additional errors, which may not be acceptable for certain applications. It is these errors that limit our ability to achieve advancements in areas such as cold fusion, new materials, and novel medicines. Mathematics, composed of symbols and operators, serves as the language for expressing physical theories. However, different the