- How to perform calculations involving huge numbers of basic entities, large numbers of loops, and many quarks and particles, in finite time on finite computer systems
- What to calculate, that is, what functions to use for fundamental interactions
- How to get the basic forces of physics from a three entity/three effect model
Calculations to explore the mnp Model will need to advance in many different directions. Many issues can be investigated independently. A sample list of just a few of the issues:
- Computational issues
- Efficient modeling of huge numbers of entities
- Display of entities and their contribution to fields (color, hue, pattern, useful alternations, ...)
- Using parallel computation, either networked or threaded. Obviously, tuning the Model by trying different parameters for given calculations can be done on networked machines once the calculations are determined.
- Geometry of coil coverage and length, twisting and stranding on a sphere
- Geometry of time dilation and length compression of coils with movement
- Geometry of a filament making a turn at c while maintaining a separation distance (and what that separation distance might mean to a “sphere”)
- “Fundamental Forces”
- magnetic fields
- static electric fields (perhaps the hardest fields to visualize!)
- moving electric fields and moving charge attraction/repulsion (perhaps the hardest to create and describe!)
- gravitation
- Different Scales of Calculation: Not all models will work only with the smallest entities. Some will mix small scales with a large-scale influence; some may work only at large scales.
- Entity interactions in field superposition
- Entity formation of filaments
- Entity formation of strands and coils
- Entity interaction to modulate and cancel gravity waves
- Gravity from a moving body may be effectively modeled as a shell of the given mass when examining fields from a moving body; the same applies to charge.
- Gravity from a large mass affecting coils in electrons
- Gravity from a large mass affecting larger coils in quarks ...
- Gravity from a large mass affecting photons
- Gravity from a large mass affecting individual entities in electric or magnetic fields
- Model the precession of Mercury, the Pioneer gravitational anomalies, and pulsars
Reinventing the wheel isn't all bad if the process leads to deep understanding.
The mnp Model has a number of computational advantages over other theories, though acceptance will be determined by whether the physics works rather than by whether computation is convenient. One advantage is that the basic entities move at a constant speed c. Another is that they have a uniform and small range of influence and hence a uniform mass. Another is that the entities act in a flat Minkowski space: curvature and compression result from their interaction, are a function of how matter measures space and time, and do not affect the basic entities and their movements.
Entity Representation
Entities would be lightweight objects, preallocated and non-moving in memory during the course of a simulation, unless multiple remote or networked processors are involved and an entity leaves a local region. From a programming standpoint, no inheritance or specialization is needed; the interactions are the same for all. Since the three basic entities vary only in axis relative to travel direction, whether to identify the type of entity with an "entity type" field is an open question.

Basic information for each entity: location, travel direction, and axis direction. The axis direction could be stored as an angle in the plane perpendicular to the travel direction, which would necessitate more computation, or as a three-vector, possibly in fixed point. The travel direction vector needs to be high precision.
At least a single link to the next entity in the region is likely.
Most simulations are likely to be "discrete," with fixed time periods. For the simplest calculations, either the accumulated change to the direction of travel and direction of axis, or the new direction of travel and axis, will be stored. More complicated models of interaction, discussed later, will require more information with each entity object.
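A minimal sketch of such a lightweight entity object, with illustrative field names (none of these names are fixed by the Model, and the accumulated-change fields assume the simplest discrete scheme just described):

```python
from dataclasses import dataclass

# Sketch of one lightweight, preallocated entity record. All three basic
# entity types share this layout, so no inheritance or specialization is
# used; field names are illustrative only.
@dataclass
class Entity:
    # location within the current region (fractional coordinates)
    x: float; y: float; z: float
    # travel direction: a high-precision unit three-vector
    dx: float; dy: float; dz: float
    # axis direction: could instead be one angle in the plane
    # perpendicular to travel, trading storage for extra computation
    ax: float; ay: float; az: float
    # accumulated change to the travel direction for this time step
    ddx: float = 0.0; ddy: float = 0.0; ddz: float = 0.0
    # single link to the next entity in the same region
    next: "Entity | None" = None
```

Whether a separate entity-type tag is worth a byte per record remains the open question noted above.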
Representing Coordinates
The mnp Model has the computational benefit that the basic entities interact with each other only over a short distance, so computations of interaction between entities can confine searches for influenced or influencing entities to a small region. Another benefit is that entities normally maintain some minimum separation, so while the scale is tiny, infinitely small dimensions are never needed.

If a whole number represents the coordinates of a region that is some convenient multiple of the influence distance, the resolution of the fractional part of a coordinate needs to be finer than the separation distance normally maintained by entities such as those in a filament or strand. Whether a 16-bit unsigned fraction representing (0 to 1] suffices is not yet known in the mnp Model.
If we intend to use an index to determine which region(s) contain an entity, the top bit of the fraction can indicate whether the relevant adjoining region has an index one greater (top bit set) or one less (top bit clear).
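A sketch of this representation, assuming a 16-bit fraction as discussed above; the function names are illustrative, not part of the Model:

```python
# One coordinate stored as a whole region index plus a 16-bit unsigned
# fraction. The top bit of the fraction selects the nearer adjoining
# region along this axis: set means index + 1, clear means index - 1.
FRACTION_BITS = 16
SCALE = 1 << FRACTION_BITS  # fraction resolution: 1/65536 of a region

def pack(coord: float) -> tuple[int, int]:
    """Split a coordinate (in units of the region size) into (index, fraction)."""
    index = int(coord // 1.0)
    fraction = int((coord - index) * SCALE) & (SCALE - 1)
    return index, fraction

def neighbor_index(index: int, fraction: int) -> int:
    """Index of the adjoining region that also needs to be searched."""
    top_bit_set = fraction >> (FRACTION_BITS - 1)
    return index + 1 if top_bit_set else index - 1
```

Whether 16 bits of fraction resolve the separation distances actually maintained by filaments and strands is, as noted above, not yet known.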
Whether such fixed point representations are helpful is another issue. Efforts to calculate in fixed point have been warranted at some points in history but may currently be unnecessary.
Prior to the days of CDC 6600's and 7600's, fixed point calculations were most efficient. Then for a while, supercomputers handled floating point faster than integers. Yes, the young man who doesn't know enough physics to know that a structural Model of the universe is impossible is old enough to remember CDC and to have worked on Xerox Data Systems 940's with floating point handled by input-output to a twenty liter module. In the days of the 8080 and the early IBM PC era, fixed point was again most efficient. At some point with SUPER parallelism and very lightweight processors, we may benefit from fixed point calculations again.
Partitioning Space
A quick look at how to partition the space to be modeled when working with just the tiny entities is included here as a sample of the thinking needed for computation in the mnp Model.

If we divide up the space in multiples of the influence diameter, what multiple is most efficient? For now, we will use i~ for twice the minimum distance at which entities no longer influence each other. One can think of i~ as just over the diameter of the region where entities can affect or be affected by an entity at the center of the sphere. For computation, cubes of that size are convenient.
If we set the partitions of space at the influence diameter, eight regions need to be investigated for interaction between entities. The influence diameter should represent an open interval (the author's preferred approach) or the divisions should be just slightly greater than the influence diameter. If dimensions are taken to be a power of two times that influence diameter, coordinates can be resolved to an index (potentially large) and a fractional mantissa for which floating round-off should not be a problem. We do not expect to simulate suns at the level of entities, so loss of precision in the index may not be a problem.
If the divisions are larger than i~, an entity's position within a cube may allow 1, 2, or 4 regions to be investigated rather than 8. Fewer regions may result in fewer cache misses, but will lead to more work checking for overlap, and hence influence, in those larger regions with potentially more entities.
Entities' centers are used to place entities in a region. Bigger regions lead to more comparisons but less need to move entities into other regions. Bigger regions may also allow for less memory thrashing if regions are kept in separate memory areas or on separate computers.
If n is the region dimension divided by i~, where n >= 1, all neighboring entities can be found in 1, 2, 4, or 8 regions. The formulae for the fraction of searches that require searching 1, 2, 4, and 8 regions are given in Table 1, where "extra" is (n-1)/n.
Table 1: Probability of Finding All Influenced Entities in n Regions

| Number of Regions | Formula for Probability |
|---|---|
| 1 | extra^3 |
| 2 | 3(1/n)extra^2 |
| 4 | 3(1/n)^2 extra |
| 8 | (1/n)^3 |
The following table then shows how much computational "work" is expected at various region sizes. "Expected Regions to Check" represents the average number of regions that need to be accessed, and "Expected Volume to Check" represents the average relative number of entities to be checked for overlap and hence influence.
Note that in checking two entities, if their centers are further apart than i~ they will not influence each other. No square root need be involved, since the squared distance can be compared to the squared influence distance (or squared influence radius). Back in the 1980's, skipping a square root might have been important to the development of custom processors at SLAC, but the author is sure many people have had the same idea. In the 2010's, skipping a square root is not such a big deal.
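A sketch of the square-root-free check; the strict comparison treats the influence distance as an open interval, per the author's stated preference, and the tuple-based points are illustrative:

```python
# Influence test without a square root: compare the squared
# center-to-center distance against the squared influence distance.
# The strict "<" treats the influence distance as an open interval.
def within_influence(a, b, influence_distance: float) -> bool:
    dx = a[0] - b[0]
    dy = a[1] - b[1]
    dz = a[2] - b[2]
    return dx * dx + dy * dy + dz * dz < influence_distance * influence_distance
```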
Expected values represent the sum of the products of the probability of each situation and the work or benefit involved in that situation, in the classic manner of calculating expected values. Note that the expected work of checking overlap goes up steadily, by more than the square of the region size, while the expected number of regions to check declines more slowly than the inverse of the region size. With regions 8 times as big as the influence diameter, only 67% of the entities in a region will see all influencing or influenced entities within that one region. With regions exactly as big as the influence diameter, the probability of needing to scan 8 regions is 1.
Partitioning Space for Calculations with a Fixed Distance of Influence

| Region Size (i~ multiple) | P(1 region) | P(2 regions) | P(4 regions) | P(8 regions) | Expected Regions to Check | Expected Volume to Check |
|---|---|---|---|---|---|---|
| 1 | 0.000 | 0.000 | 0.000 | 1.000 | 8.000 | 8.000 |
| 1.01 | 0.000 | 0.000 | 0.029 | 0.971 | 7.882 | 8.121 |
| 1.1 | 0.001 | 0.023 | 0.225 | 0.751 | 6.958 | 9.261 |
| 1.2 | 0.005 | 0.069 | 0.347 | 0.579 | 6.162 | 10.648 |
| 1.5 | 0.037 | 0.222 | 0.444 | 0.296 | 4.630 | 15.625 |
| 2 | 0.125 | 0.375 | 0.375 | 0.125 | 3.375 | 27.000 |
| 3 | 0.296 | 0.444 | 0.222 | 0.037 | 2.370 | 64.000 |
| 4 | 0.422 | 0.422 | 0.141 | 0.016 | 1.953 | 125.000 |
| 8 | 0.670 | 0.287 | 0.041 | 0.002 | 1.424 | 729.000 |
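The tabulated values can be checked with a short script. This is a sketch under the stated formulas; the expected-volume formula, (n+1)^3, is inferred here from the tabulated values rather than stated in the text:

```python
# Reproduce one row of the tables above from the formulas.
# n is the region dimension divided by i~ (n >= 1); "extra" is (n-1)/n.
def region_probabilities(n: float):
    extra = (n - 1.0) / n
    p1 = extra ** 3                      # all neighbors in 1 region
    p2 = 3 * (1.0 / n) * extra ** 2      # 2 regions
    p4 = 3 * (1.0 / n) ** 2 * extra      # 4 regions
    p8 = (1.0 / n) ** 3                  # 8 regions
    return p1, p2, p4, p8                # the four probabilities sum to 1

def expected_regions(n: float) -> float:
    """Average number of regions that must be accessed per search."""
    p1, p2, p4, p8 = region_probabilities(n)
    return 1 * p1 + 2 * p2 + 4 * p4 + 8 * p8

def expected_volume(n: float) -> float:
    """Relative volume to scan for overlap, inferred as (n+1)^3."""
    return (n + 1.0) ** 3
```

For n = 2 this yields probabilities 0.125, 0.375, 0.375, 0.125, expected regions 3.375, and expected volume 27.000, matching the table row.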
It would be possible to investigate regions smaller than the influence diameter, but for regions >= 0.5 times the diameter the number of regions to search would be 27, 18, 12, or 8, so locality of reference and moving entities between regions would both be compromised. For regions >= 1/3 of the diameter, the number of regions would be 64, 48, 36, or 27.
Thoughts About Simulation and Parallel Processing
Discrete simulation, taking fixed time slices, seems appropriate for many of the calculations needed in the mnp Model. Discrete simulation operates by creating a new model of where everything is based on where it was. This requires either twice the memory or doubled location/direction storage for each particle; a processor then works completely on one effector, or completely on one receiver, an idea for which the author wishes to thank Greg Ward of Radiance (personal communication, 1987; see also G. Ward, Rendering with Radiance: The Art and Science of Lighting Visualization, Morgan Kaufmann, San Francisco, 1998). With each pass, we don't move location or direction information; we just alternate indexes for subsequent passes. If the model is based on influencing ("shooting" in ray-tracing parlance), we calculate the influence on each entity, and then a quick, linear second pass applies that influence. No index is needed (a savings), because it is a two-pass process: calculations are done at fixed offsets within each of the lightweight objects representing an entity.

If limits are placed on the effect given, we need to determine how many entities were influenced (keeping a list if we don't want to rescan) and perhaps how much influence was offered; we add up the overall effect, then make another pass to normalize the effects. If limits are placed on the effect received, those are normalized in the receipt phase. If entity influence works by "if x can't receive part of our influence, give that to another," then the computations seem to be in deep trouble.

Sorting and merging on parallel processors is quite feasible, though it may or may not be needed in sorting entities into regions. One note: if a processor is merging a number of lists, a second processor could work from the back of those lists, merging in the other direction. Partitioning the merge sets would probably be less efficient. Binary searches on n > 1 sorted lists might be interesting.
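The double-buffered, two-pass structure described above can be sketched as follows; the entity layout and the `influence` callback are hypothetical stand-ins, not the Model's actual interaction functions:

```python
# Double-buffered discrete simulation step. Each entity keeps two
# direction slots; `cur` indexes the slot being read and 1 - cur the
# slot being written, so no data moves between time slices, only the
# index alternates.
def step(entities, cur: int, influence):
    new = 1 - cur
    # Pass 1: accumulate influence on each receiver at fixed offsets.
    for e in entities:
        e.delta = [0.0, 0.0, 0.0]
        for other in e.neighbors:
            d = influence(other, e, cur)  # reads only the `cur` slot
            for i in range(3):
                e.delta[i] += d[i]
    # Pass 2: quick, linear sweep applying the accumulated influence.
    for e in entities:
        for i in range(3):
            e.direction[new][i] = e.direction[cur][i] + e.delta[i]
    return new  # caller reads this slot on the next time slice
```

A receiver-side or sender-side limit would add the extra normalization pass described above between the two passes shown here.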
Linked Lists of Entities in a Region
If we keep linked lists of entities in a region and need to move entities as they cross into different regions, is a doubly linked list needed, or can we work with a singly linked list efficiently? If the item to be snipped is not the last item, swap it with the next item and then place the item (which is now in the next position in the list) where it belongs, at the head of the other region's list. Memory references: the item to be moved, the next item in the list, and the head of the list to which the item is moved. Only if the item to be moved is at the END of the linked list (its "next" is zero) do we need to search the list.

If the singly linked list has the last item point to the head of the list, we need a method to identify the "end" of the list. For the item to be snipped, we need to check whether its next pointer is the head of the list, swap with the actual first item in the list, and adjust the head and next pointers appropriately. If the item to be removed is the only entry in the list, then resetting the list header suffices.
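The snip-by-swapping trick for the plain (non-circular) singly linked list can be sketched as follows; the node layout and the dict-based region headers are hypothetical:

```python
# Remove `node` from a singly linked region list given only a pointer
# to it, then push the removed record onto the head of another region's
# list. If `node` is not last, we swap payloads with its successor and
# snip the successor out: three memory references, no list scan.
class Node:
    def __init__(self, payload, next=None):
        self.payload = payload
        self.next = next

def move_to_region(region, node, dest):
    """region and dest are dicts with a 'head' entry (hypothetical layout)."""
    if node.next is not None:
        succ = node.next
        # Swap payloads, then unlink the successor node.
        node.payload, succ.payload = succ.payload, node.payload
        node.next = succ.next
        moved = succ  # succ now carries the payload being moved
    else:
        # Only at the END of the list do we need to scan for a predecessor.
        prev = region["head"]
        if prev is node:
            region["head"] = None
        else:
            while prev.next is not node:
                prev = prev.next
            prev.next = None
        moved = node
    # Place the snipped record at the head of the destination list.
    moved.next = dest["head"]
    dest["head"] = moved
```

The circular variant, where the last item points back to the head, needs the additional head-of-list checks described above.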
Are Octrees Needed?
If the regions to be modeled are huge and the density of basic entities low (the vast majority of regions containing less than one entity), then octrees might be an efficient method of locating entities. Since the need for "Modified Newtonian Dynamics" is seen only where gravitons are spaced further apart than their influence distance, and since there is the question of whether the spacing applies to the 3-d spacing of the gravitons or the 2-d spacing perpendicular to travel, the need for sparse representations is expected to be limited. The basic entities are everywhere, they're everywhere.
What to Compute
Computer scientists enjoy the discussion of how to compute. The domain specialist, the physicist in this case, wants to know WHAT to compute. As described in the main mnp paper, the functions and constants that represent entity interaction might be complicated. The technical side of computations for the mnp Model is interesting and will become important, but the computational issues will ultimately be driven by the physics of the model. The following snippets illustrate the range of needs.
Calculating Entity Interactions
Perhaps we can categorize interaction types based on "how much total influence can an entity have," "how much influence can an entity have on a single entity," and the reverse, "how much can an entity receive from all nearby entities" and "how much influence can an entity receive from a single entity," which should mirror the second question. Additionally, questions can be asked about whether influence is "sent" when it is apparently not received due to balancing influences. Further, details of when an influence takes effect after being received may be important. Certainly, realizing that stable coils require the Travel Alignment effect and/or the Axis Alignment effect to be slightly "forward" affects how those alignment tendencies operate and are programmed.

At the entity level, influences need not act like classical forces nor like quantum effects. Whatever works, since the universe clearly does function. We can consider influences on a basic entity to be instantaneous; we do not have to operate within entities as if c is constant. Or we could posit that influence takes time, at speed c, to travel to the center or some other point, which THEN changes orientation or direction. So it could seem, at this time, that the mnp Model has too many degrees of freedom in describing the three interactions of the three basic entities.
Issues of how much an unrestricted entity influences another are NOT issues of computational complexity. Nor is the question of whether entities within the influence distance have full influence, or partial influence based on how much of their "surfaces" overlap (zero at the influence distance, rising to a maximum when coincident, and linear in between; the author's current favorite), or influence given by some other function of the distance between centers. Complicated transfer functions may slow simulation speed, but they do not add to the complexity.
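The author's current favorite transfer function (maximal when coincident, zero at the influence distance, linear in between) can be written in one line; the function name is illustrative:

```python
# Linear overlap transfer function: full influence when centers
# coincide, falling linearly to zero at the influence distance.
def influence_strength(center_distance: float, influence_distance: float) -> float:
    if center_distance >= influence_distance:
        return 0.0
    return 1.0 - center_distance / influence_distance
```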
- If the basic entities act on each other with no limitations, so that an entity will have a fixed influence on all the entities around it, no matter how many there are, and an entity will receive an unlimited amount of influence from however many entities are close to it, computation is easy.
- If an entity can receive only so much influence, then a scan at the end of a simulation cycle can limit the amount of influence received in calculating the next position, travel direction, and axis direction.
- If an entity can only send so much influence, independent of whether that influence is received, then we need to keep track of how much influence is "offered" and then go back and normalize that influence before applying it.
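The third case, a sender-side limit, could be handled with a normalization pass like the following sketch; the names and the per-sender budget are hypothetical:

```python
# Sender-side limit: if an entity "offered" more influence during a
# pass than its budget allows, scale every contribution it made by
# budget / offered before the influence is applied.
def normalize_offers(offers, budget: float):
    """offers: list of (receiver_id, amount) from one sender, one pass."""
    total = sum(amount for _, amount in offers)
    if total <= budget:
        return offers  # under the limit: apply as offered
    scale = budget / total
    return [(receiver, amount * scale) for receiver, amount in offers]
```

This keeps the computation at one extra linear pass per sender; it is the "unused influence goes elsewhere" variant, discussed next, that threatens to blow up the complexity.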
If effectively unused influence is available to influence other entities, the computational complexity goes WAY up, probably to more than the number of entities squared.
For example, if two entities approach a third from opposite angles, so that the third, the one in the middle, undergoes no change, do those approaching change direction just as much as if they had changed the direction of the middle one? True, the approaching entities may see each other too, unless they are separated by more than the influence distance. It is possible they each graze opposite regions of the middle entity.
If a free entity intervenes closely in a coil, the various entities in the coil will have a balanced effect on the free entity more or less in the longitudinal direction. The mnp Model used to have non-transitive influences (Traction was the early origin of gravity). Now the Axis Alignment effects ARE transitive - if an entity can hardly budge from a loop, all the entities in the loop are budged a tiny bit.
The harder the calculations need to be, the less likely philosophers are to conclude that the universe is just a simulation.
More than just basic entity interaction must be calculated, and more complicated situations may shed light on the functions needed for entities to behave consistently with our universe. The coils in an electron are small, since the loops making up the coils are all the same type, so the number of m filaments that can participate in the coiled charge structure of the electron is small. The mathematics of why modest amounts of additional m filaments are possible in larger shells, or in shells with more "twist," will be interesting. Tuning may well be an interesting possibility here. Of course, the mnp Model is tuning itself to model the universe we know. Following are some "notes to self" that may make little sense to the most casual observer. Sorry.
Separation would be simpler if it IS transitive. "Safest" for our concepts is if Separation does not lead to increasing the speed of an entity. That said, some tales of creation suggest the earliest expansion involved increased velocity due to the Separation effect. So Separation might better be treated as redirection where possible, or even as lateral displacement, even if the net speed exceeds c a little. As for future speculations on "what if the speeds vary a little?": varying speeds could only be part of fields, never part of matter or photons, and so seem too scary to contemplate at present.
Do we need low cunning to limit the amount of influence sent? What about breaking the region cubes into half-radius subcubes, calculating the influence of an entity on each of the 16 cubes in 3 directions, summing up how much would be taken (is it the same?), and then TAKING influence from the 16 cubes based on how much total could be taken? It seems like cheating to apply influence to a hypothetical region and then receive influence from that hypothetical nexus of influence, but again, whatever works.
Whether photons seem to change direction more easily than change polarity might lead to insight into how Axis Alignment operates for the basic entities that make up photons (called m's in the mnp Model.) Does the redirection depend on where the influencing entity overlaps the entity in the photon?
Gravity will have nothing to do directly with the polarity of a photon, but how it affects the electro-magnetic fields that influence the photon will need to match the known physics.
Sideways Axis Alignment - is it computationally harder to split Axis Alignment into a circumferential component around the line of travel and treat it as “easier” than to just go for Axis Alignment however that pans out? Details may be hard to work out.
Calculating the smoothing effect of incoming gravitons on local events such as rotating systems will be interesting, as will proving (or not) the attenuation of gravity waves as incoming and outgoing gravitons interact.
Earlier Writings on Computations for the mnp Model
The main mnp document uses a few terms the author has been avoiding in the blogs: the basic entities are figments, and the energy/propagator part of a photon is called a fhoton. The earlier writings are included here, verbatim.

The radius of influence could be a Planckian measure, half that, or something smaller. Early calculations are likely to be "dimensionless." Rings are smaller than or similar to Planckian distances; the radius of influence will be half or less of the ring diameter, so that entities are not influenced by those on the opposite side of the ring.
Choosing a model for influence between figments will eventually be important, but for some early computations it may not be. To establish that "random attraction" as seen by a moving figment was not adequate to maintain velocity, three different models were tried: constant influence within the range of influence, linear dropoff with distance within the range, and squared dropoff within the range. All models produced similar results.
If "sphere surfaces" are pictured as the influence, the amount of surface on a sphere above a latitude is linear in the cosine of that latitude. So if two same-radius spheres intersect, the amount of surface "inside" the other sphere is proportional to the distance between the centers: (2r - z)/2r. So interactions could be linear in the local distance (or squared in the local distance).
Thoughts on Limits and Stable Sizes
Coils and filaments should suffice as an explanation for the electron's long lifetime and quantized size.
Fhotons as Gravitons
For light to transmit "gravity," the Proximity effect needs to operate only on figments seen by the attracting figment, and perhaps only on those moving toward the attracting figment. The integral of gravitational effects from light directing figments back along its path of travel might exceed the energy of the fhoton.
Computation of Heavy Matters
Attraction. We should be able to ignore gravity for the early "what's stable" calculations, and later look at gravity as a local phenomenon, aka Proximity. The attraction of all figments based on being close can be done by sprinkling some number of (traveling, as always) figments in a region and seeing how the figments move. Principles: figments move at constant speed. If two figments are "attracted," their directions of travel are turned slightly toward each other. "Attraction" is short range; computationally it can be a "yes or no" random choice or a random range of responses. We should see a drift of figment directions to align with the concentrations of particles, a drift of figments toward the axis of large concentrations, and large concentrations drifting slightly toward other concentrations. We might need to set lots of figments in rings so they don't go too far. :-)
The math for forces, momentum, and angular momentum at relativistic speeds may be relevant eventually, though it is not needed for early investigation of stability.
Ruminations on Cosmological Calculations of Gravity
Calculations of the effects of gravity in space become extremely difficult in the mnp Model. Free n and p figments attract and affect “local” objects, light, and all figments, but do not travel inter galactic distances until they are organized. A blast of light from a dying star would make the star appear heavy, since the sent out fhotons will direct loose figments back toward the source.
Two consolations to human beings: 1) As my wife says, “The universe will still be there.” 2) In case of difficulty with the universe slowing down or expanding infinitely or contracting or being swallowed by a black hole, see (1).
**** End of Earlier Writings ****
Digression on Storage Issues
A storage medium may need a balance of 1's and 0's (or a charge balance, or a spin balance) for stability. Can a code ensure a balanced number of 1's and 0's? I'm sure it can, compressed or not; compression probably enhances balance. When a region is encoded, if bits are counted, the region can be preceded by a number representing the number of bits to be inverted at either the beginning or the end, so that the counts of 1 and 0 bits match (within ±1, and allowing for the coding of the leading number). So from a macro, external view, there is no information there and no net charge. Rearrangement suffices, though efficient rearrangement requires external storage or memory.
Philosophical Thoughts About Simulation
Is the universe a simulation? Some physicists turned philosophers have asked that question. If the Heisenberg Uncertainty Principle is found to be causative (as suggested by Feynman as the reason electrons avoid the nucleus) rather than descriptive (as suggested by the quantum mechanical derivation of h by Griffiths and surely others), then the probability of simulation goes up. If the calculations to simulate the mnp Model need complicated or co-dependent functions, the probability of the universe being a simulation goes down. So is the author rooting for complexity? We will see.

The strong form of the Uncertainty Statement: the Heisenberg Uncertainty Principle is only CAUSATIVE, as in Feynman's electron exclusion argument, if we ARE being simulated.
LoL or Droll?
Those who wish to integrate gravity and quantum mechanics might need to think outside the well-understood and successful box that physics is in. Off the grid? Weird? Unheard of? The "Unthought." Gradual evolution toward a unified understanding of physics seems unlikely.

Wags might suggest the author is doing no better at emulating Donald Knuth than he is at emulating Michael Faraday. Oh well.
To Affinity and Beyond!