Reasonable Effectiveness of Mathematics – part 2

https://thevyasa.in/2021/06/reasonable-effectiveness-of-mathematics/

MATHEMATICAL PHYSICS

Because of its logical consistency, mathematics is always deterministic. Look at the structure of any equation. The left-hand side represents the initial conditions or parameters. The equality sign describes the special conditions to be met before any interaction can start, whether at the macro or the micro level. Given the initial conditions, the right-hand side describes the theorized outcome of the interaction. We are free to vary the parameters of the left-hand side; that is our free will (though our choices, or degrees of freedom, may be variously limited). Once the initial parameters are set (mathematics cannot predict this), the right-hand side – the final outcome – varies correspondingly. This predetermined outcome is mathematics. The equality sign's special conditions (like the temperature threshold needed to start a chemical reaction) are also predetermined. But they are not defined in a logically consistent way (why that particular temperature?) – hence they are not mathematical.
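
As a rough illustration of this reading of an equation, here is a minimal sketch in Python (Newton's second law is my choice of example; the post names no specific equation):

    # Minimal sketch: once the parameters (the post's "left-hand side")
    # are freely chosen, the outcome is fully determined in advance.
    # Newton's second law, F = m*a, is an assumed example.

    def outcome(mass_kg, acceleration_ms2):
        # The "predetermined" outcome of the interaction.
        return mass_kg * acceleration_ms2

    # Our "free will": pick any parameters we like ...
    for m, a in [(2.0, 3.0), (5.0, 1.5), (0.5, 9.8)]:
        # ... but the outcome of each choice is fixed by the equation.
        print(f"m={m} kg, a={a} m/s^2 -> F={outcome(m, a)} N")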

Some say that mathematics, because of its inbuilt logic, writes itself: one can start writing things down without knowing exactly what they are, and the language makes suggestions on how to proceed. This is the ergodic monkey phenomenon, where a monkey hitting the keyboard at random eventually produces a masterpiece of a novel. Though this is theoretically possible as a matter of chance, it does not happen in reality. Others say: master enough of the basics, and you rapidly enter what sports players call 'the zone'; suddenly it gets much easier, and you are propelled along. This is the hundredth-monkey phenomenon associated with Sheldrake – the notion that new skills are learned with increasing ease as greater portions of a population acquire them. There is no proof to justify this view beyond chance and functional ease due to repeated use.
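
The odds against the typing monkey are easy to quantify. A minimal sketch (the six-letter target phrase and the 26-key keyboard are my assumptions, purely for illustration):

    # With a 26-key keyboard and independent random key presses, the
    # chance of typing a given n-letter phrase in one attempt is (1/26)**n.
    phrase = "hamlet"                        # assumed 6-letter target
    p = (1 / 26) ** len(phrase)              # probability per attempt
    print(f"p = {p:.3e}")                    # ~3.24e-09 for 6 letters
    print(f"expected attempts = {1/p:.3e}")  # ~3.09e+08
    # For a novel-length text the exponent grows with every letter,
    # which is why the outcome never occurs in practice.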

Wigner says [1] that applied mathematics is not so much the master of the function: it merely serves as a tool. Others say that, using mathematics, we can build abstract models without the restrictions imposed by the physical world. This leads to the incompleteness issues, which exploit problems arising out of such unnatural mathematics. We see something when the radiation emitted by it interacts with our eyes. We touch the mass that radiates the light. Thus, we do not touch what we see (radiation), and we do not see what we touch (mass). Nature prohibits reductionism: the whole is the sum of its parts and more. Water is more than 2H and O. A triangle is more than three straight lines. This is natural number theory: 5 has an independent perceptual value, distinct from five ones. If we can purchase a car for €5k, then with €1k we should be able to purchase 1/5 of a car. This may look mathematically valid, but 1/5 of a car is an undecidable proposition. Hilbert's problem of whether mathematics is complete (whether every statement in the language of number theory can be either proved or disproved) and Gödel's negative solution arise out of such unnatural mathematics. The brute-force approach is similarly unnatural, though it may sometimes succeed by chance.
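
The car example can be made concrete. A minimal sketch (the prices and the integer/real contrast are my framing of the post's point):

    # Continuous quantities divide freely; indivisible units do not.
    price, budget = 5000, 1000    # assumed figures in euros

    print(budget / price)         # 0.2 -- arithmetically valid
    print(budget // price)        # 0   -- whole cars purchasable
    # The real-number answer "1/5 of a car" has no referent: a fifth
    # of a car is not a car, so the proposition is undecidable in the
    # sense used above, even though the arithmetic is consistent.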

Wigner is right when he talks about [1] the succession of layers of laws of nature, each layer containing more general and more encompassing laws than the previous one, its discovery constituting a deeper penetration into the structure of the universe than the layers recognized before. This is the principle that the macrocosm and the microcosm replicate each other. As the Minutes of the American Mathematical Society for October 2005 reported, the theory of dynamical systems used to plan trajectories of spacecraft and that of transition states of chemical reactions share the same mathematics. Wigner is also right that all these laws of nature contain only a small part of our knowledge of the inanimate world. But he misses the point when he says that all the laws of nature are conditional statements which permit a prediction of some future events on the basis of knowledge of the present, except that some aspects of the present state of the world are irrelevant from the point of view of the prediction. In fact, that aspect is most relevant, as the probabilistic laws of Nature show. The conditional statements show the interdependence of all systems in the cosmos. Our sense organs and measuring devices have limited capacity, so they measure limited aspects over limited intervals. Since time evolution is not uniform, but conditional on interactions, we do not see each step from the flapping of a butterfly's wings until it turns into a tempest elsewhere. The creation is highly ordered, and there is no randomness or chaos. We fault Nature to hide our inability to know.
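
Sensitive dependence of this kind is easy to demonstrate numerically. A minimal sketch using the logistic map (my choice of system; the post names none):

    # Two trajectories of the logistic map x -> r*x*(1-x) that start
    # almost together diverge completely: each step is fully determined,
    # yet we cannot follow the chain by coarse observation alone.
    r = 4.0                          # assumed parameter (chaotic regime)
    x, y = 0.400000, 0.400001        # initial conditions differ by 1e-6

    for step in range(1, 41):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x-y):.6f}")
    # The gap grows from 1e-6 to order one: nothing here is random, but
    # the intermediate steps are invisible to limited measuring devices.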
Wigner says: The physicist is interested in discovering the laws of inanimate nature…. It is, as Schrödinger has remarked, a miracle that in spite of the baffling complexity of the world, certain regularities in the events could be discovered. In an earlier paper [3], we have shown that uncertainty is not a law of Nature. It is the result of natural laws relating to measurement and causality that reveal a kind of granularity at certain levels of existence. The uncertainty relation of Heisenberg was reformulated in terms of standard deviations, where the focus was exclusively on the indeterminacy of predictions, whereas the unavoidable disturbance in the measurement process was ignored. A formulation of the error-disturbance uncertainty relation that takes the perturbation into account is essential for a deeper understanding of the uncertainty principle. By directly measuring errors and disturbances in the observation of spin components, Ozawa developed the formulation: ε(q)η(p) + σ(q)η(p) + σ(p)ε(q) ≥ ℏ/2. Ozawa's inequality suggests that suppression of fluctuations is not the only way to reduce error; it can also be achieved by allowing a system to have larger fluctuations. Nature Physics (doi:10.1038/nphys2194) describes a neutron-optical experiment that records the error of a spin-component measurement as well as the disturbance that measurement causes on another spin component. The results confirm that both error and disturbance obey the new relation but violate the old one over a wide range of experimental parameters. Even when either the error or the disturbance is held to nearly zero, the other remains finite: the two are thus mutually exclusive.
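
A quick numeric check makes the contrast between the two relations visible. A minimal sketch (the numbers are hypothetical, chosen only to show how one bound can fail while the other holds):

    # Compare the naive error-disturbance bound with Ozawa's relation,
    # in units where hbar = 1. All values below are hypothetical.
    hbar = 1.0
    eps_q, eta_p = 0.10, 0.10    # assumed error and disturbance
    sig_q, sig_p = 3.0, 3.0      # assumed standard deviations

    heisenberg = eps_q * eta_p                              # 0.01
    ozawa = eps_q * eta_p + sig_q * eta_p + sig_p * eps_q   # 0.61

    print(heisenberg >= hbar / 2)   # False: the old relation is violated
    print(ozawa >= hbar / 2)        # True:  the new relation still holds
    # Large fluctuations (sig_q, sig_p) keep Ozawa's bound satisfied even
    # when the error-disturbance product alone drops below hbar/2.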

Light Cone is a mathematical model

Consider the light cone, which is said to encode the causal structure of Spacetime. Each event in Spacetime has a double cone attached to it, with the vertex corresponding to the event itself. Time runs vertically: the upward cone opens to the future of the event, while the downward cone shows its past. But if a light pulse radiates in all directions, it should trace concentric spheres, not a double cone. The trick is done by first taking only two spatial dimensions and treating time as the third dimension. But even then the pulse traces concentric circles, not a conic section. The event horizon is the limit of our vision.
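
What the cone encodes can be stated directly as arithmetic on coordinates. A minimal sketch in units where c = 1 (the sample events are my own, purely illustrative):

    # Classify the separation of two events by the Minkowski interval
    # s^2 = -(dt)^2 + dx^2 + dy^2 + dz^2 (signature -+++, units c = 1).
    def classify(e1, e2):
        dt, dx, dy, dz = (b - a for a, b in zip(e1, e2))
        s2 = -dt**2 + dx**2 + dy**2 + dz**2
        if s2 < 0:
            return "timelike (inside the cone: causally connectable)"
        if s2 > 0:
            return "spacelike (outside the cone: no causal contact)"
        return "lightlike (on the cone itself)"

    origin = (0, 0, 0, 0)                  # events given as (t, x, y, z)
    print(classify(origin, (2, 1, 0, 0)))  # timelike
    print(classify(origin, (1, 2, 0, 0)))  # spacelike
    print(classify(origin, (1, 1, 0, 0)))  # lightlike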

Time is not only cyclic but also unidirectional, because 'now' is linked to the future in a different way than it is linked to the past. Space, Time and coordinates arise from our concept of sequence and interval. When the interval relates to objects, we call it space. When it relates to events, we call it time. When we describe the inter-relationship of objects, we describe the interval by coordinates. Present and future are segments of these sequences of intervals that are strictly ordered: the future always follows the present. The same is not true for the past, because any past event can be linked to the present bypassing the specific sequence. This shows that time is unidirectional. Since the intervals are infinite, space and time form an infinite continuum. We use segments of this analog reality. Thus, our description relates to the here-and-now over a medium scale, which we tend to universalize. This gives a distorted picture.
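
The asymmetry between the two directions can be mirrored in a data structure. A minimal sketch (the event log is my own analogy, not the post's):

    # "Future" events can only be appended in strict order, while any
    # "past" event is reachable directly, out of sequence -- an analogy
    # for the asymmetry described above, not a physical model.
    past = ["e1", "e2", "e3", "e4"]   # assumed recorded events

    past.append("e5")                 # the future: strictly one step at a time
    print(past[1])                    # the past: "e2" reached directly,
                                      # bypassing e4 and e3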

https://thevyasa.in/2021/06/reasonable-effectiveness-of-mathematics-part-3/