Author Archives: EmpiricalWarrior

The Elephant In The Room

Coincident with the previous post here defending Pilot Wave Theory, a pair of articles appeared defending the standard interpretation of Quantum Mechanics. Both are dismissive of PWT in a way that underscores the basic incoherence of the modern, math-first approach to doing physics. The first of these articles is a typical screed from Ethan Siegel in which he marshals a lot of known scientific facts and then assembles them into a circular argument that assumes the “standard model” to be correct, thus “proving” once again that the “standard model” as described by Siegel and acclaimed by all right-thinking scientists is the One True Model that all must revere. I’m not going to waste time picking apart this particular circular argument. It is the arguments against PWT that are the subject of this post.

The other article, by Philip Ball, appeared in Quanta Magazine. In 2018 Ball published an even-handed account of quantum physics, Beyond Weird. This current work is primarily a discussion of the theory of decoherence developed by the physicist Wojciech Zurek. Decoherence is an attempt to explain how a particle evolves from the indeterminate “superposition of states” condition described by the standard interpretation to the “deterministic” state of everyday reality and standard physics, where particles are not smeared out but always locally distinct. As with the Siegel article, I’m only going to focus on the arguments used to dismiss Pilot Wave Theory from consideration as an alternative to the preferred dogma.

Both authors take the standard interpretation of quantum physics as a given. That standard interpretation can be characterized as an offshoot of Niels Bohr’s claim that the wavefunction represented all that could be known about the quantum state – it was fundamentally indeterminate and nothing could be known about the properties (location, spin, etc.) of a quantum particle prior to its observation (detection, measurement). To this was subsequently added the “explanation” that the indeterminacy was caused by the fact that the particle was not in any particular location prior to detection but was, in fact, smeared out in a “superposition of states”.

It was further stipulated that the smeared-out state could not be observed because the very act of observing, or detecting, it would cause a mathematical formalism, the wavefunction, to collapse instantaneously, ensuring that the particle would always be found at some particular location. That particular location, however, could not be predicted except as a probability by the wavefunction. It is necessary at this point to step back and consider what a breathtakingly absurd, not to mention utterly unscientific, account of physical reality that is.

Absurd the standard interpretation may be, but it is nonetheless an unquestioned premise of both Ball and Siegel in their respective articles. The reason it is so casually accepted is not merely because of what might be called the groupthink, dogma, and inertia that pervades the scientific academy. There is at root, the underlying premise of mathematicism – the scientifically baseless belief that mathematics somehow underlies and determines the nature of physical reality. Mathematicism is what allows otherwise rational people to believe that a particle must be smeared out in a superposition of states – because some mathematical wavefunction does not describe where the particle is.

In distinct contrast to this widely accepted wavefunction-only account of quantum behavior there stands Pilot Wave Theory, which is not merely an alternative interpretation of the wavefunction-only story but constitutes a mathematically and physically distinct model of the quantum realm. In addition to the wavefunction, PWT also has a guiding equation which describes the interaction between a particle and a pilot wave, an interaction which produces the statistical outcomes described by the wavefunction. The statistical outcomes of PWT are the same as those of the standard interpretation because both share the same statistical formalism, the wavefunction.
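For reference, the guiding equation takes a simple form in the standard de Broglie-Bohm presentation (as given, for instance, in the Stanford Encyclopedia of Philosophy article on Bohmian Mechanics cited below): writing the wavefunction in polar form, the velocity of the k-th particle is

```latex
\frac{d\mathbf{Q}_k}{dt}
  = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)(\mathbf{Q}_1,\dots,\mathbf{Q}_N)
  = \frac{\nabla_k S}{m_k},
\qquad \psi = R\, e^{iS/\hbar}.
```

The particle positions Q are thus definite at all times; the wavefunction enters only through its phase gradient.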

This brings us to the objections raised against PWT by those who prefer the standard interpretation. Ethan Siegel has this to say on the matter:

Contained in these non-local hidden variable theories are the hopes of everyone who seeks to make deterministic sense of the quantum Universe, and the hope that somewhere, somehow, there’s a way to extract more information about reality and what outcomes will occur than standard, conventional quantum mechanics permits.

This is basically a strawman argument that seeks to characterize PWT as representing a “hope” to extract information or something about reality that the standard model does not permit. This deliberately sidesteps the fact that PWT produces exactly the same statistical results as the standard interpretation while providing a physically realistic description of the behavior of quantum particles as arising from a typical interaction of a particle with a wave.

For his part Philip Ball is a little more nuanced but his objection ultimately boils down to an aesthetic choice:

Others invoke the description postulated by Louis de Broglie and later developed by David Bohm, in which a particle does have well-defined properties, but it is steered by a mysterious “pilot” wave that produces the strange wavelike behavior of quantum objects, such as interference…

All this has always struck me as fanciful. Why not just see how far we can get with conventional quantum mechanics? If we can explain how a unique classical world arises out of quantum mechanics using just the formal, mathematical framework of the theory, we can dispense with both the unsatisfactory and artificial cut of Bohr’s “Copenhagen interpretation” and the arcane paraphernalia of the others.

Ball apparently finds the idea of a pilot wave “mysterious” and “arcane” while seeming rather sanguine about the empirically baseless idea that a particle can be in a “superposition of states”. An absurd metaphysical proposition like that is somehow more palatable than a “mysterious” but physically plausible one.

Like Siegel, Ball makes no reference to the fact that PWT is not simply a different interpretation of the wavefunction-only model, like Many-Worlds, but constitutes a separate and distinct qualitative and quantitative model of quantum reality, one that produces the exact same statistical outcomes as the standard, reality-challenged version preferred by both authors. PWT does this while providing a plausible physical mechanism that is consistent with the physics of larger scales.

What is most striking about these rather casual dismissals is the seeming preference for mathematical mysticism over the open-ended nature of the scientific endeavor which seeks to understand the physical cause of observations that might at first seem inconsistent and mysterious. Modern theoretical physicists prefer to explain “mysterious” observations with imaginary things that cannot be observed, measured, or detected, rather than do the hard work of investigating the physical cause of those observations that only seem, at first encounter, to be mysterious.

Which brings us to the elephant in the room. The elephant in the room is indicated by a straightforward scientific question: does the pilot wave of PWT refer to something physical as well as mathematical? A reasonable answer to that question is that there is a good deal of theoretical and observational evidence indicating that the pilot wave is a real physical entity that arises at the interface between a charged particle and the Ambient Electromagnetic Radiation that permeates the Cosmos. There is AER present in every laboratory in which quantum experiments are performed.

Modern Theoretical Physics pays no attention to the AER it swims in. MTP knows that the mass-energy content of an electron is equivalent to the energy of gamma radiation. MTP knows that a charged particle has a polarizing effect on nearby radiation. Despite this, MTP does not incorporate the AER present in a typical laboratory environment into its analysis of the quantum behaviors it observes. Instead we get metaphysical prattling about superposition of states and wave-particle duality.

MTP knows all about the individual frequencies and the linear rays of light by which we observe distant cosmological objects. The totality of that omnidirectionally emitted radiation, however, makes no appearance in the standard model of cosmology. In its place MTP has substituted the scientifically inert concept of a substantival spacetime. The resulting Big Bang model bears no resemblance to empirical reality.

An elucidation then, of the Pilot Wave Theory beyond its mathematical formalisms to a robust qualitative description of quantum scale behavior points directly at the AER as a causally interacting physical component of physical reality — and that analysis holds on all scales. The pilot wave of the theory is produced, in this conception, by the interaction of a charged particle (electron) with the AER of the laboratory. This qualitative description is made visually compelling by the work on Pilot Wave Hydrodynamics done at MIT by John Bush and his colleagues.

All of these considerations lead inexorably back to a question raised forty years ago by John S. Bell:

Why is the pilot wave picture ignored in text books? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show us that vagueness, subjectivity, and indeterminism, are not forced on us by experimental facts, but by deliberate theoretical choice?

—— Quoted in the Stanford Encyclopedia of Philosophy article Bohmian Mechanics.

A Brief Defense of Pilot Wave Theory

(Note: parts of this post appear in the comment section of the article being discussed. In that article the author uses the term Bohmian mechanics or just Bohm to refer to Pilot Wave Theory which is a more illustrative name for what is also sometimes called de Broglie-Bohm Pilot Wave Theory. I opt for Pilot Wave Theory (PWT) throughout this post. No matter the name it is the same theory that is being discussed.)

This recent article (Bohmian mechanics has a big problem) published by Tim Andersen, Ph.D. at The Infinite Universe presents an excellent illustration of a simple fact:

Math + Philosophy ≠ Physics

What is on display in Andersen’s article is a painfully contorted effort to dismiss Pilot Wave Theory on mathematical and philosophical grounds. But PWT is at root a phenomenological assertion about the physical nature of quantum systems. It is a physical description with ontologically relevant math. That math may have some shortcomings, though they are hardly as damning as the author would have you believe, but the physical description it provides stands head and shoulders above the metaphysical nonsense generated by the standard Wavefunction-Only interpretation.

The author also displays considerable confusion about the nature of PWT, saying correctly at one point with regard to the status of a particle in the theory, “The particle itself is what we measure, and it is always there, whether we measure it or not.” A few paragraphs later, in the context of discussing locality, he describes the status of the particle quite differently, saying, “This is the price you pay for giving up locality. Because particles are not localized, they cannot be treated as free agents.” But then again we are told, “Bohm’s mechanics accepted nonlocality as the price to have definite particles (realism).”

The only thing that can be pointed out here is that in PWT particles always have a definite “local” position while in the standard WO interpretation particles are not localized, but smeared out. Andersen, for some reason, appears to be confused about this distinction which suggests his critique here is not based on a clear understanding of the Pilot Wave Theory.

The author also asserts that PWT “gives up” locality. It does not. What PWT does is add a non-local component explicitly to the model with its invocation of a pilot wave. It is the pilot (or guiding) wave that is, like all waves, a non-local phenomenon. The pilot wave is also distinct from the Schrodinger wavefunction, which is another point that Andersen seems confused about:

Bohm’s theory has a downside in that the wavefunction can guide as many particles as you like, and it does so instantaneously, allowing one particle to influence another also instantaneously.

On the subject of non-locality, the author admits to being philosophically indisposed to the idea. This despite the clear evidence that non-local effects are part of the phenomenological world. The problem seems to be an inclination to see it as a matter of having to choose between mutually exclusive possibilities — either the world is local or it is non-local. That is, however, a false dichotomy.

Physical reality has both local (particle) and non-local (electromagnetic wave) components. Another way to say this is, particles are 3-dimensional (local) and electromagnetic waves are 4-dimensional (non-local) phenomena. That is typical of the complementary dualities often found in nature; it is a fundamental aspect of physical reality that the common reductionist viewpoint tends to obscure in its search for the One. It’s not a case of either/or; it’s both. Complementary dualities are the engines of physical reality.

In PWT the statistical outcomes of quantum experiments are attributed to the interaction of particles with waves. This is consistent with the rest of physical reality where waves and particles can be observed as distinct, interacting phenomena.

In contrast, the standard Wavefunction-Only interpretation of QM attributes the outcome of quantum experiments to a superposition of states, wherein a particle is said to be “smeared out” with no definite location until it is observed, at which point it is always found at a specific, though only statistically predicted, position. This miraculous turn of events, from smeared out to definite location, is attributed to a “collapse of the wavefunction”. That miraculous and inexplicable and unobservable collapse is then said to present a “measurement problem”. It could also be thought of as a credibility problem.
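The statistics both accounts agree on are easy to illustrate. The sketch below is a generic simulation, not either interpretation's mechanism: detection positions are drawn from a |ψ|² distribution (an idealized cos² two-slit fringe pattern under a Gaussian envelope, standing in for a real apparatus) by rejection sampling, and each individual "detection" lands at one definite point even though the pattern only emerges statistically.

```python
import math
import random

random.seed(1)

def intensity(x, fringe=1.0):
    """|psi|^2 for an idealized two-slit pattern: cos^2 fringes
    under a Gaussian envelope (arbitrary units, max value 1)."""
    return (math.cos(math.pi * x / fringe) ** 2) * math.exp(-x * x / 8.0)

def sample_positions(n):
    """Draw n detection positions from |psi|^2 by rejection sampling."""
    out = []
    while len(out) < n:
        x = random.uniform(-6.0, 6.0)
        if random.uniform(0.0, 1.0) < intensity(x):
            out.append(x)
    return out

hits = sample_positions(20000)

# Detections cluster at the bright fringes (near integer x) and avoid
# the dark fringes (near half-integer x), even though every single
# detection is at one definite location.
near_bright = sum(1 for x in hits if abs(x - round(x)) < 0.1)
near_dark = sum(1 for x in hits if abs(x - round(x)) > 0.4)
```

Running this, `near_bright` exceeds `near_dark` by a wide margin: the fringe pattern is carried entirely by the distribution, never by any one event.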

So this strained, empirically baseless account of quantum behavior requires a non-local event, the “wavefunction collapse” which happens instantly at the moment the particle is detected. The author criticizes PWT for its explicit non-locality while failing to note that non-locality is also prominent in the standard WO interpretation. The reason non-locality is unavoidable is that non-local phenomena (waves) are as much a part of physical reality as local phenomena (particles). The standard interpretation does not acknowledge that duality and consequently it has a problem when non-locality makes its unwelcome appearance in a model that only recognizes locality.

PWT is consistent with the observed wave and particle nature of physical reality on all other scales, whereas the WO model produces a borderline irrational account of particles being spread out like waves only to become particles when you look at them.

Interestingly there are tabletop experiments conducted at MIT that produce directly observable quantum-like effects that are visually quite striking. The images produced in these clever experiments can be helpful to visualize and understand the mechanics of PWT: https://thales.mit.edu/bush/index.php/4801-2/

Throughout his diatribe Andersen teases a “Fatal Flaw” that will be revealed eventually, as if it were some thrilling conclusion to a schlock horror movie. When the momentous revelation arrives it doesn’t fail to disappoint. The Fatal Flaw requires taking this extravagant mathematicist-fantasyland conceit into consideration:

This is relativity, but not in ordinary 4D spacetime like Einstein’s relativity. It is relativity in an infinite-dimensional space called a Hilbert space. Hilbert space is where the wavefunction actually lives and moves.

Apparently the Fatal Flaw is that the PWT math is not mathematically equivalent to Heisenberg’s math, which is equivalent to Schrodinger’s math, somewhere over there in Hilbert space (in all its infinite glory). Unfortunately for this FF theory, PWT has an additional guiding equation which describes a particle’s interaction with a pilot wave.

So the criticism that PWT isn’t equivalent mathematically to the Heisenberg formalism that is equivalent to Schrodinger’s math is a fatuous mathematicist argument that has nothing to do with physics. Not being equivalent to S&H is a feature not a bug.

PWT does what Schrodinger’s and Heisenberg’s math doesn’t do: describe a physical system in physical and mathematical terms that are consistent with empirical reality. Implying, as Andersen does, that a plausible, coherent, physical and mathematical model like PWT should be set aside because it doesn’t fit well with some arbitrary mathematicist conventions does not constitute a reasonable scientific argument.

PWT is not equivalent to those models – because it is more mathematically complete than either of them. What makes it more complete is the fact that it actually describes the physical interaction that produces the observed results — WO does not do that.

PWT provides a plausible physical account of a quantum physical process, with mathematics, that produces the observed outcomes of this type of quantum experiment while neither Heisenberg nor Schrodinger describe a process that is coherent or rational in scientific terms. All you get is statistics and absurd metaphysical handwaving. It is the Wavefunction-Only version of Quantum Mechanics that has a big problem: the model doesn’t make physical sense — it’s just some math. It gets the right answers on the test but it doesn’t know why the answers are right.

The Twins Paradox & The Immaculate Acceleration

Discussions around the twins paradox showcase the way Modern Theoretical Physics discounts actual physics and logical consistency in favor of mathematical convenience. The twins paradox is a thought experiment in which one twin remains on Earth while their sibling travels at some significant fraction of the speed of light to a distant star and then returns home at a similar speed. The result is not in dispute: the traveling twin will have aged significantly less than the twin who remained on Earth due to the time dilation effect described by Relativity Theory.

The problem arises when someone attempts to explain the age discrepancy in terms of Special Relativity rather than General Relativity. In SR, which applies only under inertial conditions, that is, in the absence of gravity or acceleration, there is an apparent but not physically real time dilation. For example, two travelers in interstellar space moving at some constant velocity with respect to each other can each consider themself to be at rest and the other traveler to be in motion, and each will observe the other’s clock to be running slower than their own. In fact neither clock is running slow and the time dilation is an illusion induced by the relative motion. This apparent time dilation is described by a mathematical equation known as the Lorentz transform.
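The symmetry described here is easy to make concrete. A minimal sketch, with velocities in units of c:

```python
import math

def gamma(v):
    """Lorentz factor for relative speed v (in units of c)."""
    return 1.0 / math.sqrt(1.0 - v * v)

# Two inertial travelers with relative speed 0.8c: each computes the
# same slowdown factor for the other's clock rate.
v = 0.8
rate_A_sees_B = 1.0 / gamma(v)  # B's clock rate as judged by A
rate_B_sees_A = 1.0 / gamma(v)  # A's clock rate as judged by B
# Both factors equal 0.6 -- the transform itself singles out neither observer.
```

The reciprocity is exact: nothing in the Lorentz transform alone distinguishes one inertial traveler from the other.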

In cases where gravity or acceleration are involved, General Relativity is the only proper framework for understanding the time dilation situation. This is because under GR conditions time dilation is not merely an illusion owing to a constant relative velocity. An observer at the surface of the Earth will see a clock in orbit running faster than their own, while an astronaut in orbit will observe all clocks on Earth to be running slower than their own clock.

Unlike under SR conditions where each observer sees the other’s clock as running slower though neither is actually running slow, in a gravitational field all observers will agree that clocks lower in the field (where gravity is stronger) are running slower than clocks higher up (where gravity is weaker). This is gravitational time dilation. It is an observed effect that is properly accounted for by GR.
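The sign and rough size of this agreed-upon effect can be checked with the weak-field approximation. The sketch below is illustrative only: the orbital radius is an assumed GPS-like value, the rotation of the surface clock is neglected, and the orbital speed is taken from the circular-orbit relation.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
c = 2.998e8     # speed of light, m/s
R = 6.371e6     # Earth's mean surface radius, m
r = 2.656e7     # assumed GPS-like orbital radius, m

# Weak-field gravitational rate difference: the higher clock runs
# faster by roughly (GM/c^2)(1/R - 1/r) per unit time.
grav_shift = (G * M / c**2) * (1.0 / R - 1.0 / r)

# Velocity (SR) effect for a circular orbit at v = sqrt(GM/r):
# the orbiting clock runs slower by about v^2 / (2 c^2).
v_squared = G * M / r
vel_shift = v_squared / (2.0 * c**2)

# Net effect, in seconds gained by the orbiting clock per day.
net_per_day = (grav_shift - vel_shift) * 86400.0
```

With these inputs the orbiting clock gains on the order of tens of microseconds per day, and, unlike the symmetric SR case, both observers agree on which clock is gaining.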

There is a similar time dilation effect associated with an accelerated observer that is also described by GR. This is the case of the traveling twin and this is where the “paradox” arises. For some not clearly explicable reason some physicists like to claim that the Lorentz transform is sufficient to analyze the situation and it is not necessary to account for any acceleration (or deceleration).

A reasonable objection to this would be that if the Lorentz transform is relevant then it applies to both the traveler and stay-at-home twin and consequently there should be no age difference when the traveler returns. Each twin should have perceived the other to be aging more slowly but in fact they aged at the same rate throughout the time of the voyage.

The retort to this objection is that the traveler changes reference frames by turning around and that justifies applying the Lorentz transform only to the traveler, therefore effecting a net aging differential. It is also claimed that the periods of acceleration are irrelevant to the time dilation effect and can be ignored. The problem with this is that it is an inadequate, you could even say shoddy, analysis of the physics of any such voyage.

The following discussion of a plausible interstellar trip is based on the section of this Wiki account labeled Specific example.

In the specific example considered a 1g acceleration for 9 months is mentioned but quickly dismissed for the sake of mathematical convenience:

To make the numbers easy…This can also be modelled by assuming that the ship is already in motion at the beginning of the experiment and that the return event is modelled by a Dirac delta distribution acceleration.

Do not fail to check out the Dirac delta function AKA the Immaculate Acceleration. There is no better illustration of the inanity of the mathematicist approach to doing physics. What you wind up with is an analysis that is completely unrelated to a realistic account of any plausible interstellar voyage. The resulting claim, that the returning twin will have aged only 6 years while their sibling will have aged 10, is simply wrong. The math is just the math but the given result is not possible because the assumptions of mathematical convenience render the analysis physically impossible which means the result is physically meaningless.

To carry out such a trip the traveler must first accelerate away from the Earth for a sufficient period so as to achieve a velocity that approaches light speed. A 1g acceleration (equivalent initially to the gravitational effect at the surface of the Earth) would require 9 months to achieve a velocity of 8/10ths the speed of light. An equivalent deceleration would be required to arrive at the destination, and another acceleration/deceleration pair would be required on the return trip. If the distance to the remote star is 4 light years then the overall distance to be travelled is 8 light years in the Earth’s reference frame. In 9 months at an average velocity of .4c, the distance covered during each acceleration is .4c x .75y = .3 ly. There are 4 such acceleration/deceleration events over the entire trip, so 1.2 light years in total are travelled at an average of .4c. That leaves the remaining 6.8 light years to be travelled at .8c, which requires 8.5 years. Again, this is the Earth-bound perspective.

So the four acceleration intervals of 9 months sum to 3 years travel time, plus 8.5 years for the two .8c intervals, yielding a total trip time of 11.5 years from the Earth’s perspective. Using the inverse Lorentz factor .6 for the .8c interval, the Earth twin calculates the traveler’s local elapsed time to be 5.1 years for the 8.5 year interval and 2.7 years for the 3 year accelerated interval (using a .9 inverse Lorentz factor for the .4c average velocity during the acceleration intervals). The Earth twin then expects their sibling to have aged 5.1 + 2.7 = 7.8 years to their 11.5 years – a differential of 3.7 years. These results are markedly different from those presented in the Wiki example and confirm that it is simply wrong to ignore the acceleration intervals and treat the entire trip as an Immaculate Acceleration event with a constant velocity.
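The bookkeeping above is simple enough to check mechanically. The sketch below reproduces the post's figures from its own assumptions (four 9-month legs averaging .4c, cruise at .8c, an 8 light-year round trip, and a single average inverse Lorentz factor of .9 applied to each accelerated leg); it is an arithmetic check, not a full relativistic integration of the accelerated phases.

```python
import math

def inv_gamma(v):
    """Inverse Lorentz factor sqrt(1 - v^2), with v in units of c."""
    return math.sqrt(1.0 - v * v)

ACCEL_TIME = 0.75    # years per acceleration/deceleration leg (Earth frame)
ACCEL_V_AVG = 0.4    # assumed average speed during each leg, in c
CRUISE_V = 0.8       # cruise speed, in c
TOTAL_DIST = 8.0     # light-years, round trip in Earth's frame

accel_legs = 4
accel_earth_time = accel_legs * ACCEL_TIME           # 3.0 years
accel_dist = accel_legs * ACCEL_V_AVG * ACCEL_TIME   # 1.2 light-years
cruise_dist = TOTAL_DIST - accel_dist                # 6.8 light-years
cruise_earth_time = cruise_dist / CRUISE_V           # 8.5 years

earth_total = accel_earth_time + cruise_earth_time   # 11.5 years

# Traveler's elapsed time using the post's factors:
# exactly 0.6 at 0.8c, and ~0.9 averaged over the accelerated legs.
traveler_total = (cruise_earth_time * inv_gamma(CRUISE_V)
                  + accel_earth_time * 0.9)          # 5.1 + 2.7 = 7.8 years

differential = earth_total - traveler_total          # 3.7 years
```

The numbers come out exactly as stated in the text: 11.5 years for the Earth twin, 7.8 for the traveler, a 3.7 year differential.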

This error stems entirely from the desire “To make the numbers easy“, i.e. the preference for mathematical convenience over physical accuracy. The results speak for themselves. You cannot ignore the intervals of acceleration by pretending they take place instantaneously. It is rather absurd to expect that such an unphysical approach would work. But such is the state of Modern Theoretical Physics.

To this point the conceit that the Lorentz transform is appropriate for calculating the time dilation effect for the travelling twin has been granted. But is it actually appropriate? And if so, can it be deployed in a physically meaningful way? That is a larger topic and will be the subject of the next post.

The ALER And Quantum Theory (revised)

Note to Subscribers: This is a substantial modification of yesterday’s post. I’ve published it separately so that subscribers would receive a notification of the revision. I also substantially revised the preceding post on the ACER but directly modified the original which did not produce a notification to subscribers. I apologize for any confusion and will henceforth publish all substantive revisions to existing posts separately. Thank you for your patience.

The preceding post discussed the ACER (Ambient Cosmic Electromagnetic Radiation) and some of its implications. The ACER is present and detectable, at least in part, at the surface of the Earth. The Cosmic Microwave Background was discovered by a ground based antenna and in fact all pre-satellite-era cosmological observations were made at the surface of the Earth and involved the detection of various component frequencies of the ACER.

The situation is somewhat different in a closed room. Most of the ACER that does penetrate the atmosphere will not penetrate the walls and ceiling of the room. There will nonetheless be considerable ambient electromagnetic radiation some of it internally generated (artificial lighting, thermal radiation) and some measure of externally sourced penetrating radiation like radio frequencies. So in an enclosed room there will be an Ambient Local Electromagnetic Radiation (ALER) analogous to the ACER but having a somewhat different frequency distribution.

Some measure of ALER is present in every room humans might typically occupy, including rooms dedicated to scientific experiments. Consider the room you are in as you read this. Not only is it suffused with visible light, there is also thermal radiation, along with radio frequencies and possibly some stray high-energy UV, X-rays, and gamma rays.

We live our lives immersed in a sea of electromagnetic radiation whether sitting in a room or sitting out under a clear/cloudy, day/night sky. In the aggregate electromagnetic radiation provides the underlying 4-Dimensional framework of the Cosmos — at all scales.

Of particular interest with regard to Quantum Theory are the rooms where quantum experiments such as the double-slit are performed. Those rooms are also steeped in electromagnetic radiation, the ALER. 

In one version of the double-slit experiment electrons are slowly fired at the double slit screen and are subsequently detected at a second screen. As the electrons accumulate they form an interference pattern which is said to prove or at least demonstrate that electrons are both waves and particles. This would be a reasonable conclusion except this explanation takes no notice of the ALER through which the electron is traveling.
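For scale, the wavelength governing the pattern can be estimated from the de Broglie relation lambda = h/p. The beam energy and geometry below are illustrative assumptions (chosen to be in the non-relativistic regime), not the parameters of any particular experiment.

```python
import math

h = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31  # electron mass, kg
eV = 1.602e-19   # one electron-volt in joules

E = 600.0 * eV   # assumed beam energy (non-relativistic regime)
p = math.sqrt(2.0 * m_e * E)   # momentum from E = p^2 / 2m
wavelength = h / p             # de Broglie wavelength, m (~50 pm here)

# Far-field fringe spacing for slit separation d and screen distance L
# (small-angle approximation): dy = wavelength * L / d
d = 300e-9   # assumed slit separation, m
L = 1.0      # assumed screen distance, m
fringe_spacing = wavelength * L / d   # on the order of 0.1 mm
```

The point of the estimate is only that the fringes, though produced by a picometre-scale wavelength, are spaced widely enough to be resolved by an ordinary detector.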

An electron affects an electromagnetic field and is in turn affected by the field. This is not news; it is well established physics. Standard quantum theory pays no attention to the presence of the ALER and its interaction with the free electrons that are the subject of the experiment.

The ALER is a chaotic field, streaming from all directions and comprised of many frequencies and yet the theoretical explanation of the double-slit experiment takes no account of the inescapable fact of the ALER’s presence and its unavoidable interaction with the electrons. Instead we get a feckless story about a wave-particle duality state that cannot be verified by observation, only believed in.

Such is the state of Quantum Theory — belief is obligatory because reality is irrelevant. In this it is of a piece with the rest of Modern Theoretical Physics. It presents an absurd, incoherent and at root irrational account of physical reality, one that bears almost no resemblance to the physical reality our lying eyes and instruments actually reveal to us. As with Ptolemaic cosmology, the math can be said to “work” but the physics is simply wrong.

Surprisingly, this inability to grapple with the nature of physical reality is a matter of choice. An alternate mathematical treatment of quantum mechanics has been known for more than 70 years. Bohmian mechanics models the double-slit experiment as an interaction between a particle and a guiding or pilot wave.

The mathematics of Bohmian mechanics does not explicitly describe the ALER, only an abstract mathematical wave, but the implications are clear. It is entirely within the reach of current mathematics to model physical reality without veering off into unrealistic metaphysical conjectures about wave-particle duality or particles that exist in a “superposition of states”.
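The kind of modeling referred to here is not exotic. A minimal one-dimensional sketch (units hbar = m = 1, with an assumed superposition of two unequal-amplitude plane waves standing in for the full double-slit wavefunction) integrates the guidance equation v = Im(psi'/psi) to give the particle a definite position at every instant:

```python
import cmath

K1, K2 = 1.0, 2.0   # wavenumbers of the two components (assumed)
A2 = 0.5            # relative amplitude of the second component (assumed)

def psi(x, t):
    """Superposition of two free plane waves, omega = k^2/2 (hbar = m = 1)."""
    return (cmath.exp(1j * (K1 * x - 0.5 * K1**2 * t))
            + A2 * cmath.exp(1j * (K2 * x - 0.5 * K2**2 * t)))

def dpsi_dx(x, t):
    return (1j * K1 * cmath.exp(1j * (K1 * x - 0.5 * K1**2 * t))
            + A2 * 1j * K2 * cmath.exp(1j * (K2 * x - 0.5 * K2**2 * t)))

def velocity(x, t):
    """de Broglie-Bohm guidance equation: v = Im(psi'/psi)."""
    return (dpsi_dx(x, t) / psi(x, t)).imag

# Euler integration of one trajectory: the particle occupies a definite
# position at every step while the wave does the "interfering".
x, t, dt = 0.3, 0.0, 0.001
path = [x]
for _ in range(5000):
    x += velocity(x, t) * dt
    t += dt
    path.append(x)
```

With unequal amplitudes the velocity field is non-trivial (it oscillates as the two components beat against each other) yet the trajectory is perfectly well defined throughout; no collapse is invoked at any point.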

The price of this realistic framework is that the math is a bit more complicated than the “wavefunction only” model preferred by a “consensus” of the members of the scientific academy. The high priests of this consensus are quite obviously willing to sacrifice a coherent, logical and realistic account of quantum phenomena on the altar of mathematical convenience. Such is the state of Modern Theoretical Physics.

The ALER and Quantum Theory

The preceding post discussed the ACER (Ambient Cosmic Electromagnetic Radiation) and some of its implications. The ACER is present and detectable, at least in part, at the surface of the Earth. The CMB was discovered by a ground based antenna and in fact all pre-satellite-era cosmological observations were made at the surface of the Earth and involved the detection of various component frequencies of the ACER.

The situation is somewhat different in a closed room. Most of the ACER that does penetrate the atmosphere will not penetrate the walls and ceiling of the room. There will nonetheless be considerable ambient electromagnetic radiation some of it internally generated (artificial lighting, thermal radiation) and some measure of externally sourced penetrating radiation like radio frequencies. So in an enclosed room there will be an Ambient Local Electromagnetic Radiation (ALER) analogous to the ACER but having a somewhat different frequency distribution.

Some measure of ALER is present in every room humans might typically occupy including rooms dedicated to scientific experiments. Consider the room you are in as you read this. Not only is it suffused with visible light, there are also thermal radiation, radio frequencies and possibly some stray high energy UV and X, and gamma rays. We live our lives immersed in a sea of electromagnetic radiation whether sitting in a room or sitting out under a clear or cloudy, day or night, sky.

Of particular interest in this regard are the rooms where quantum experiments such as the double-slit are performed. They also are bathed in electromagnetic radiation. In one version of the double-slit experiment electrons are slowly fired at the double slit screen and are subsequently detected at a second screen. As the electrons accumulate they form an interference pattern which is said to prove or at least demonstrate that electrons are both waves and particles. This would be a reasonable conclusion except this explanation takes no note of the ALER through which the electron is traveling.

An electron affects an electromagnetic field and is in turn affected by the field. This is not news; it is well-established physics. Standard quantum theory pays no attention to the presence of the ALER and its interaction with the free electrons that are the subject of the experiment. The ALER is a chaotic field, streaming from all directions and composed of many frequencies, and yet the theoretical explanation of the double-slit experiment takes no account of the inescapable fact of the ALER’s presence and its unavoidable interaction with the electrons. Instead we get a dubious story about a wave-particle duality state that cannot be verified by observation, only believed in.

Such is the state of Quantum Theory – belief is obligatory because reality is irrelevant. In this it is of a piece with the rest of Modern Theoretical Physics – presenting an absurd, incoherent and at root irrational account of physical reality, one that bears almost no resemblance to the physical reality our lying eyes and instruments actually reveal to us. As with Ptolemaic cosmology, the math can be said to “work” but the physics is simply wrong.

The ACER, The ERDG And Gravity

A mathematical similarity between gravity and electromagnetism has long been noted and commented upon. There are also qualitative analogies that are occasionally mentioned, but beyond noting the fact that the gravitational effect around a gravitating body falls off at the same 1/r² rate as the density of the electromagnetic radiation being emitted by the body, little thought is given to this rather striking overall correlation. Coupled with the observation that light moving through a gravitational field behaves as if it were traversing a medium with a density gradient, there would seem to be at least a strong indication of a causal relation between gravitational effects and the electromagnetic field (EMF), especially since a gravitational field is only posited, not observed.

It will be argued here that all gravitational effects can be attributed to the interaction between matter and electromagnetic fields. First, two definitions:

  1. the ACER (Ambient Cosmic Electromagnetic Radiation) that pervades the Cosmos. This is the cosmological-scale field that constitutes The Spectrum of the Universe as described in the document of the same name. This radiation does not have a density gradient but is approximately uniform in distribution around any free-standing body.
  2. the ERDG (Electromagnetic Radiation Density Gradient), which is the aggregate EMF of all the electromagnetic radiation being emitted omnidirectionally by a radiating body. The strength of that field falls off as 1/r² and therefore it has a density gradient.

The Deflection of Light in a Gravitational Field

The ERDG comprises the medium through which the ACER (all external radiation) travels in the vicinity of a radiating/gravitating body. The ERDG is a transparent medium with a density gradient. Light passing through a “gravitational” field behaves as if it were passing through a medium with a density gradient. The ERDG fully accounts for the “gravitational” effect of the curvature of incident light – without invoking an otherwise invisible gravitational field. Essentially the ERDG constitutes a particular type of EMF that causes the effect traditionally attributed to an undetected gravitational field.
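
As a numeric reference point for this section, the observed deflection of starlight grazing the solar limb can be evaluated from the standard, empirically confirmed formula. Any account of the effect, the ERDG included, must reproduce this angle; a minimal sketch:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # maximum speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.963e8    # solar radius, m (grazing impact parameter)

# Observed deflection of light grazing the solar limb:
# alpha = 4GM / (c^2 b), in radians
alpha = 4 * G * M_sun / (c**2 * R_sun)
arcsec = math.degrees(alpha) * 3600

print(f"deflection = {arcsec:.2f} arcsec")  # ~1.75 arcsec
```

This is the figure measured in solar-limb observations since 1919; the constants are standard values, not parameters of the ERDG picture.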

Gravitational Attraction

Initially let us consider the isolated case of one star, ignoring for the moment any nearby stellar neighbors. The star in this case has surrounding it both the omnidirectionally sourced ACER and its own locally self-produced ERDG. The ACER that falls on the star directly is absorbed and eventually reemitted as part of the ERDG. There is an additional inflow of radiation from the ACER that can be attributed to the ERDG curving the ACER passing closest to the star onto the surface.

All of this radiational inflow is omnidirectional onto the surface of the star. In a sense the inflow attributable to curvature can be thought of as a “pulling” on the nearby ACER, but as it is a “pulling” in all directions there is no net effect on the star’s motion through the ACER. The ACER is essentially an inertial medium. Since the ACER is being curved by the ERDG of the star it is reasonable to think of the ERDG as the gravitating medium of the star. The curvature of passing radiation in the vicinity of a gravitating medium is an observed fact predicted by General Relativity, but it is also predicted and observed behavior for light passing through any transparent medium with a density gradient.

Now we introduce a small, but not infinitesimal, nearby test particle, a planetary object the size of the Earth, with some velocity relative to the star that is not significant with respect to Relativity Theory. Let us also assume that the initial trajectory is toward but not directly at the star. This test body will interact with the ACER that surrounds it in exactly the same way that the star does, with the exception that the planet is only passively reradiating the radiation from the star that falls upon it – it is a passive, not an active, emitter of electromagnetic radiation. Consequently the radiation density at the planet’s surface will be far lower than the radiation density at the surface of the star and the “gravitational effect” of the planet on the star-planet system will be much weaker.

As the planet draws closer to the star two simultaneously acting effects are taking place. Along the line that joins their centers of mass, over an area defined by the projection of the planet’s shadow, the star will be curving the nearby passing ACER, but no direct inflow will be taking place in that shadow region, only a “pulling in” of the passing ACER that lies between the two bodies. A similar but smaller inflow is taking place over the surface of the planet facing the star.

At the same time the planet is passing through the star’s ERDG, with the surface of the planet closest to the star experiencing a stronger “gravitational effect”. That effect is directly attributable to the ERDG – the radiation density from the star is higher on the closer side of the planet. The primary higher density effect is that physical processes slow down in the denser radiation.

The result of the higher density will be the same as it is in any medium with a density gradient – among other things it will slow the passage of the nearer surface through the ERDG medium, producing a curvature of the planet’s path while also inducing a rotation. The curved path of the planet is analogous to the curved path of light, which is caused by the slowing of light in the denser portion of the medium closer to the surface of the star. This behavior is typical of media with a density gradient. Gravitational attraction, then, is entirely explicable as an interaction effect between matter and an EMF with a density gradient (the ERDG).

Galaxy Clusters

Gravitational lensing around a galaxy cluster requires considerable amounts of undetectable Dark Matter to “fit” the lensing observations to the standard model. The excess curvature, relative to calculations based on the observed mass distribution, can be attributed to the fact that the x-ray hot plasma constituting up to 90% of a cluster’s mass has a higher radiation output than an equivalent stellar mass would produce. That excess high-energy radiation will produce a stronger “gravitational” effect than the observed mass alone would predict using standard gravitational models, which correlate gravitational effects with mass density rather than radiation density.

This higher radiational output is due to the diffuse nature of the cluster gas, which is radiating outward from within its entire volume, whereas a star radiates only from its surface; effectively the x-ray hot gas has a much lower Mass/EMR ratio than a star or galaxy.

This x-ray hot gas constitutes 90% of the cluster’s mass and produces a large excess of high density, high frequency radiation. And that radiation tracks the “gravitational” effects. That’s what’s there. What’s not there is Dark Matter. If gravitational effects are attributable to the density of the cluster’s EMF rather than the unobserved mass-dependent “gravitational” field, the mass discrepancy problem evaporates and with it the need for Dark Matter. Mercifully.

The Cosmological Redshift

It is a demonstrable fact that the General Relativity based math (Schwarzschild solution) that describes gravitational redshift can also be used to calculate the cosmological redshift for an Expanding Spherical Wavefront (ESW) of electromagnetic radiation. The resulting redshift resembles the cosmological redshift of the standard model which uses a different GR solution (FLRW) describing an Expanding Universe. See: Gravitational Redshift & Expanding Spherical Wavefronts

The mathematics involved here is not satisfactory because the Schwarzschild solution does not take into account the variation of light speed in a gravitational field. However, the qualitative picture presented, of an ESW losing energy as it is gradually absorbed by its encounters with galaxies and other intervening matter, demonstrates once again that an effect that can be described with standard gravitational math can be understood as arising from the direct interaction of Matter and electromagnetic fields.

The ACER is the aggregate of all the ESWs streaming through the Cosmos. A galaxy’s ERDG becomes a continuous outflow of ESWs on cosmological scales as the density gradient becomes negligible.

Coda

It is certainly fair to say that none of the foregoing constitutes “proof” that Matter-EMF interactions are responsible for all the various observed effects attributed to gravity. However science does not deal in “proofs” – those lie in the realm of mathematics and mathematics is not science. Science and physics deal in empirical evidence and the facts as presented here constitute, at minimum, strong evidence that the observed gravitational effects are correlated with observed Matter-EMF interactions.

As mentioned, these correlations are not unknown and have been remarked upon elsewhere (excluding the ESW section) but that is all that has transpired. No serious research, either empirical or theoretical, has been conducted to determine if that correlation indicates a causal relation. Yet modern theoretical physics offers no explanation for the physical cause of gravity at all. Why, then, this peculiar incuriosity? It seems attributable to the scientific academy having some legacy math that has been handed down for a century or more (if you count Newtonian gravity), and though neither the Newtonian nor Einsteinian gravitational models work on the scale of galaxies and galaxy clusters, there exists a dogmatic belief, despite this obvious evidence to the contrary, that those models constitute Universal Laws.

The irrational result is that modern theoretical physicists appear to believe they know everything there is to know about gravity because they have some math (that doesn’t work well) and therefore there is no reason to do any research into what they do not know (the cause of gravity) because if it was important someone would have taught them about it in graduate school. Something like that. There really is no sensible explanation for the situation.

When Are Physicists Not Physicists?

The answer to that question is straightforward: When they are babbling about mathematical models that have nothing to do with physics. When they are bloviating on the nature of physically meaningless concepts like Singularities. Here is a textbook example from that Journal of Mathematicism, Quanta Magazine.

In the fashion of modern theoretical physics the existence of singularities is first attributed disingenuously to Einstein: “Singularities are predictions of Albert Einstein’s general theory of relativity.”

That statement is simply false. The Singularities characteristic of Black Holes are predictions of the Schwarzschild solution to the GR field equations, while the Singularity of the Expanding Universe model (Big Bang) is a prediction of the FLRW solution to the GR field equations. It should be noted that both Black Holes and the Expanding Universe are also predictions of the respective mathematical solutions.

In order to arrive at their results it was necessary for both Schwarzschild and FLRW to make a number of simplifying assumptions. Schwarzschild left out a critical prediction of GR, that the speed of light varies with position in a gravitational gradient; FLRW decided that it was reasonable rather than self-contradictory to solve GR for a non-relativistic universal framework. Both “solutions” produce Singularities that are a consequence of their simplifying assumptions. Singularities are mathematical artifacts that have no physical significance.

The article acknowledges that some physicists understand that Singularities are not physically meaningful: “… singularities are widely seen as ‘mathematical artifacts,’ as Hong Liu, a physicist at the Massachusetts Institute of Technology, put it, not objects that ‘occur in any physical universe.’”

But what fun is that? The article quickly pivots to a discussion of some additional arcane mathematical models in which the Singularities are also present and “are proving hard to erase”:

The British mathematical physicist Roger Penrose won the Nobel Prize in Physics for proving in the 1960s that singularities would inevitably occur in an empty universe made up entirely of space-time. More recent research has extended this insight into more realistic circumstances. One paper established that a universe with quantum particles would also feature singularities, although it only considered the case where the particles don’t bend the space-time fabric at all. Then, earlier this year, a physicist proved that these blemishes exist even in theoretical universes where quantum particles do slightly nudge space-time itself — that is, universes quite a bit like our own.

Note that all of these models treat space-time as a substantival entity despite the fact that there is no scientific evidence to support the common mathematicist belief that space-time is a physical, causally-interacting substance. This lack of evidence for an essential component of the models means that none of them have anything of scientific value to say about the nature of the Cosmos we actually observe.

It should also be noted that the inability of the modelers to “erase” the Singularities in their models is a mathematical problem that has nothing to do with physical reality, which does not contain Singularities. It’s just some math, not physics, and the people purveying these models as if they had some physical significance are not physicists – they are just mathematicists.

A Relativistic Cosmos vs. The Unitary Universe

The limitation of light speed to a finite maximum means that we do not have and, most importantly, cannot have any knowledge of the current state of Andromeda (the nearest galaxy) which lies 2.5 million lightyears distant. It follows that we cannot have any knowledge regarding the “current state” of any of the galaxies that lie beyond Andromeda. This situation extends all the way out to our current observational range-limit which is now in excess of 10 billion lightyears.

There are two reasonable conclusions that can be drawn in light of this factual state of affairs.

  1. We do not and cannot have any knowledge of a “current state of the Universe.”
  2. The Universe of the standard model of cosmology does not exist in physical reality. This conclusion is consistent with the observed disconnect between the SMC and empirical reality – none of the defining elements of the SMC are observed in the Cosmos.

It is this foundational assumption of a Unitary (simultaneously interconnected) Universe that has always undermined the SMC’s ability to render a coherent description of the Cosmos we actually observe. The idea that the Cosmos constitutes a simultaneously interconnected entity, such that we can speak scientifically of its “current state” or its origin, is belied by standard physics. Put simply there ain’t no such animal as the Unitary Universe of the standard model.

The belief that the Cosmos is a Unitary Universe and the older belief that the Earth is at the center of the Cosmos, are both wrong; they are simply wrong – about the nature of physical reality. As with geocentrism, no further progress in our understanding of the Cosmos will be made until the Unitary Universe is retired to the dustbin of history.

The Cosmos we observe is relativistic in nature – we are at the center of our observable Cosmos. Other observers in other galaxies would be at the center of their observable Cosmos which would at best only partially overlap with our own. There is no universal frame in the Relativistic Cosmos, there are only local frames. That statement is consistent with Relativity Theory. The UU is not consistent with Relativity Theory.

The relativistic nature of the Cosmos is widely acknowledged but not well understood and it is not incorporated into the SMC, as can be seen in this common depiction of a light cone:

This image is graphically correct but the labeling is incoherent, beginning with the apex being labeled Observer. It should instead be given the spatio-temporal designation Here And Now. The Future Light Cone should be labeled Emitted LC while the Past Light Cone should be labeled Received LC. The Hypersphere Of The Present should be labeled the Imaginary Hypersphere Of The Present to make it clear that for any observer at any specific Here and Now that imaginary hypersphere has no physical meaning – it does not exist in any local observer’s Cosmos.

The UU of the SMC is a model of the Imaginary Hypersphere Of The Present that is completely inaccessible to direct observation, measurement or detection. It is not part of our physical reality. Since science is restricted to the study of physical reality it follows that the SMC is not a scientific model; it is only a mathematical model of an imaginary, metaphysical conceit.

The SMC is based on a solution to the field equations of General Relativity, but that solution assumes the existence of a universal frame, an assumption inconsistent with GR, which has no universal frame. As a consequence the resulting solution, the FLRW equations that are the basis of the SMC, is not a relativistic model, and that means it is also not a realistic scientific model. The SMC is just some simplistic math based on a set of ill-formed ideas about the nature of physical reality concocted 100 years ago when scientists were still arguing whether the galaxies were part of or separate from the Milky Way.

The Relativistic Cosmos, in contrast, is entirely consistent with the light cone depiction. Any Here And Now in Relativistic Cosmology is unique and is not simultaneously connected to other Here And Nows on cosmological scales. That we cannot know the “current state of the Cosmos” is entirely consistent with RC. In a sense this is because RC is consistent with General Relativity which does not have a universal frame. Relativistic Cosmology is congruent with both theory and observation, the Unitary Universe is not. The Cosmos we observe is fundamentally relativistic in nature.

Island Universe Cosmology v4.0

Note: Island Universe Cosmology is not, nor does it aspire to be, a cosmological model. Instead it is a kind of survey of the empirical evidence, as presented in our direct cosmological observations, that is then strung together with some standard physics. It will never achieve a unified picture of physical reality because physical reality itself is not a unitary entity. It will always be an open-ended compendium of overlapping Island Universes.

1. Basic physics

  • The maximum speed of light in the Cosmos is 3×10⁸ m/s.
  • The nearest galaxy is 2.5 million lightyears distant and the furthest galaxies are in excess of 10 billion lightyears away. 
  • It is not possible to have knowledge of the current state of the Cosmos “now”. 
  • It is not possible for the Cosmos to exist as a simultaneously interconnected Universe.

2. Two fundamental states

  • Matter – 3-Dimensional, localized objects with rest mass.
  • Energy – 4-Dimensional, non-local, electromagnetic radiation – massless, transverse expanding spherical waves. Electromagnetic radiation (EMR) is the fundamental form of energy. Expanding spherical waves (ESW) are the cosmological-scale form of EMR.
  • ESWs are standard physics.
  • Everywhere there is no matter in the Cosmos, there is energy in the form of EMR.
  • The ACER, the Ambient Cosmic Electromagnetic Radiation, flows throughout the Cosmos.
  • Matter emits and absorbs energy (EMR).
  • Space and time are relational concepts not substantive, causally interacting entities.

3. Extent of the Cosmos is unknown and assumed unknowable.

  • The observable Cosmos is limited to the distance implied by the cosmological redshift.
  • The night sky is bright at 2.7K (CMB) – resolves Olbers’ paradox.
  • The CMB is the redshifted light from the most distant galaxies – those that completely cover the dome of the sky, blocking any radiation from more distant galaxies.
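
As a back-of-the-envelope check on the last bullet: a blackbody spectrum redshifts into another blackbody with T_obs = T_src/(1+z), so the redshift implied for starlight arriving as the 2.7 K CMB can be estimated directly. The 5800 K source temperature is an illustrative solar-type value, not a figure taken from the model:

```python
# Under the bullet above, the CMB is starlight redshifted from the most
# distant galaxies. A blackbody at T_src observed at redshift z appears
# as a blackbody at T_obs = T_src / (1 + z).
T_src = 5800.0   # K, illustrative solar-type photosphere (assumption)
T_cmb = 2.725    # K, observed CMB temperature
z = T_src / T_cmb - 1
print(f"implied z = {z:.0f}")  # roughly 2100
```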

4. On cosmological scales (radius > 100 million lightyears)

  • EMR should be modeled as successive Expanding Spherical Wavefronts (ESWs).
  • ESWs are emitted by galaxies.
  • ESWs are gradually absorbed by outlying galaxies and other matter.
  • Redshift is caused by the gradual absorption (energy loss) of an ESW by intervening matter.
  • A cosmological redshift can be calculated by applying standard gravitational-redshift math to an ESW at successive cosmological distances.

5. The speed of light in the Cosmos is not a constant.

  • As stipulated by Einstein and observed (Shapiro delay), the speed of light varies with position in a gravitational field, slowing as the field strengthens.
  • Since E/m = c², it follows directly that as the speed of light declines, a material body undergoing gravitational collapse should shed mass by releasing energy (EMR).
  • Consequently any gravitational collapse will be self-limiting and should produce a compact, luminous, highly gravitating body with an intrinsic gravitational redshift.
  • Quasars fit that description.
  • There is a considerable body of observational evidence that quasars are not at their redshift-implied distances.
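
The Shapiro delay cited in the first bullet can be estimated from the standard round-trip formula; the Earth and Venus distances and the grazing impact parameter used here are illustrative values for a superior-conjunction measurement:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # maximum speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
r_earth = 1.496e11 # Earth-Sun distance, m
r_venus = 1.082e11 # Venus-Sun distance, m
b = 6.963e8        # impact parameter ~ solar radius, m (grazing signal)

# Standard round-trip Shapiro delay for a radar signal grazing the Sun:
# dt = (4GM/c^3) * ln(4 * r1 * r2 / b^2)
dt = (4 * G * M_sun / c**3) * math.log(4 * r_earth * r_venus / b**2)
print(f"extra round-trip delay ~ {dt * 1e6:.0f} microseconds")
```

The result is on the order of 200 microseconds, the magnitude of the classic Earth-Venus radar measurements.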

6. Gravity, Matter, and Electromagnetic Radiation

  • All gravitational effects arise from the interaction of matter and energy.
  • Energy density gradients around massive bodies track gravitational effects.
  • The cosmological redshift is caused by an energy-matter interaction (Section 4).

7. A proper geometric framework for the Cosmos uses polar coordinates. 

  • Appropriate for modeling expanding spherical wavefronts.
  • Appropriate for modeling the local point-of-view of the Cosmos from any 3D location because any 3D local observer is at the center of their unique observable Cosmos.
  • The POV frame does not have physical significance i.e. the shell implied by a given radial distance is not a physical shell or epoch. The shell or epoch only exists with respect to the observer; other observers in different cosmological settings will see different shells/epochs, similar in structure but different in content.

Earlier version: Island Universe Cosmology v3.2

Gravitational Redshift & Expanding Spherical Wavefronts

An earlier post on Expanding Spherical Wavefronts made a qualitative argument that an ESW should sustain an energy loss as it expands through the Cosmos. In a more recent post it was argued that gravitational effects are a consequence of matter/electromagnetic-radiation interactions. As a follow-up, this post offers a quantitative demonstration that standard gravitational-redshift math can be applied to an ESW to generate a cosmological redshift correlated with distance:

Terms & Definitions:
z = (1 − rs/Resw)^(−1/2) − 1
rs = Schwarzschild radius = 2GM/c²
Res = successive radii for an Expanding Spherical Wavefront, in lightyears
Resw = the same successive radii, in meters
M = calculated mass for a sphere at the selected radius, assuming an average cosmological density of 1E-26 kg/m³. The average density is a free parameter in the model, but the results are highly sensitive to this particular value; a variation of ±10% produces outcomes that seem unrealistic.
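
These definitions can be turned into a short script. The constants are standard values and the 1E-26 kg/m³ density is the model's assumed free parameter; the sampled radii reproduce the tabulated z values below to within rounding of the constants used:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # maximum speed of light, m/s
LY = 9.461e15   # meters per lightyear
rho = 1e-26     # assumed average cosmological density, kg/m^3

def esw_redshift(r_ly):
    """z for an Expanding Spherical Wavefront of radius r_ly lightyears."""
    R = r_ly * LY                           # Resw, radius in meters
    M = (4.0 / 3.0) * math.pi * R**3 * rho  # mass enclosed by the sphere
    rs = 2 * G * M / c**2                   # Schwarzschild radius
    return (1 - rs / R) ** -0.5 - 1         # z = (1 - rs/Resw)^(-1/2) - 1

for r in (7.5e7, 1.0e9, 1.0e10):
    print(f"{r:.2e} ly -> z = {esw_redshift(r):.2e}")
```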

All in all, this is nothing more than a proof of concept – but it is an intriguing one. Two pieces of standard physics can be put together to produce a cosmological redshift-distance relation that is similar to the one presented by the standard model. The three graphs of redshift-distance are taken from the table and illustrate scale differences over the range of the table. A short discussion follows the 3rd graph.

Row  Res (ly)     Resw (m)  rs (m)    M (kg)    z
1    7.50E+07     7.08E+23  2.22E+19  1.48E+46  1.57E-05
2    1.00E+08     9.44E+23  5.25E+19  3.52E+46  2.78E-05
3    2.50E+08     2.36E+24  8.20E+20  5.50E+47  1.74E-04
4    5.00E+08     4.72E+24  6.56E+21  4.40E+48  6.96E-04
5    1.00E+09     9.44E+24  5.25E+22  3.52E+49  2.79E-03
6    1.50E+09     1.42E+25  1.77E+23  1.19E+50  6.32E-03
7    2.00E+09     1.89E+25  4.20E+23  2.82E+50  1.13E-02
8    2.50E+09     2.36E+25  8.20E+23  5.50E+50  1.79E-02
9    3.00E+09     2.83E+25  1.42E+24  9.50E+50  2.60E-02
10   3.50E+09     3.30E+25  2.25E+24  1.51E+51  3.59E-02
11   4.00E+09     3.77E+25  3.36E+24  2.25E+51  4.77E-02
12   4.50E+09     4.25E+25  4.78E+24  3.21E+51  6.16E-02
13   5.00E+09     4.72E+25  6.56E+24  4.40E+51  7.78E-02
14   5.50E+09     5.19E+25  8.74E+24  5.85E+51  9.65E-02
15   6.00E+09     5.66E+25  1.13E+25  7.60E+51  1.18E-01
16   6.50E+09     6.13E+25  1.44E+25  9.66E+51  1.43E-01
17   7.00E+09     6.61E+25  1.80E+25  1.21E+52  1.73E-01
18   8.00E+09     7.55E+25  2.69E+25  1.80E+52  2.46E-01
19   9.00E+09     8.49E+25  3.83E+25  2.57E+52  3.49E-01
20   1.00E+10     9.44E+25  5.25E+25  3.52E+52  5.02E-01
21   1.10E+10     1.04E+26  6.99E+25  4.68E+52  7.50E-01
22   1.20E+10     1.13E+26  9.07E+25  6.08E+52  1.24E+00
23   1.30E+10     1.23E+26  1.15E+26  7.73E+52  3.10E+00
24   1.31E+10     1.24E+26  1.18E+26  7.91E+52  3.71E+00
25   1.32E+10     1.25E+26  1.21E+26  8.09E+52  4.74E+00
26   1.33E+10     1.25E+26  1.24E+26  8.28E+52  7.00E+00
27   1.3350E+10   1.26E+26  1.25E+26  8.37E+52  1.00E+01
28   1.3400E+10   1.26E+26  1.26E+26  8.47E+52  3.49E+01
29   1.3405E+10   1.26E+26  1.26E+26  8.48E+52  1.75E+02
30   1.34051E+10  1.26E+26  1.26E+26  8.48E+52  2.39E+02
31   1.34052E+10  1.26E+26  1.26E+26  8.48E+52  6.48E+02
Graph 1: rows 1-6 of the table
Graph 2: rows 1-26
Graph 3: rows 1-27

Discussion

There are several interesting aspects to these results. The initial radius of 75M lightyears is arbitrary and lies just inside the 100Mly radius that can be considered to encompass the “local” Cosmos. The model’s redshift goes asymptotic at the final radius imposed by the math.

What is happening in the math is that the Schwarzschild radius (rs) is catching up with the Expanding Spherical Wavefront being modeled. This is because the rs increases in proportion to the enclosed mass, which grows as Resw³, while the wavefront’s radius grows only linearly; the ratio rs/Resw therefore grows as Resw². This is the inverse of what happens (according to the Schwarzschild solution to GR) in the case of a mass undergoing gravitational collapse. In that situation the collapsing body converges inward toward the rs as the mass remains constant.
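
The radius at which rs overtakes the wavefront can be found in closed form: setting rs = Resw with M = (4/3)πρResw³ gives Resw = c/√((8/3)πGρ). A quick numerical check, using the same assumed density as the table:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # maximum speed of light, m/s
LY = 9.461e15   # meters per lightyear
rho = 1e-26     # assumed average cosmological density, kg/m^3

# rs = 2G * ((4/3) pi rho R^3) / c^2 equals R when:
R_crit = c / math.sqrt((8.0 / 3.0) * math.pi * G * rho)
print(f"critical radius ~ {R_crit / LY / 1e9:.2f} Gly")
```

With ρ = 1E-26 kg/m³ this lands at roughly 13.4 Gly, matching the radius at which the tabulated redshift goes asymptotic.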

It should be noted that the rs is an artefact of the model; it is a coordinate singularity and not physically significant. This indicates that the model has broken down at that point by virtue of having produced a division by zero result. It can be argued that this is a consequence of the model not taking into account the variation in the speed of light in a gravitational gradient.

It is also striking that the redshift of the ESW model goes asymptotic at approximately the same cosmological distance (13.4Gly) that the standard model redshift does (13.8Gly). The difference is that in the ESW model the redshift is a consequence of the energy loss to the expanding spherical wavefront attributable to its gradual absorption by intervening galaxies. In the standard model the energy loss is a consequence of a model-inferred but unobservable “universal expansion” – which leaves the lost energy physically unaccounted for.

One other point of note: in the ESW account of redshift, the cosmological conditions at the source of the redshifted light are assumed to be approximately the same as they are in our “local Cosmos”. In the standard model, of course, the cosmological conditions at the largest implied redshift distances are thought to be significantly different due to the nature of the evolving “expanding Universe” that the model assumes. The recent JWST observations contradict the standard model’s picture of an evolving “Universe”.

It is not polite to discuss DWRT

I recently commented on an astrophysicist’s blog regarding a claim I consider ahistorical and inaccurate. The comment was deleted. I don’t have a problem with that – a blog is a personal space. However, I was responding to a specific claim about the origin of General Relativity that is both common and false. What follows is the one-paragraph remark I was commenting on and my now-deleted response, which admittedly ranges a bit further than simply refuting the quote requires:

The equivalence of inertial mass with gravitational charge is a foundational principle of general relativity. There are flavors of GR in which it falls out explicitly, e.g., Yilmaz’s gravity. But it is basically an assumption.

The equivalence of inertial and gravitational mass was an observed and measured fact known to Galileo and Newton. It was not an assumption of GR. The mid-20th century extensions of what is now called the Weak Equivalence Principle were little more than conjectures of mathematical convenience “suggested” by Robert H. Dicke. They had nothing to do with the development of GR.

Along with John A. Wheeler’s aphoristic, empirically baseless, invocation of a causally interacting spacetime, Dicke’s two extensions of the WEP were surreptitiously hung on Einstein’s General Relativity producing a grotesque variant that by rights should be known as Dicke-Wheeler Relativity Theory. It is DWRT that has been the officially taught version for the better part of 50 years although the D-W distortions are almost always attributed to Einstein. He would have puked.

It is DWRT that prompts otherwise rational people to insist that, despite theoretical and empirical evidence to the contrary, the speed of light is some sort of universal constant. It is DWRT that promotes the false claim that Einstein explained gravity as being caused by the curvature of space.

As far as space itself goes, it is a relational concept exactly like distance. That is all the evidence supports. In fact, space is best understood as the aggregate of all distances that separate you from all the other things in the Cosmos that aren’t you. Substantival space is a mathematicist fiction that has no scientific basis.

Throughout the Cosmos, everywhere within our observational range where there is no matter there is only electromagnetic radiation. That is an empirical fact. Everywhere people imagine they see space there is electromagnetic radiation. At any given 3D location that radiation is flowing omnidirectionally from all the omnidirectional emitters (stars and galaxies) within range. That is what we observe and that is how we observe.

As Mach surmised we are connected to the rest of the Cosmos or at least to those objects within range. That non-simultaneous connection is via electromagnetic radiation – that is what’s there. Until recently no one had bothered to do a full survey of what might be called the Ambient Cosmic Electromagnetic Radiation. The authors of this interesting paper seem to think they are the first. Everybody else was too busy looking for some dark stuff apparently.

Modern theoretical physics is all theory and no physics; it consists of nothing but the unrelenting and fruitless effort to prop up two inert century old models whose assumptions of mathematical convenience were lapped by physical reality decades ago. Tinkering with obsolete mathematical models does not constitute a scientific endeavor even if that is all that has been taught for the last 40 years.

Inertia, Gravity, & Electromagnetic Radiation

This very interesting paper originally caught my attention because it demonstrates that Einstein rejected what the paper calls the “geometrization” of gravity, and did so throughout his career, not just at the end of his life. On a recent rereading I was struck by something else which is easy to forget – the subtlety of Einstein’s thought.

The geometrization of gravity is an awkward term because it elides the central problem which is the reification of spacetime. It is well known that Einstein’s Relativity Theory employs the geometric math of Gauss-Riemann. What is at issue is whether that geometric math refers to a physical spacetime that causally interacts with matter and energy (electromagnetic radiation). Many argued that it did while Einstein rejected that interpretation as unphysical and uninformative.

Beyond the issue of what Einstein did not believe, the paper illuminates a seldom discussed subject – what Einstein did believe Relativity Theory accomplished, the unification of gravity and inertia. This unification is not found in the famous gravitational equation of General Relativity but in the lesser known geodesic equation. From the paper:

We found that (i) Einstein thought of the geodesic equation in GR
as a generalisation of the law of inertia; (ii) in which inertia and gravity
were unified so that (iii) the very labeling of terms as ‘inertial’ and
‘gravitational’ respectively, becomes in principle “unnecessary”, even if
useful when comparing GR to Newtonian theory.

While it is well understood that the Equivalence Principle* played a role in Einstein’s thought process while developing GR the importance of the geodesic equation as a formal realization of the EP is certainly not widely acknowledged as far as I am aware. The implications of that unification are profound.

One of the peculiarities of the modern theoretical physics community is their apparent disinterest in determining the physical cause of the gravitational effect. The reason for this disinterest is a philosophical attitude known as instrumentalism – if some math describes an observed outcome adequately then a causal explanation is superfluous. Instrumentalism is a variant of the scientifically baseless philosophical belief called mathematicism.

The purpose of science is to investigate the nature of physical reality, not to sit around fiddling with poorly constructed mathematical models of physical reality that do not remotely make sense when you inquire of the model, What does the math mean with regard to the physical system being modeled? The answer that typically comes back is a peculiar kind of bullshit that can be thought of as Big Science Babble.

Superposition Of States is Exhibit A of BSB on the quantum scale. Almost all quantum scale babble winds up at SOS or at Wave-Particle Duality. SOS tells us that an electron is never at a particular position until it is observed.

When an electron is detected it is always, not surprisingly, at a particular location but according to the mathematicists at all other times when not being observed the electron is in a SOS – it is spread out over all of its possible locations as described by some math (the wavefunction). How do they know this? Because the math doesn’t know where the electron is, so it can’t be anywhere in particular. Sure, of course.

BSB is rife on the cosmological scale. According to the standard model of cosmology the Cosmos is 95% made up of some invisible stuff while the stuff we actually observe provides the remaining 5%. How do scientists know this invisible stuff is there? Because it has to be there to make the Big Bang model work and everybody knows the BB model is correct because they said so in graduate school, so the invisible stuff has to be there, like it or not. Sure, of course.

At the root of all BSB, of course, is mathematicism. A mathematical model dictates an absurd story about physical reality which we are then supposed to believe without evidence because math determines the nature of physical reality. If mathematicists with pieces of paper saying they are scientists believe in BSB, shouldn’t you? No, you should not.

Physical reality is an afterthought to mathematicists for whom only math is of interest. That’s why no effort is being expended in the scientific academy to understand the physical cause of gravity; research funding is controlled by mathematicists. And since they already have some math that kind of works (as long as reality contains things it does not appear to contain), well that’s good enough – for mathematicists.

In real science, physical events and behaviors occur as a consequence of physical interactions. Those interactions can be matter/matter (collision), matter/radiation (emission, absorption, reflection), or radiation/radiation (interference) in nature. There is a good argument to be made that all observed gravitational and inertial effects arise as a consequence of matter/radiation interactions:

  1. By observation, everywhere in the Cosmos that there is no matter, there is electromagnetic radiation.
  2. Light traversing a gravitational field behaves as it does in a transparent medium with a density gradient. All approximately spherical gravitating bodies emit electromagnetic radiation omnidirectionally with a density gradient that falls off as 1/r².
  3. The gravitational effect surrounding a spherical gravitating body falls off as 1/r².
  4. The gravitational field then is just the Ambient Local Electromagnetic Radiation field surrounding a gravitating body.
  5. In the intergalactic regions, far from any significant gravitating bodies there is only the ubiquitous Ambient Cosmic Electromagnetic Radiation.
  6. The ACER is, to a good approximation, isotropic and this cosmically sourced electromagnetic field does not have a density gradient. It can be thought of as the inertial field.
  7. This unified physical account of gravity and inertia is consistent with Einstein’s mathematical description of a unified gravity and inertia in the geodesic equation.**
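The correspondence claimed in points 2–4 – that radiant flux and gravitational acceleration share the same 1/r² geometry – can be checked numerically. A minimal Python sketch of mine, using standard solar values; the specific numbers are illustrative, not part of the argument above:

```python
from math import pi

# Illustrative sketch (my own, with standard solar values): both the
# radiant flux around an omnidirectional emitter and the Newtonian
# gravitational acceleration fall off as 1/r^2, so their ratio is
# constant with distance.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
L_SUN = 3.828e26     # solar luminosity, W
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def radiant_flux(luminosity, r):
    """Radiant flux (W/m^2) at distance r from an omnidirectional emitter."""
    return luminosity / (4 * pi * r**2)

def grav_accel(mass, r):
    """Newtonian gravitational acceleration (m/s^2) at distance r."""
    return G * mass / r**2

for r in (AU, 2 * AU, 4 * AU):
    ratio = radiant_flux(L_SUN, r) / grav_accel(M_SUN, r)
    print(f"r = {r:.3e} m  flux/accel = {ratio:.4e}")
```

The printed ratio is the same at every distance because both quantities reduce to the same geometric factor: flux/accel = L/(4πGM), independent of r.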

_____________________________________________________________________________________________

*The Equivalence Principle Einstein employed is now known, since the mid 20th century, as the Weak Equivalence Principle to distinguish it from later, dubious extensions, added with little scientific justification after Einstein’s death in 1955.

**The foregoing does not constitute conclusive evidence that gravity is an effect of matter/electromagnetic-energy interactions – but it is evidence based on empirical observations. In contrast, there is no empirical evidence for the concept of gravity as a fundamental force.

Forces are themselves not things; they are effects. Of the four fundamental forces claimed by science to exist, only the electromagnetic force has any empirical basis. In fact, though, electromagnetic radiation is no more a “force” than a golf club is a “force”.

A golf club exerts a force on a golf ball by striking it, resulting in an acceleration of the ball. That applied force can be quantified using known mechanical laws. The term force is simply a descriptive label for the interaction between club and ball; it is not the golf club, nor is it a separate thing in itself. The same analysis applies to EMR; it is not a force, but it can exert a force when it strikes a physical object.
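The club-and-ball point can be made quantitative with the standard radiation-pressure relation F = IA/c for an absorbing surface; a minimal sketch, where the solar intensity figure is my illustrative choice:

```python
# Minimal sketch: radiation exerts a force only on impact, F = I*A/c
# for an absorbing surface (twice that for a perfect reflector).
# The ~1361 W/m^2 figure is the standard solar intensity at Earth.

C = 299_792_458.0    # speed of light, m/s

def radiation_force(intensity, area, reflective=False):
    """Force (N) exerted by radiation of given intensity on a surface."""
    force = intensity * area / C
    return 2 * force if reflective else force

# Sunlight striking a 1 m^2 absorber: a few micronewtons.
print(radiation_force(1361.0, 1.0))
```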

Photons are not particles

In 1900, the German physicist Max Planck was studying black-body radiation, and he suggested that the experimental observations, specifically at shorter wavelengths, would be explained if the energy stored within a molecule was a “discrete quantity composed of an integral number of finite equal parts”, which he called “energy elements”. In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localized, discrete wave-packets. He called such a wave-packet a light quantum.
https://en.wikipedia.org/wiki/Photon (20Jun24)

Photon energy is the energy carried by a single photon. The amount of energy is directly proportional to the photon’s electromagnetic frequency and thus, equivalently, is inversely proportional to the wavelength. The higher the photon’s frequency, the higher its energy. Equivalently, the longer the photon’s wavelength, the lower its energy… The photon energy at 1 Hz is equal to 6.62607015×10−34 J
https://en.wikipedia.org/wiki/Photon_energy (20Jun24)

The SI units are defined in such a way that, when the Planck constant is expressed in SI units, it has the exact value h = 6.62607015×10−34 J⋅Hz−1.
https://en.wikipedia.org/wiki/Planck_constant (20Jun24)

The meaning of the foregoing should be clear. Despite the claims of particle physicists that the photon is a particle, it was in its original conception, and is in its current quantitative description, a wave phenomenon. A photon is not a particle like a proton or a billiard ball. A photon is never at rest with respect to any material body; it is always moving at the local speed of light with respect to all material bodies. A photon does not behave like a three-dimensional particle; it is a wave quantum.
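The wave-based bookkeeping in the quoted passages is easy to verify; a short sketch using the exact SI values of h and c:

```python
# Photon (wave-quantum) energy from the quoted definitions: E = h*f,
# or equivalently E = h*c/lambda. Both h and c are exact SI values.

H = 6.62607015e-34   # Planck constant, J/Hz (exact)
C = 299_792_458.0    # speed of light, m/s (exact)

def energy_from_frequency(f):
    """Energy (J) of one quantum at frequency f (Hz)."""
    return H * f

def energy_from_wavelength(lam):
    """The same energy expressed via wavelength lam (m)."""
    return H * C / lam

# Green light at ~540 THz (~555 nm): the two forms agree.
print(energy_from_frequency(5.4e14))
print(energy_from_wavelength(C / 5.4e14))
```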

A wave quantum is the smallest possible wave; it is a wave of one wavelength as defined above. The illustration below is of a wave consisting of two consecutive photons. This is a poor representation of the reality of electromagnetic radiation, which on astronomical and cosmological scales is emitted omnidirectionally by stars, galaxies and other luminous bodies. To a local observer, though, light arriving from a distance seems to consist of streams or rays of light. This image is of a subsection of such a stream.

Electromagnetic radiation does not consist of a stream of tiny particles simply because Max Planck was forced to treat the emission of light as having a discrete minimum. What is described by the math is a single wavelength which is the minimum for a wave of any frequency. Half waves and quarter waves etc. don’t exist.

That does not mean a wave with half the wavelength of a longer wave cannot exist, just that for any given frequency a single complete wave cycle of one wavelength defines the minimum wave energy. Converting this wave minimum to a “particle” was a categorical error and it has a formal name, QED or Quantum Electrodynamics.

In Richard Feynman’s 1985 book QED, based on four popular lectures he had given a few years earlier, he makes this rather odd case for the light-as-particle “theory”:

The strange phenomenon of partial reflection by two surfaces can be explained for intense light by a theory of waves, but the wave theory cannot explain how the detector makes equally loud clicks as the light gets dimmer. Quantum electrodynamics “resolves” this wave-particle duality by saying that light is made of particles (as Newton originally thought) but the price of this great advancement of science is a retreat by physics to the position of being able to calculate only the probability that a photon will hit a detector, without offering a good model of how it actually happens.

So to summarize that last sentence, saying light is made of particles was a great advancement for science that represented a retreat by physics into incoherence. I can’t argue with that. It is also not clear why the particle theory is superior to the wave quantum understanding of Einstein. Surely wave mechanics could have been modified to accommodate the fact that one wavelength is the minimum wave.

Instead Feynman goes on to describe a strange system resembling vector addition where the direction of “arrows” representing possible particle paths is determined by a frequency counter clock, in a backdoor maneuver to introduce wave-like behavior into the particle model so it can mimic wave interference patterns. This fits nicely with the standard quantum babble about a superposition of states, the condition where a particle’s position cannot be predicted except as a probability distribution which is interpreted to mean that the particle is in many positions at once. Thus the retreat into incoherence.

The particle theory of light is just another screwup of 20th century theoretical physics (there were quite a few). It should be put on the historical-curiosity shelf along with the Big Bang next to geocentric cosmology. Future historians can point to these physically absurd dead-end theories as textbook examples of how not to do science. Theory driven physics always winds up as empirically baseless metaphysical nonsense; the human imagination has never been a good guide to physical reality.

Groupthink, Dogma & Inertia

Re: this Ethan Siegel article.

If we ever want to go beyond our current understanding, any alternative theory has to not only reproduce all of our present-day successes, but to succeed where our current theories cannot. That’s why scientists are often so resistant to new ideas: not because of groupthink, dogma, or inertia, but because most new ideas never clear even the first of those epic hurdles, and are inconsistent with the established data we already possess.

Strangely enough, that is a pretty good description of why modern theoretical physics appears to be in fact quite dogmatic and inert. If new ideas can be dismissed for not immediately clearing the “epic hurdles” that theorists prescribe, then those hurdles amount to nothing more than preemptive barriers to new ideas. This greatly favors the orthodoxy.

No research funding is available for new ideas that don’t clear the barriers, which function as a defensive bulwark protecting the standard model from unorthodox incursions.

The only new ideas that are quickly adopted are those that fit the old model to new data. The favored model can be continuously revised (new epicycles, dark matter, etc.) into agreement with unpredicted new data. The “epic hurdles” only apply to new ideas that challenge the orthodoxy. New ideas needed to salvage the old model are adopted without reservation.

And so here we are, stuck with a cosmological model that is in an exactly analogous situation to that which existed with respect to Ptolemaic Cosmology prior to Kepler. Ptolemy’s model could be massaged into agreement with most observations and repeatedly tinkered into agreement with new, more accurate data, but only to some degree. So it is with the Big Bang model. Math is like that, but math is not physics.

Like PC, the BB model has at its core two fundamental misperceptions about the nature of physical reality. PC assumed geocentrism and that bodies orbited the earth in perfect circles. Only once geocentrism and perfect circles were set aside could science advance.

The two misperceptions underlying the BB are first, that the Cosmos can be mathematically modeled as a simple, unitary, simultaneously-existing entity – a Universe. Secondly, the observed cosmological redshift can be attributed to some form of recessional velocity. The inevitable consequence of those assumptions is an Expanding Universe model – the BB.

The first assumption has been falsified by known physics. The speed of light has a finite maximum such that it requires light from the most distant objects we currently observe more than 10 billion years to reach us. It therefore follows that we have no knowledge of the simultaneous state of the Universe because it is physically impossible to have such knowledge. It is a metaphysical conceit, not a demonstrable scientific fact, that such a Universe even exists.

The second assumption fails because it depends on the first being true. It is meaningless to attribute to something that does not exist (the Universe) a property (expansion) that cannot possibly be observed.

As was the case in Kepler’s time, the only solution to the dogmatic inertia that cripples modern cosmology is to discard the foundational assumptions of the standard model and start over, by rigorously assessing the data on hand without the distorting blinders of the old model.

Over the last century there has been an enormous increase in our knowledge of the Cosmos but due to the dogmatic inertia of the scientific academy all of that new knowledge has not generated any new ideas because it has all been run through the meat-grinder of the standard model. The result has been a dogmatic and inert BB model that describes in excruciating detail a Universe that does not exist and bears no resemblance to the Cosmos we actually observe.

Kepler had an advantage that modern researchers do not – he was not dependent on the dogmatic and inert modern scientific academy for funding. Modern cosmology will remain an unscientific digression into prescientific orthodoxy until its research funding is driven by physics researchers investigating physical reality by observation and measurement.

Theoretical modelers in such a system would be required to produce models that reflect the physical phenomena uncovered by physics researchers. The math must follow the physics if cosmology is to be a science.

The failed Big Bang model needs to be consigned to the dust bin of history where it can serve as an object lesson in how not to do science.

Mathematicist Follies & the DWRT

Here are some choice tidbits from a recent Tim Anderson article titled Zero-point energy may not exist. I’m always supportive of any effort to drag theoretical physics back into contact with empirical reality so the suggestion that ZPE may not exist is at least promising. It even suggests the possibility that modern theoretical physics might emerge from its self-imposed exile in Plato’s cave, the cave-of-choice in this case being mathematicism.

On reading the article, however, any hope of a scientific restoration is dashed, as one is quickly immersed in a sea of mathematicist illogic. Here for instance is the “reasoning” that underlies the ZPE concept:

…it means that nothing has non-zero energy and because there is an infinite amount of nothing there must be an infinite amount of energy.

While it is clear that the author is distancing himself from the ZPE concept, that account of the underlying “reasoning” gives rise to the simple question, how did such flamboyantly illogical nonsense gain any traction in the scientific community? The answer of course is mathematicism which is itself a flagrantly illogical proposition, just not recognized as such by the denizens of the mathematicist cave. Then there is this little gem (emphasis added):

Quantum field theory, which is the best theory of how matter works in the universe that we have, suggests that all matter particles are excitations of fields. The fields permeate the universe and matter interacts with those fields in various ways. That is all well and good of course. We are not questioning that these fields exist. The question is whether a field in a ground state has any measureable effect on matter.

So in their “best theory” the universe is permeated by many fields and matter is an excitation of those fields. Physical reality, however, contains only one observed field, and that is the electromagnetic field, which is the aggregate of all the electromagnetic radiation that permeates the Cosmos. That radiation is constantly being emitted by all the luminous matter that also permeates the Cosmos. There are no additional observed fields as described by QFT.

Despite the fact that the QFT fields are not observed the author does not wish to question their existence. Why? Mathematicism, of course. If a math model says something is there and physical reality declines to offer any evidence in support of such a conjecture, the mathematicist position is that the math is correct and reality is perversely withholding the evidence.

Imagine we have a sensitive Hawking radiation detector orbiting a black hole. The detector is in a state of free fall, meaning that it experiences no gravitational forces on it.

This last bit invokes a wholly imaginary thought experiment involving imaginary radiation emitted by an imaginary black hole. Without any empirical basis or logical connection to known physics, it has no scientific significance even if the “experiment” somehow reflects badly on the ZPE concept. In that case, it amounts only to an illogical argument refuting an illogical concept.

The second sentence also presents a widely promulgated claim that has no basis in physics. The idea that an observer or detector in free fall experiences no gravitational forces on it is purely unphysical nonsense. An observer or detector can only be in a state of free fall if they are experiencing a gravitational force. The typical basis for this claim is that the observer is prohibited from making observations that would clearly show the presence of a gravitating body and thus demonstrate the presence of a gravitational field.

Einstein is usually credited with this view but in fact his conception of the equivalence principle was highly constrained and did not extend to fundamentally illogical claims like the one made above. The version of the equivalence principle Einstein employed is now called the Weak Equivalence Principle.

The two extensions of the equivalence principle contrived and adopted after Einstein’s death, the disingenuously named Einstein EP (he had nothing to do with it) and the Strong EP, have no logical, scientific or theoretical justification. They were merely conjectures of mathematical convenience proposed by the physicist Robert H. Dicke, who along with his colleague John A. Wheeler, concocted a distorted variant of Einstein’s Relativity Theory. That variant is presented today as Einstein’s RT but it is a separate theory and should have its own name – Dicke-Wheeler Relativity Theory.

It is in DWRT that you will find the EEP and SEP as well as a reified version of spacetime which is said to causally interact with matter and energy causing the gravitational effect and facilitating the Expansion of the Universe. There is no empirical evidence supporting those ad hoc additions to ERT. They are simply mathematicist conjectures that have no scientific basis or logical connection to physical reality. In modern theoretical physics though, they are treated as axioms — true by definition. 

Mathematicism is the principal driver of the Crisis in Physics. The reason for this is simple: math is not physics. The controlling paradigm in modern theoretical physics, however, is that math is the basis of physics and mathematical models determine the nature of physical reality. That paradigm is a philosophical belief that has no scientific basis.

As a consequence of mathematicism, theoretical physicists espouse two standard models that describe a physical reality containing a large number of entities and events that are not part of the physical reality we actually observe. Modern theoretical physics does not constitute a science so much as a cult of belief.

You have to believe in the expanding universe, in dark matter and dark energy, in quarks and gluons. You have to believe that the speed of light in a vacuum is a universal constant even though it cannot be measured as such and is not constant in General Relativity – at least according to Einstein.* You have to believe in these things because they are not demonstrably part of physical reality. Scientists don’t traffic in beliefs but mathematicists do, and so there is a Crisis in Physics.

 * The speed of light is a universal constant according to Dicke-Wheeler Relativity Theory.

Simultaneously published at Medium.

Denial Of The Deluded

The New York Times has a recent guest article entitled The Story of Our Universe May Be Starting to Unravel. It is in some ways good to see doubts about the Standard Model of Cosmology surfacing in the mainstream press. What the authors, an astrophysicist and a theoretical physicist, have on offer though is some weak tea and a healthy dose of the usual exculpatory circular reasoning.

The authors do point out some of the gaping holes in the SMoC’s account of the Cosmos:

  • “normal” matter — the stuff that makes up people and planets and everything else we can see — constitutes only about 4 percent of the universe. The rest is invisible stuff called dark matter and dark energy (roughly 27 percent and 68 percent).
  • Cosmic inflation is an example of yet another exotic adjustment made to the standard model. Devised in 1981 to resolve paradoxes arising from an older version of the Big Bang, the theory holds that the early universe expanded exponentially fast for a fraction of a second after the Big Bang

That’s a start I guess but then we get this absurd rationalization for simply accepting the invisible and entirely ad hoc components of the SMoC:

There is nothing inherently fishy about these features of the standard model. Scientists often discover good indirect evidence for things that we cannot see, such as the hyperdense singularities inside a black hole.

Let’s be clear here about this so-called “indirect evidence”; all of it essentially boils down to model-dependent inference. Which is to say, you cannot see any evidence for these invisible and/or impossible (singularities) things unless you peer through the distorting lenses of the simplistic mathematical models beloved of modern theoretical physicists. People who believe that mathematical models determine the nature of physical reality are not scientists; they are mathematicists, and they are deluded – they believe in things that, all the evidence says, are not there.

Not only are mathematicists not scientists, they are not good mathematicians either. If they were good at math and found that one of their models was discordant with physical observations, they would correct the math to reflect observations. What mathematicists do is correct reality to fit their math. That is where the dark sector (dark matter & dark energy) comes from – they added invisible stuff to reality to make it fit their broken model.

A mathematician did come up with a correction to Newtonian dynamics that had been inaccurately predicting the rotation curves of disk galaxies. Mordehai Milgrom developed MOND (Modified Newtonian Dynamics) in the 1980s and it was quite successful in predicting galactic disk dynamics.

Unfortunately the mathematicists had already off-loaded their problem onto reality by positing the existence of some unseen dark matter. All you have to know about the state of modern theoretical physics is that after 40 years of relentless searching and failure to discover any empirical evidence there remains a well-funded Dark Matter cottage industry, hard at work seeking evidence for the non-existent. This continuing search for that which is not there represents a betrayal of science.

It might appear that the authors here are not mathematicists given that they seem to be suggesting that the SMoC is not sacrosanct and needs to be reconsidered in its entirety:

We may be at a point where we need a radical departure from the standard model, one that may even require us to change how we think of the elemental components of the universe, possibly even the nature of space and time.

Sounds promising but alas, the reconsideration is not to be of the foundational assumptions of the model itself but only of certain peripheral aspects that rest on those assumptions, such as “…the assumption that scientific laws don’t change over time.” Or they suggest giving consideration to this loopy conjecture: “…every act of observation influences the future and even the past history of the universe.”

What the authors clearly do not wish to reconsider is the model’s underlying concept of an Expanding Universe. That assumption – and it is only an assumption of the model – was adopted 100 years ago at a time when it was still being debated whether the galaxies we observed were a part of, or separate from, the Milky Way. It was, in other words, an assumption made in ignorance of the nature and extent of the Cosmos as we now observe it. The authors treat the Expanding Universe concept as though it had been handed down on stone tablets by some God of Mathematicism:

A potent mix of hard-won data and rarefied abstract mathematical physics, the standard model of cosmology is rightfully understood as a triumph of human ingenuity. It has its origins in Edwin Hubble’s discovery in the 1920s that the universe was expanding — the first piece of evidence for the Big Bang. Then, in 1964, radio astronomers discovered the so-called Cosmic Microwave Background, the “fossil” radiation reaching us from shortly after the universe began expanding.

For the record, Edwin Hubble discovered a correlation between the redshift of light from a galaxy and its distance. That is all he discovered. It is an assumption of the model that the redshift is caused by some form of recessional velocity. It is also an assumption of the abstract mathematical physics known as the FLRW equations that the Cosmos is a unified, coherent, and simultaneously existing entity that has a homogeneous and isotropic matter-energy distribution. Both of those assumptions have been falsified by observations and by known physics.

Also for the record, it should be noted that prior to the discovery of the Cosmic Microwave Background Radiation, predictions by Big Bang cosmologists ranged over an order of magnitude that did not encompass the observed 2.7 K value. At the same time, scientists using thermodynamic considerations made more accurate predictions.

The belief in an Expanding Universe has no scientific basis. It is a mathematicist fantasy, and until that belief is set aside, the Standard Model of Cosmology will remain a crappy, deluded fairy tale that does not in any objective way resemble the magnificent Cosmos we observe.

Spherical Wavefronts & Cosmological Reality

Expanding Spherical Wavefronts are standard physics:

Credit: Gong Gu, https://fr.slideserve.com/oster/ece341-electromagnetic-fields-powerpoint-ppt-presentation

The Expanding Spherical Wavefronts depicted above are physical entities. They illustrate the behavior of light as emitted omnidirectionally by a “point source” emitter. Point source, as always in physics, is a mathematical term of art, there being no physical meaning to the mathematical concept of a dimensionless “point”. A source can be treated mathematically as point-like, however, for an observer distant enough that the emitting source’s dimensions are small relative to the radial distance to the observer.

In the foregoing sense, a typical galaxy can be treated as a point source at large ( >100 million lightyears) distance. The nested shells of the model can be considered as representing either successive positions of a single wavefront over time or an instantaneous representation of continuously emitted successive wavefronts from a typical, omnidirectional emitter such as a galaxy.
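The point-source criterion above amounts to a small-angle check; an illustrative sketch, where the galaxy dimensions are my round numbers:

```python
# Small-angle sketch of the point-source approximation: a source can be
# idealized as point-like when its diameter is small relative to the
# radial distance to the observer.

def angular_extent(diameter, distance):
    """Approximate angular size in radians (small-angle approximation)."""
    return diameter / distance

# A ~100,000 ly galactic disk viewed from 100 million ly away:
print(angular_extent(1e5, 1e8))   # 0.001 rad
```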

This nested shell model can also be invoked to illustrate a ubiquitous inverse electromagnetic phenomenon. Replacing the emitter with an observer, the nested shells can be seen as representing notional spheres existing at various, arbitrary radial distances from the observer. At any given radial distance of cosmological significance from the observer, the notional shell will have on it some number of galaxies. Such a notional shell also defines a notional volume, which must contain some number of galaxies. This geometrical situation is only relevant to the observer; it is not, as in the ESW case, a physical entity.

Elaborating a bit, we can define a cosmological radial unit of r = 100 million lightyears. That radial unit then defines a sphere with a surface area proportional to r² and a volume proportional to r³. For illustrative purposes we make a simplifying (and unrealistically low) assumption that any r³ unit volume contains on average 1000 galaxies.

Observers whose observational range extends out 1 radial unit will observe their “universe” to contain 1000 galaxies. If those same observers improve their technology so that their range of observation extends to 2 radial units they will find themselves in a “universe” that contains 8000 galaxies. If their range doubles again to 4r their “universe” will now contain 64,000 galaxies.

Every time the observational range doubles, the total number of galaxies contained in the newly expanded “universe” will increase by a factor of 8, or 2³. Of that 8-fold increase, 7/8 of the total number of galaxies will lie in the newly observable portion of the total volume. This all follows from straightforward geometrical considerations.
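The scaling argument above can be checked in a few lines of Python. The density of 1000 galaxies per unit volume is the article's own illustrative assumption; nothing here depends on its actual value.

```python
# Galaxy counts inside spheres of radius r (r in units of 100 Mly),
# using the article's illustrative density of 1000 galaxies per unit volume.
DENSITY = 1000  # galaxies per r^3 unit volume (illustrative assumption)

def galaxies_within(r):
    """Total galaxies inside a sphere of radius r (count scales as r^3)."""
    return DENSITY * r**3

for r in (1, 2, 4, 8):
    print(f"range {r}r: {galaxies_within(r)} galaxies")

# Each doubling of range multiplies the count by 2**3 = 8, and 7/8 of
# the enlarged total lies in the newly visible outer shell:
new_fraction = (galaxies_within(2) - galaxies_within(1)) / galaxies_within(2)
print(new_fraction)  # 0.875
```

The same function reproduces the ESW encounter counts quoted later (64,000 at 4r, 512,000 at 8r), since the geometry is identical whether the sphere is centered on an observer or an emitter.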

Now let us return to the shell model centered on an omnidirectional emitter. The same geometric considerations apply here but this time with respect to an Expanding Spherical Wavefront. At 1r a wavefront will have encountered 1000 galaxies, at 4r, 64,000 galaxies and at 8r it will have encountered a total of 512,000 galaxies. As mentioned earlier, these numbers may be unrepresentative of the actual number of galaxies encountered, which could be considerably higher.

When an ESW encounters a galaxy, some portion of that wavefront is absorbed by the galaxy, representing a loss of energy by the wavefront and a corresponding gain of energy by the galaxy. This leads to two further considerations, the first related to direct observations. Assuming a constant average galactic density, an ESW will lose energy as it expands in proportion to its volumetric expansion. The energy loss will be insignificant for each individual galactic encounter, but the aggregate loss will grow as r³ at large radial distances. An increasing loss of energy with distance is an observed fact (cosmological redshift) for the light from galaxies at large (>100 Mly) cosmological distances.

The second consideration is that in some finite time period relative to the emitter, all of an ESW’s energy will be absorbed by intervening galaxies (and any other non-luminous baryonic matter). The cosmological range of an ESW is inherently limited – by standard physical considerations. In a sense, there is a notional cosmic wall, relative to the emitter, beyond which its ESWs cannot penetrate.
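A toy model makes the “cosmic wall” quantitative. Suppose, purely for illustration, that each galactic encounter absorbs a tiny fixed fraction of the wavefront’s remaining energy; the surviving fraction after N cumulative encounters is then (1 − ε)ᴺ, and since N grows as r³, the surviving energy collapses steeply beyond some radius. Both numerical values below (the density and ε) are hypothetical placeholders, not figures from the article.

```python
# Toy model: surviving ESW energy fraction vs radius, assuming each of the
# N ~ DENSITY * r^3 galactic encounters absorbs a fixed fraction EPS of
# the remaining energy. Both constants are illustrative assumptions.
DENSITY = 1000   # galaxies per unit volume (r in units of 100 Mly)
EPS = 1e-7       # fraction of remaining energy absorbed per encounter

def surviving_fraction(r):
    """Fraction of the original wavefront energy remaining at radius r."""
    encounters = DENSITY * r**3
    return (1 - EPS) ** encounters

def wall_radius(threshold=1e-6):
    """Smallest integer radius at which the surviving fraction drops
    below `threshold` – a notional 'cosmic wall' for this toy model."""
    r = 1
    while surviving_fraction(r) >= threshold:
        r += 1
    return r

for r in (1, 10, 50):
    print(f"r = {r}: surviving fraction {surviving_fraction(r):.3g}")
print("wall at r ≈", wall_radius())
```

Because the exponent scales as r³, the surviving fraction stays near 1 for small radii and then falls off extremely sharply, which is the qualitative behavior the “wall” argument requires regardless of the particular constants chosen.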

Reverting once again to the observer’s point of view, it follows that the observers cannot receive electromagnetic signals from sources that have reached the limits of their range – the cosmic wall discussed in the previous paragraph. It also follows directly that the observer is surrounded by a notional cosmic wall, relative only to the observer, beyond which more distant emitters cannot be observed. This wall has no physical significance except from the observer’s local Point-Of-View – it is the aggregate of all the ESW notional walls that surround the observer.

That notional wall is real however in the sense that it defines the limits of any observer’s observational range, just as the wall encountered by an ESW limits the range of its expansion. In both cases we are dealing with a relativistic POV. The ESW just before it reaches its wall encompasses an enormous cosmological volume relative to its source emitter’s location. Observers, just before encountering their notional wall, observe an enormous cosmological volume relative to their three dimensional locale.

Keeping in mind the earlier discussion of the spherical geometry of ESWs, it is interesting to consider that immediately in front of an observer’s notional wall there lies a vast volume of some nominal thickness containing galactic emitters that are still observable. The number of those emitters has increased as R³, while their cosmological redshift has also increased as R³, where R is the observer’s radial distance from the remote sources. Beyond that radial distance lies the notional wall at which all observations cease. In that context the observer’s wall can be thought of as a concave, spherical shell against which all foreground observations are projected. Because of the geometrical considerations mentioned, we should expect the most distant visible galaxies to cover the notional, concave, spherical surface of the observer’s cosmological view.
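The claim that the most distant visible sources dominate the view follows from the same uniform-density geometry and can be made quantitative: at uniform density, the fraction of all galaxies within range R that sit in the outermost shell depends only on the shell’s relative thickness. The particular thickness fractions below are illustrative choices, not values from the article.

```python
# Fraction of the galaxies within range R that lie in the outermost shell
# of relative thickness f (i.e., between radius (1-f)*R and R). At uniform
# density this is pure geometry and independent of R and of the density.
def outer_shell_fraction(f):
    """Volume fraction (hence galaxy fraction, at uniform density)
    contained in the outermost shell of relative thickness f."""
    return 1 - (1 - f)**3

print(outer_shell_fraction(0.1))  # outermost 10% of the radius
print(outer_shell_fraction(0.5))  # outer half of the radius
```

Even the outermost tenth of the radial range holds over a quarter of all visible galaxies, and the outer half holds 7/8 of them, consistent with the expectation that the most distant galaxies blanket the concave “surface” of the observer’s view.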

What we arrive at then is a picture much like that proposed by Olbers’ Paradox, the only difference being that Olbers did not account for the energy loss of light, so he expected that the night sky should be uniformly bright in the visible spectrum. What we observe, of course, is a night sky uniformly bright in the microwave spectrum.

The existence of the Cosmic Microwave Background is consistent with the galaxy distribution and energy loss to be expected by using the Expanding Spherical Wavefront framework of standard physics to model the Cosmos we observe. The only assumptions necessary to achieve this result are of an average galactic density on cosmological scales and that the field of galaxies extends sufficiently far for the geometrical arguments to hold.

The Self-Deluded Nature of Modern Cosmology

In a discussion over at ACG, Louis Marmet recently posted this 2022 paper by the cosmologist James Peebles. It is, in essence, another apologia for the current mess in theoretical cosmology offered by one of the prominent purveyors of that mess. The paper is a hot cauldron of disingenuous argumentation based on the usual mathematicist predilection for circular logic that always begins with the premise that the standard model is correct only to arrive at the same conclusion.

Rather than sort through all of the disingenuous arguments presented, I want to focus on a peculiarly blatant factual misrepresentation repeated numerous times throughout the paper. It is this falsehood that the paper’s strained defense of ΛCDM (against the barrage of anomalies besetting the model) relies on:

To reduce the chance of misunderstanding I emphasize that the empirical case that the ΛCDM theory is a good approximation to reality remains compelling. (…)

… the tests have persuasively established that the ΛCDM theory is a good approximation to reality that likely requires refinement. (…)

… the ΛCDM universe has been shown to look a lot like what is observed. (…)

… we have a compelling case that the ΛCDM theory is a useful approximation to reality… (…)

… many well-checked tests show that the ΛCDM universe looks much (like) our universe.

Apparently the strategy is the old one: repeat the lie often enough and somebody might believe it. The facts of the matter are incontrovertible though. There is no empirical evidence supporting the existence of any of the ΛCDM model’s defining features. The Cosmos we directly observe does not contain any of the following elements:

  • A singularity
  • A Big Bang event
  • An Inflation event
  • Expanding spacetime
  • Dark Matter
  • Dark Energy

Taken together, those are the defining elements of the ΛCDM model. None of them appear in the Cosmos we observe. There is not even a faint family resemblance between the ΛCDM model and the Cosmos we observe. Any claim to the contrary is simply a falsehood.

So how do cosmologists like Peebles wind up convincing themselves that their model universe looks like reality? Obviously empiricism has nothing to do with the matter. It is solely a matter of belief. Modern cosmology is simply a cult of belief. The belief system can be reduced to the following propositions:

  • Mathematics underlies and determines the nature of physical reality. (Mathematicism)
  • The assumptions of the FLRW model that underlies ΛCDM are axiomatically true and therefore the expanding universe of ΛCDM is axiomatically true.
  • Any physical elements that ΛCDM requires physical reality to possess in order to reconcile the model with observations must exist because the model is correct.

To be fair to Peebles here, he does admit that “... the extreme simplicity of the dark sector of the standard ΛCDM cosmology seems unlikely to be better than a crude approximation to reality…“, but that’s a pretty tepid comment considering the “dark sector” (dark matter & dark energy) of the model constitutes 95% of the model’s matter-energy content, while comprising 0% of empirical reality’s matter-energy content. It’s like saying Ptolemy’s epicycles are a crude approximation of physical reality. They are, but that is beside the point.

Both crude approximations are necessary because the underlying models (geocentrism, the expanding universe) rest on foundational assumptions that are fundamentally wrong, making them inaccurate representations of physical reality.

The standard model of cosmology is a crude approximation of the Cosmos in the same way that Ptolemy’s geocentric cosmology was. The Cosmos is not an “expanding universe” and the Earth is not at its center. It does not matter that cosmologists choose to believe the former but reject the latter. There is no direct empirical evidence to support their belief and the model based on it is palpably nonsensical, being entirely composed of elements (both entities and events) that do not exist in physical reality.

Science is not supposed to be the study of belief systems, it is by definition restricted to the study of physical reality. Modern cosmology as currently practiced is a belief system, not a science. Physical reality bears no resemblance to the standard model of cosmology and vice versa. ΛCDM is an abject scientific failure and it desperately needs to be relegated to the dust bin of history if cosmology is to ever become a real science rather than a playground for self-deluded mathematicists.

Light Cone Confusion In The Here And Now

The light cone graphic below is taken from a Wiki article. The discussion therein gets the basics right, at least with regards to where the concept of a light cone comes from and the dimensional issues with the illustration.

… a light cone (or “null cone”) is the path that a flash of light, emanating from a single event (localized to a single point in space and a single moment in time) and traveling in all directions, would take through spacetime. (…)

In reality, there are three space dimensions, so the light would actually form an expanding or contracting sphere in three-dimensional (3D) space rather than a circle in 2D, and the light cone would actually be a four-dimensional version of a cone whose cross-sections form 3D spheres (analogous to a normal three-dimensional cone whose cross-sections form 2D circles)

https://en.wikipedia.org/wiki/Light_cone

That’s fine as far as it goes – with two caveats. First of all, the spacetime term should be understood as referring to a relational concept of space and time, not to Wheeler’s causally interacting spacetime. Secondly, contracting spheres of light do not exist in physical reality. Much of the rest of the article is gibberish, well encapsulated by the labeling of the illustration, which basically renders the image incoherent.

Observer should be labeled Galaxy. A galaxy (TSV) on cosmological scales emits ESWs of light and absorbs ElectroMagnetic Radiation from all remote sources that can reach it.

Future LC = Diverging Light Cone – essentially a projection of an ESW emitted by a galaxy.

Past LC = Converging Light Cone – the aggregate of all the incoming EMR that can reach a galaxy from remote sources.

Hypersurface Of The Present is an imaginary mathematical construct that does not exist in physical reality.

For starters, the image has an observer at the shared apex of the two cones but an observer is not mentioned in the text of the Wiki article. In terms of physical reality an observer is at the apex of a “past light cone” – the observer observes light emitted from distant sources, usually omnidirectional emitters like stars and galaxies.

The “past light cone” is the aggregate of all the inbound radiation from those distant sources onto the observer. Rather than calling it a “past light cone” it would be more accurate to label it a Converging Light Cone, with the understanding that the light cone is a relative, point-of-view phenomenon that has no physical relevance except with respect to the observer.

The “future light cone” does not have an observer at its apex; it has an omnidirectional emitter such as a star or galaxy there. The “future light cone” is an aggregate of the successive expanding spherical wavefronts of electromagnetic radiation emitted by the emitter. The “future light cone” should be more accurately labeled a Diverging Light Cone. The DLC is a physical entity, consisting of sequentially emitted expanding spherical wavefronts of electromagnetic radiation. That understanding flows from Maxwell and Einstein – it is standard physics.

Borrowing from radio terminology the emitter/observer can be thought of as a transmitter/receiver or transceiver (TSV). The term transceiver will also be used for an observer-only by considering a non-transmitting observer (such as a human) to be a subcomponent of a transceiver such as a star or galaxy system. With respect to the space and time (relational) labels of the illustration, the apex can be labeled “Here and Now”. So the apex represents the HAN of a TSV.

The rest of the labeling is adequate, with the caveat about the relational nature of space and time understood. What the illustration then presents us with is a stark refutation of the modern conception of the Cosmos as a simultaneously existing Universe. The TSV (galaxy) is always and only at some unique spatio-temporal location.

The TSV is at the center of the omnidirectionally expanding spherical wavefronts of electromagnetic radiation that it emits – the Diverging Light Cone. A TSV is also at the center of all the electromagnetic radiation that is arriving at its particular place and time from all directions – the Converging Light Cone.

The following statement applies to every possible TSV – everywhere and everywhen. Every TSV is at the center of its own unique “universe” which is just its own unique view of a Cosmos that cannot be simultaneously accessed from any three dimensional HAN.

No TSV can detect the state of a remote TSV that is simultaneous with its own HAN. The finite speed of light prohibits any and all such knowledge. The nearest galaxy to our own, Andromeda, is 2.5 million lightyears distant. We see it in our frame as it existed 2.5 million years ago. We do not have and cannot have any knowledge of its “current” state. Andromeda’s “current” state is not part of the Cosmos we have access to. Andromeda’s HAN does not exist in our unique cosmological frame – Andromeda is always There and Then (TAT) in our cosmological frame.

The two dimensional projection labeled the Hypersurface Of The Present illustrates this clearly. The HAN of any TSV is always and only a local state. All other spatio-temporal locations lie outward – TAT- along the surface of the Converging Light Cone. No TSV has access to the HOTP and in fact the HOTP is only a mathematical/metaphysical construct that has no physical correlate. The HOTP does not exist in physical reality because it represents a universal simultaneity which cannot exist because lightspeed has a finite maximum. There is no physical meaning to the concept of a “universal now” – that is the reason there is no universal frame or “now” in General Relativity.

The apex point represents the only HAN available to any TSV. All remote objects exist only in the transceiver’s past – on the TAT of the Converging Light Cone.

Unfortunately, modern cosmologists are of the opinion that they do have knowledge of this simultaneous something (the HOTP) that does not have any existence in physical reality. That is what the term Universe refers to as employed by cosmologists. They believe themselves to be in possession of knowledge of this imaginary, simultaneously existing Universe that, by the known laws of physics, cannot exist. That 13.8 billion year old entity does not exist by normal scientific standards – it is not an observable.

What modern cosmologists have, of course, is just a mathematical model based on some simplifying assumptions adopted about 100 years ago, at a time when the known Cosmos barely extended beyond our own galaxy. One of the model’s assumptions is that the Cosmos has a “universal” spacetime frame (the FLRW metric) even though, in the context of General Relativity, no universal frame exists. A universal spacetime metric inherently includes a universal time with a universal now. Despite the incongruency, the FLRW metric was applied to the GR field equations. The result of this misbegotten effort speaks for itself:

The Standard Model of Cosmology is a miserable failure; it describes a Universe that looks nothing like the Cosmos we observe. To the extent that it can be said to agree with actual observations, it only arrives at such agreements by insisting that physical reality contains entities and events that physical reality, by all direct empirical evidence, does not appear to contain.

The SMC is junk science or, perhaps more accurately, it is a mathematicist confabulation presented as science by people who don’t understand basic physics – that the speed of light in the Cosmos has a finite maximum of approximately 3×10⁸ meters/second. It’s not that they don’t know that fact, they do, but rather they don’t understand what it means in the context of the vast Cosmos we observe. They only know what the SMC tells them and that model, they believe, can’t be wrong because if it were smart people like them wouldn’t believe in it.

In fact though, we have no scientific reason to think that the limited view of the Cosmos we have provides us with knowledge of an unobservable, simultaneously-existing, and expanding Universe. The consensus belief of cosmologists that they have such knowledge can be attributed to the fever dream of mathematicism that deeply infects the theoretical physics community. Modern cosmology is a mess.

Science is not perfect. Mistakes are to be expected in science. The Standard Model of Cosmology is a mistake. The model’s foundational assumption of an “expanding universe” is a mistake. It is a mistake in the same way that geocentrism was a mistake. It is fundamentally wrong about the nature of the Cosmos. It is time to move on from the expanding universe model. I’ll give the last word to the astrophysicist Pavel Kroupa:

Thus, rather than discarding the standard cosmological model, our scientific establishment is digging itself ever deeper into the speculative fantasy realm, losing sight of and also grasp of reality in what appears to be a maelstrom of insanity.

https://iai.tv/articles/our-model-of-the-universe-has-been-falsified-auid-2393

10May24 Acknowledgement: My original concept for the apex of a light cone was that it should be labeled “Here”. In an exchange with the mathematician Robert A. Wilson he made the invaluable suggestion that the apex be called “Here and Now”.