
Gravitational Redshift & Expanding Spherical Wavefronts

An earlier post on Expanding Spherical Wavefronts made a qualitative argument that an ESW should sustain an energy loss as it expands through the Cosmos. A more recent post argued that gravitational effects are a consequence of matter/electromagnetic-radiation interactions. As a follow-up, this post offers a quantitative demonstration that the standard gravitational-redshift math can be applied to an ESW to generate a cosmological redshift correlated with distance:

Terms & Definitions:
z = (1 - rs/Resw)^(-1/2) - 1
rs = Schwarzschild radius = 2GM/c²
Res = successive radii of an Expanding Spherical Wavefront, in lightyears
Resw = the same radii expressed in meters
M = calculated mass of the sphere enclosed at each radius, assuming an average cosmological density of 1E-26 kg/m³. The average density is a free parameter of the model, but the results are highly sensitive to this value: a variation of ±10% produces outcomes that seem unrealistic.
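For concreteness, here is a minimal Python sketch of the calculation exactly as defined above. G, c and the density are as stated; the lightyear conversion of 9.44E+15 m is the one the table appears to use (the standard figure is 9.461E+15 m):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
LY = 9.44e15     # meters per lightyear, as the table appears to use it
                 # (the standard value is 9.461e15 m)
RHO = 1e-26      # assumed average cosmological density, kg/m^3

def esw_row(res_ly):
    """Compute one table row from an ESW radius given in lightyears."""
    resw = res_ly * LY                          # radius in meters
    m = (4.0 / 3.0) * math.pi * resw**3 * RHO   # mass enclosed at the assumed density
    rs = 2.0 * G * m / c**2                     # Schwarzschild radius for that mass
    z = (1.0 - rs / resw) ** -0.5 - 1.0         # gravitational-redshift formula
    return resw, rs, m, z

for res in (7.5e7, 1.0e9, 1.0e10, 1.32e10):
    resw, rs, m, z = esw_row(res)
    print(f"Res={res:.2E} ly  Resw={resw:.2E} m  rs={rs:.2E} m  M={m:.2E} kg  z={z:.2E}")
```

The output matches the table to within a percent or so of rounding error; near the asymptote the redshift is extremely sensitive to the retained digits, so the last few rows will diverge somewhat from the rounded values shown.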

All in all, this is nothing more than a proof of concept – but it is an intriguing one. Two pieces of standard physics can be put together to produce a cosmological redshift-distance relation similar to the one presented by the standard model. The three redshift-distance graphs are taken from the table and illustrate the scale differences over its range. A short discussion follows the third graph.

Row | Res (ly)    | Resw (m) | rs (m)   | M (kg)   | z
1   | 7.50E+07    | 7.08E+23 | 2.22E+19 | 1.48E+46 | 1.57E-05
2   | 1.00E+08    | 9.44E+23 | 5.25E+19 | 3.52E+46 | 2.78E-05
3   | 2.50E+08    | 2.36E+24 | 8.20E+20 | 5.50E+47 | 1.74E-04
4   | 5.00E+08    | 4.72E+24 | 6.56E+21 | 4.40E+48 | 6.96E-04
5   | 1.00E+09    | 9.44E+24 | 5.25E+22 | 3.52E+49 | 2.79E-03
6   | 1.50E+09    | 1.42E+25 | 1.77E+23 | 1.19E+50 | 6.32E-03
7   | 2.00E+09    | 1.89E+25 | 4.20E+23 | 2.82E+50 | 1.13E-02
8   | 2.50E+09    | 2.36E+25 | 8.20E+23 | 5.50E+50 | 1.79E-02
9   | 3.00E+09    | 2.83E+25 | 1.42E+24 | 9.50E+50 | 2.60E-02
10  | 3.50E+09    | 3.30E+25 | 2.25E+24 | 1.51E+51 | 3.59E-02
11  | 4.00E+09    | 3.77E+25 | 3.36E+24 | 2.25E+51 | 4.77E-02
12  | 4.50E+09    | 4.25E+25 | 4.78E+24 | 3.21E+51 | 6.16E-02
13  | 5.00E+09    | 4.72E+25 | 6.56E+24 | 4.40E+51 | 7.78E-02
14  | 5.50E+09    | 5.19E+25 | 8.74E+24 | 5.85E+51 | 9.65E-02
15  | 6.00E+09    | 5.66E+25 | 1.13E+25 | 7.60E+51 | 1.18E-01
16  | 6.50E+09    | 6.13E+25 | 1.44E+25 | 9.66E+51 | 1.43E-01
17  | 7.00E+09    | 6.61E+25 | 1.80E+25 | 1.21E+52 | 1.73E-01
18  | 8.00E+09    | 7.55E+25 | 2.69E+25 | 1.80E+52 | 2.46E-01
19  | 9.00E+09    | 8.49E+25 | 3.83E+25 | 2.57E+52 | 3.49E-01
20  | 1.00E+10    | 9.44E+25 | 5.25E+25 | 3.52E+52 | 5.02E-01
21  | 1.10E+10    | 1.04E+26 | 6.99E+25 | 4.68E+52 | 7.50E-01
22  | 1.20E+10    | 1.13E+26 | 9.07E+25 | 6.08E+52 | 1.24E+00
23  | 1.30E+10    | 1.23E+26 | 1.15E+26 | 7.73E+52 | 3.10E+00
24  | 1.31E+10    | 1.24E+26 | 1.18E+26 | 7.91E+52 | 3.71E+00
25  | 1.32E+10    | 1.25E+26 | 1.21E+26 | 8.09E+52 | 4.74E+00
26  | 1.33E+10    | 1.25E+26 | 1.24E+26 | 8.28E+52 | 7.00E+00
27  | 1.3350E+10  | 1.26E+26 | 1.25E+26 | 8.37E+52 | 1.00E+01
28  | 1.3400E+10  | 1.26E+26 | 1.26E+26 | 8.47E+52 | 3.49E+01
29  | 1.3405E+10  | 1.26E+26 | 1.26E+26 | 8.48E+52 | 1.75E+02
30  | 1.34051E+10 | 1.26E+26 | 1.26E+26 | 8.48E+52 | 2.39E+02
31  | 1.34052E+10 | 1.26E+26 | 1.26E+26 | 8.48E+52 | 6.48E+02
Graph 1: rows 1-6 of the table.
Graph 2: rows 1-26.
Graph 3: rows 1-27.

Discussion

There are several interesting aspects to these results. The initial radius of 75 million lightyears is arbitrary; it lies just inside the 100 Mly radius that can be considered to encompass the “local” Cosmos. The final radius is in the asymptotic range imposed by the math.

What is happening in the math is that the Schwarzschild radius (rs) is catching up with the Expanding Spherical Wavefront being modeled. The rs increases in proportion to the enclosed mass, which grows as Resw³, while the wavefront’s radius grows only linearly, so the ratio rs/Resw grows as Resw². This is the inverse of what happens (according to the Schwarzschild solution of GR) in the case of a mass undergoing gravitational collapse. In that situation the collapsing body converges inward toward the rs while the mass remains constant.
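Substituting the definitions makes the catch-up explicit. With M = (4/3)πρ·Resw³ and rs = 2GM/c²:

rs/Resw = (8πGρ/3c²)·Resw²

The ratio reaches 1, and z diverges, when Resw = (3c²/(8πGρ))^(1/2) ≈ 1.27E+26 m, or roughly 13.4 billion lightyears for ρ = 1E-26 kg/m³ – just where the redshift column of the table blows up.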

It should be noted that the rs is an artefact of the model; it is a coordinate singularity and not physically significant. Its appearance indicates that the model has broken down at that point, having produced a division-by-zero result. It can be argued that this is a consequence of the model not taking into account the variation in the speed of light in a gravitational gradient.

It is also striking that the redshift of the ESW model goes asymptotic at approximately the same cosmological distance (13.4 Gly) as the standard model’s redshift does (13.8 Gly). The difference is that in the ESW model the redshift is a consequence of the energy lost by the expanding spherical wavefront through its gradual absorption by intervening galaxies. In the standard model the energy loss is a consequence of a model-inferred but unobservable “universal expansion” – which leaves the lost energy physically unaccounted for.

One other point of note: in the ESW account of redshift, the cosmological conditions at the source of the redshifted light are assumed to be approximately the same as they are in our “local Cosmos”. In the standard model, of course, the cosmological conditions at the largest implied redshift distances are thought to be significantly different due to the nature of the evolving “expanding Universe” that the model assumes. The recent JWST observations contradict the standard model’s picture of an evolving “Universe”.

It is not polite to discuss DWRT

I recently commented on an astrophysicist’s blog regarding a claim I consider ahistorical and inaccurate. The comment was deleted. I don’t have a problem with that – a blog is a personal space. However, I was responding to a specific claim about the origin of General Relativity that is both common and false. What follows is the one-paragraph remark I was commenting on, and my now-deleted response, which admittedly ranges a bit further than simply refuting the quote requires:

The equivalence of inertial mass with gravitational charge is a foundational principle of general relativity. There are flavors of GR in which it falls out explicitly, e.g., Yilmaz’s gravity. But it is basically an assumption.

The equivalence of inertial and gravitational mass was an observed and measured fact known to Galileo and Newton. It was not an assumption of GR. The mid-20th century extensions of what is now called the Weak Equivalence Principle were little more than conjectures of mathematical convenience “suggested” by Robert H. Dicke. They had nothing to do with the development of GR.

Along with John A. Wheeler’s aphoristic, empirically baseless invocation of a causally interacting spacetime, Dicke’s two extensions of the WEP were surreptitiously hung on Einstein’s General Relativity, producing a grotesque variant that by rights should be known as Dicke-Wheeler Relativity Theory. It is DWRT that has been the officially taught version for the better part of 50 years, although the D-W distortions are almost always attributed to Einstein. He would have puked.

It is DWRT that prompts otherwise rational people to insist that, despite theoretical and empirical evidence to the contrary, the speed of light is some sort of universal constant. It is DWRT that promotes the false claim that Einstein explained gravity as being caused by the curvature of space.

As far as space itself goes, it is a relational concept exactly like distance. That is all the evidence supports. In fact, space is best understood as the aggregate of all distances that separate you from all the other things in the Cosmos that aren’t you. Substantival space is a mathematicist fiction that has no scientific basis.

Throughout the Cosmos, everywhere within our observational range where there is no matter there is only electromagnetic radiation. That is an empirical fact. Everywhere people imagine they see space there is electromagnetic radiation. At any given 3D location that radiation is flowing omnidirectionally from all the omnidirectional emitters (stars and galaxies) within range. That is what we observe and that is how we observe.

As Mach surmised, we are connected to the rest of the Cosmos, or at least to those objects within range. That non-simultaneous connection is via electromagnetic radiation – that is what’s there. Until recently no one had bothered to do a full survey of what might be called the Ambient Cosmic Electromagnetic Radiation. The authors of this interesting paper seem to think they are the first. Everybody else was too busy looking for some dark stuff, apparently.

Modern theoretical physics is all theory and no physics; it consists of nothing but the unrelenting and fruitless effort to prop up two inert, century-old models whose assumptions of mathematical convenience were lapped by physical reality decades ago. Tinkering with obsolete mathematical models does not constitute a scientific endeavor, even if that is all that has been taught for the last 40 years.

Inertia, Gravity, & Electromagnetic Radiation

This very interesting paper originally caught my attention because it demonstrates that Einstein rejected what the paper calls the “geometrization” of gravity, and did so throughout his career, not just at the end of his life. On a recent rereading I was struck by something else which is easy to forget – the subtlety of Einstein’s thought.

The geometrization of gravity is an awkward term because it elides the central problem which is the reification of spacetime. It is well known that Einstein’s Relativity Theory employs the geometric math of Gauss-Riemann. What is at issue is whether that geometric math refers to a physical spacetime that causally interacts with matter and energy (electromagnetic radiation). Many argued that it did while Einstein rejected that interpretation as unphysical and uninformative.

Beyond the issue of what Einstein did not believe, the paper illuminates a seldom discussed subject – what Einstein did believe Relativity Theory accomplished: the unification of gravity and inertia. This unification is not found in the famous gravitational equation of General Relativity but in the lesser known geodesic equation. From the paper:

We found that (i) Einstein thought of the geodesic equation in GR as a generalisation of the law of inertia; (ii) in which inertia and gravity were unified so that (iii) the very labeling of terms as ‘inertial’ and ‘gravitational’ respectively, becomes in principle “unnecessary”, even if useful when comparing GR to Newtonian theory.

While it is well understood that the Equivalence Principle* played a role in Einstein’s thought process while developing GR, the importance of the geodesic equation as a formal realization of the EP is certainly not widely acknowledged, as far as I am aware. The implications of that unification are profound.

One of the peculiarities of the modern theoretical physics community is its apparent lack of interest in determining the physical cause of the gravitational effect. The reason for this indifference is a philosophical attitude known as instrumentalism: if some math describes an observed outcome adequately, then a causal explanation is superfluous. Instrumentalism is a variant of the scientifically baseless philosophical belief called mathematicism.

The purpose of science is to investigate the nature of physical reality, not to sit around fiddling with poorly constructed mathematical models of physical reality that do not remotely make sense when you ask of the model: what does the math mean with regard to the physical system being modeled? The answer that typically comes back is a peculiar kind of bullshit that can be thought of as Big Science Babble.

Superposition Of States is Exhibit A of BSB on the quantum scale. Almost all quantum scale babble winds up at SOS or at Wave-Particle Duality. SOS tells us that an electron is never at a particular position until it is observed.

When an electron is detected it is always, not surprisingly, at a particular location, but according to the mathematicists, at all other times, when not being observed, the electron is in a SOS – it is spread out over all of its possible locations as described by some math (the wavefunction). How do they know this? Because the math doesn’t know where the electron is, so it can’t be anywhere in particular. Sure, of course.

BSB is rife on the cosmological scale. According to the standard model of cosmology, the Cosmos is 95% made up of some invisible stuff, while the stuff we actually observe provides the remaining 5%. How do scientists know this invisible stuff is there? Because it has to be there to make the Big Bang model work, and everybody knows the BB model is correct because they said so in graduate school, so the invisible stuff has to be there, like it or not. Sure, of course.

At the root of all BSB, of course, is mathematicism. A mathematical model dictates an absurd story about physical reality which we are then supposed to believe without evidence because math determines the nature of physical reality. If mathematicists with pieces of paper saying they are scientists believe in BSB, shouldn’t you? No, you should not.

Physical reality is an afterthought to mathematicists for whom only math is of interest. That’s why no effort is being expended in the scientific academy to understand the physical cause of gravity; research funding is controlled by mathematicists. And since they already have some math that kind of works (as long as reality contains things it does not appear to contain), well that’s good enough – for mathematicists.

In real science, physical events and behaviors occur as a consequence of physical interactions. Those interactions can be matter/matter (collision), matter/radiation (emission, absorption, reflection), or radiation/radiation (interference) in nature. There is a good argument to be made that all observed gravitational and inertial effects arise as a consequence of matter/radiation interactions:

  1. By observation, everywhere in the Cosmos that there is no matter, there is electromagnetic radiation.
  2. Light traversing a gravitational field behaves as it does in a transparent medium with a density gradient. All approximately spherical gravitating bodies emit electromagnetic radiation omnidirectionally with a density gradient that falls off as 1/r².
  3. The gravitational effect surrounding a spherical gravitating body likewise falls off as 1/r² (a numeric illustration follows this list).
  4. The gravitational field then is just the Ambient Local Electromagnetic Radiation field surrounding a gravitating body.
  5. In the intergalactic regions, far from any significant gravitating bodies there is only the ubiquitous Ambient Cosmic Electromagnetic Radiation.
  6. The ACER is, to a good approximation, isotropic and this cosmically sourced electromagnetic field does not have a density gradient. It can be thought of as the inertial field.
  7. This unified physical account of gravity and inertia is consistent with Einstein’s mathematical description of a unified gravity and inertia in the geodesic equation.**
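As a numeric illustration of items 2 and 3, here is a minimal Python sketch of the 1/r² falloff of radiation energy density around an omnidirectional emitter, using the Sun’s standard luminosity as a stand-in; the specific numbers are purely illustrative:

```python
import math

L_SUN = 3.828e26   # solar luminosity, W (IAU nominal value)
C = 2.998e8        # speed of light, m/s
AU = 1.496e11      # astronomical unit, m

def radiation_energy_density(r):
    """Energy density (J/m^3) at distance r from an omnidirectional emitter."""
    return L_SUN / (4.0 * math.pi * r**2 * C)

for r_au in (1, 2, 4, 8):
    u = radiation_energy_density(r_au * AU)
    print(f"r = {r_au} AU: u = {u:.2e} J/m^3")
# Each doubling of r cuts the energy density by a factor of 4 -- the same
# 1/r^2 falloff as the gravitational effect described in item 3.
```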

_____________________________________________________________________________________________

*The Equivalence Principle Einstein employed has been known, since the mid-20th century, as the Weak Equivalence Principle, to distinguish it from later, dubious extensions added with little scientific justification after Einstein’s death in 1955.

**The foregoing does not constitute conclusive evidence that gravity is an effect of matter/electromagnetic-energy interactions – but it is evidence based on empirical observations. In contrast, there is no empirical evidence for the concept of gravity as a fundamental force.

Forces are themselves not things; they are effects. Of the four fundamental forces claimed by science to exist, only the electromagnetic force has any empirical basis. In fact, though, electromagnetic radiation is no more a “force” than a golf club is a “force”.

A golf club exerts a force on a golf ball by striking it, resulting in an acceleration of the ball. That applied force can be quantified using known mechanical laws. The term force describes that interaction, but force is just a descriptive term for the interaction between the golf club and the golf ball; it is not the golf club, nor is it a separate thing in itself. The same analysis applies to EMR; it is not a force, but it can exert a force when it strikes a physical object.

Photons are not particles

In 1900, the German physicist Max Planck was studying black-body radiation, and he suggested that the experimental observations, specifically at shorter wavelengths, would be explained if the energy stored within a molecule was a “discrete quantity composed of an integral number of finite equal parts”, which he called “energy elements”. In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localized, discrete wave-packets. He called such a wave-packet a light quantum.
https://en.wikipedia.org/wiki/Photon (20Jun24)

Photon energy is the energy carried by a single photon. The amount of energy is directly proportional to the photon’s electromagnetic frequency and thus, equivalently, is inversely proportional to the wavelength. The higher the photon’s frequency, the higher its energy. Equivalently, the longer the photon’s wavelength, the lower its energy… The photon energy at 1 Hz is equal to 6.62607015×10⁻³⁴ J
https://en.wikipedia.org/wiki/Photon_energy (20Jun24)

The SI units are defined in such a way that, when the Planck constant is expressed in SI units, it has the exact value h = 6.62607015×10⁻³⁴ J⋅Hz⁻¹.
https://en.wikipedia.org/wiki/Planck_constant (20Jun24)

The meaning of the foregoing should be clear. Despite the claims of particle physicists that the photon is a particle, it was in its original conception, and is in its current quantitative description, a wave phenomenon. A photon is not a particle like a proton or a billiard ball. A photon is never at rest with respect to any material body; it is always moving at the local speed of light with respect to all material bodies. A photon does not behave like a three-dimensional particle; it is a wave quantum.
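Taking the quoted relations at face value, the wave-quantum arithmetic (E = h·f = h·c/λ) is easy to check; a minimal sketch:

```python
H = 6.62607015e-34   # Planck constant, J/Hz (exact by SI definition)
C = 2.998e8          # speed of light, m/s

def wave_quantum_energy(wavelength_m):
    """Energy of a single wave quantum (photon) of the given wavelength."""
    frequency = C / wavelength_m   # Hz
    return H * frequency           # E = h*f, in joules

for nm in (400, 500, 700):
    print(f"{nm} nm -> {wave_quantum_energy(nm * 1e-9):.3e} J")
# Doubling the wavelength halves the energy: E is inversely proportional
# to the wavelength, exactly as the quoted passages state.
```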

A wave quantum is the smallest possible wave; it is a wave of one wavelength as defined above. The illustration below is of a wave consisting of two consecutive photons. This is a poor representation of the reality of electromagnetic radiation, which on astronomical and cosmological scales is emitted omnidirectionally by stars, galaxies and other luminous bodies. To a local observer, though, light arriving from a distance seems to consist of streams or rays of light. This image is of a subsection of such a stream.

Electromagnetic radiation does not consist of a stream of tiny particles simply because Max Planck was forced to treat the emission of light as having a discrete minimum. What the math describes is a single wavelength, which is the minimum for a wave of any frequency. Half waves and quarter waves etc. don’t exist.

That does not mean a wave with half the wavelength of a longer wave cannot exist, just that for any given frequency a single complete wave cycle of one wavelength defines the minimum wave energy. Converting this wave minimum into a “particle” was a category error, and it has a formal name: QED, or Quantum Electrodynamics.

In Richard Feynman’s 1985 book QED, based on four popular lectures he had given a few years earlier, he makes this rather odd case for the light-as-particle “theory”:

The strange phenomenon of partial reflection by two surfaces can be explained for intense light by a theory of waves, but the wave theory cannot explain how the detector makes equally loud clicks as the light gets dimmer. Quantum electrodynamics “resolves” this wave-particle duality by saying that light is made of particles (as Newton originally thought) but the price of this great advancement of science is a retreat by physics to the position of being able to calculate only the probability that a photon will hit a detector, without offering a good model of how it actually happens.

So, to summarize that last sentence: saying light is made of particles was a great advancement for science that represented a retreat by physics into incoherence. I can’t argue with that. It is also not clear why the particle theory is superior to the wave-quantum understanding of Einstein. Surely wave mechanics could have been modified to accommodate the fact that one wavelength is the minimum wave.

Instead, Feynman goes on to describe a strange system resembling vector addition, in which the direction of the “arrows” representing possible particle paths is set by an imaginary clock hand turning at the light’s frequency – a backdoor maneuver to introduce wave-like behavior into the particle model so it can mimic wave interference patterns. This fits nicely with the standard quantum babble about a superposition of states, the condition where a particle’s position cannot be predicted except as a probability distribution, which is interpreted to mean that the particle is in many positions at once. Thus the retreat into incoherence.

The particle theory of light is just another screwup of 20th century theoretical physics (there were quite a few). It should be put on the historical-curiosity shelf along with the Big Bang next to geocentric cosmology. Future historians can point to these physically absurd dead-end theories as textbook examples of how not to do science. Theory driven physics always winds up as empirically baseless metaphysical nonsense; the human imagination has never been a good guide to physical reality.

Groupthink, Dogma & Inertia

Re: this Ethan Siegel article.

If we ever want to go beyond our current understanding, any alternative theory has to not only reproduce all of our present-day successes, but to succeed where our current theories cannot. That’s why scientists are often so resistant to new ideas: not because of groupthink, dogma, or inertia, but because most new ideas never clear even the first of those epic hurdles, and are inconsistent with the established data we already possess.

Strangely enough, that is a pretty good description of why modern theoretical physics appears to be, in fact, quite dogmatic and inert. If new ideas can be dismissed for not immediately clearing the “epic hurdles” that theorists prescribe, then those hurdles amount to nothing more than preemptive barriers to new ideas. This greatly favors the orthodoxy.

No research funding is available for new ideas that don’t hurdle the barriers. They function as a defensive bulwark protecting the standard model from unorthodox incursions.

The only new ideas that are quickly adopted are those that fit the old model to new data. The favored model can be continuously revised (new epicycles, dark matter, etc.) into agreement with unpredicted new data. The “epic hurdles” only apply to new ideas that challenge the orthodoxy. New ideas needed to salvage the old model are adopted without reservation.

And so here we are, stuck with a cosmological model that is in an exactly analogous situation to that which existed with respect to Ptolemaic Cosmology prior to Kepler. Ptolemy’s model could be massaged into agreement with most observations and repeatedly tinkered into agreement with new, more accurate data, but only to some degree. So it is with the Big Bang model. Math is like that, but math is not physics.

Like PC, the BB model has at its core two fundamental misperceptions about the nature of physical reality. PC assumed geocentrism and that bodies orbited the earth in perfect circles. Only once geocentrism and perfect circles were set aside could science advance.

The two misperceptions underlying the BB are, first, that the Cosmos can be mathematically modeled as a simple, unitary, simultaneously-existing entity – a Universe – and, second, that the observed cosmological redshift can be attributed to some form of recessional velocity. The inevitable consequence of those assumptions is an Expanding Universe model – the BB.

The first assumption has been falsified by known physics. The speed of light has a finite maximum, such that light from the most distant objects we currently observe requires more than 10 billion years to reach us. It therefore follows that we have no knowledge of the simultaneous state of the Universe, because it is physically impossible to have such knowledge. It is a metaphysical conceit, not a demonstrable scientific fact, that such a Universe even exists.

The second assumption is false because it depends on the first being true. It is meaningless to attribute a property (expansion) that cannot possibly be observed to an entity (the Universe) whose existence cannot be demonstrated.

As was the case in Kepler’s time, the only solution to the dogmatic inertia that cripples modern cosmology is to discard the foundational assumptions of the standard model and start over, by rigorously assessing the data on hand without the distorting blinders of the old model.

Over the last century there has been an enormous increase in our knowledge of the Cosmos, but due to the dogmatic inertia of the scientific academy all of that new knowledge has generated no new ideas, because it has all been run through the meat-grinder of the standard model. The result has been a dogmatic and inert BB model that describes in excruciating detail a Universe that does not exist and bears no resemblance to the Cosmos we actually observe.

Kepler had an advantage that modern researchers do not – he was not dependent on the dogmatic and inert modern scientific academy for funding. Modern cosmology will remain an unscientific digression into prescientific orthodoxy until its research funding is driven by physics researchers investigating physical reality by observation and measurement.

Theoretical modelers in such a system would be required to produce models that reflect the physical phenomena uncovered by physics researchers. The math must follow the physics if cosmology is to be a science.

The failed Big Bang model needs to be consigned to the dust bin of history where it can serve as an object lesson in how not to do science.

Mathematicist Follies & the DWRT

Here are some choice tidbits from a recent Tim Anderson article titled Zero-point energy may not exist. I’m always supportive of any effort to drag theoretical physics back into contact with empirical reality, so the suggestion that ZPE may not exist is at least promising. It even suggests the possibility that modern theoretical physics might emerge from its self-imposed exile in Plato’s cave, the cave of choice in this case being mathematicism.

In reading the article, however, any hope of a scientific restoration is dashed, as one is quickly immersed in a sea of mathematicist illogic. Here, for instance, is the “reasoning” that underlies the ZPE concept:

…it means that nothing has non-zero energy and because there is an infinite amount of nothing there must be an infinite amount of energy.

While it is clear that the author is distancing himself from the ZPE concept, that account of the underlying “reasoning” gives rise to a simple question: how did such flamboyantly illogical nonsense gain any traction in the scientific community? The answer, of course, is mathematicism, which is itself a flagrantly illogical proposition, just not recognized as such by the denizens of the mathematicist cave. Then there is this little gem (emphasis added):

Quantum field theory, which is the best theory of how matter works in the universe that we have, suggests that all matter particles are excitations of fields. The fields permeate the universe and matter interacts with those fields in various ways. That is all well and good of course. We are not questioning that these fields exist. The question is whether a field in a ground state has any measureable effect on matter.

So in their “best theory” the universe is permeated by many fields, and matter is an excitation of those fields. Physical reality, however, contains only one observed field: the electromagnetic field, which is the aggregate of all the electromagnetic radiation that permeates the Cosmos. That radiation is constantly being emitted by all the luminous matter that also permeates the Cosmos. There are no additional observed fields of the kind described by QFT.

Despite the fact that the QFT fields are not observed the author does not wish to question their existence. Why? Mathematicism, of course. If a math model says something is there and physical reality declines to offer any evidence in support of such a conjecture, the mathematicist position is that the math is correct and reality is perversely withholding the evidence.

Imagine we have a sensitive Hawking radiation detector orbiting a black hole. The detector is in a state of free fall, meaning that it experiences no gravitational forces on it.

This last bit invokes a wholly imaginary thought experiment involving imaginary radiation emitted by an imaginary black hole. Without any empirical basis or logical connection to known physics, it has no scientific significance even if the “experiment” somehow reflects badly on the ZPE concept. In that, it only amounts to an illogical argument refuting an illogical concept.

The second sentence also presents a widely promulgated claim that has no basis in physics. The idea that an observer or detector in free fall experiences no gravitational forces is purely unphysical nonsense. An observer or detector can only be in a state of free fall if it is experiencing a gravitational force. The typical basis for this claim is that the observer is prohibited from making observations that would clearly show the presence of a gravitating body and thus demonstrate the presence of a gravitational field.

Einstein is usually credited with this view but in fact his conception of the equivalence principle was highly constrained and did not extend to fundamentally illogical claims like the one made above. The version of the equivalence principle Einstein employed is now called the Weak Equivalence Principle.

The two extensions of the equivalence principle contrived and adopted after Einstein’s death, the disingenuously named Einstein EP (he had nothing to do with it) and the Strong EP, have no logical, scientific or theoretical justification. They were merely conjectures of mathematical convenience proposed by the physicist Robert H. Dicke, who along with his colleague John A. Wheeler concocted a distorted variant of Einstein’s Relativity Theory. That variant is presented today as Einstein’s RT, but it is a separate theory and should have its own name – Dicke-Wheeler Relativity Theory.

It is in DWRT that you will find the EEP and SEP, as well as a reified version of spacetime which is said to causally interact with matter and energy, causing the gravitational effect and facilitating the Expansion of the Universe. There is no empirical evidence supporting those ad hoc additions to ERT. They are simply mathematicist conjectures that have no scientific basis or logical connection to physical reality. In modern theoretical physics, though, they are treated as axioms – true by definition.

Mathematicism is the principal driver of the Crisis in Physics. The reason for this is simple: math is not physics. The controlling paradigm in modern theoretical physics, however, is that math is the basis of physics and that mathematical models determine the nature of physical reality. That paradigm is a philosophical belief that has no scientific basis.

As a consequence of mathematicism, theoretical physicists espouse two standard models that describe a physical reality containing a large number of entities and events that are not part of the physical reality we actually observe. Modern theoretical physics does not constitute a science so much as a cult of belief.

You have to believe in the expanding universe, in dark matter and dark energy, in quarks and gluons. You have to believe that the speed of light in a vacuum is a universal constant even though it cannot be measured as such and is not constant in General Relativity – at least according to Einstein.* You have to believe in these things because they are not demonstrably part of physical reality. Scientists don’t traffic in beliefs, but mathematicists do, and so there is a Crisis in Physics.

 * The speed of light is a universal constant according to Dicke-Wheeler Relativity Theory.

Simultaneously published at Medium.

Denial Of The Deluded

The New York Times has a recent guest article entitled The Story of Our Universe May Be Starting to Unravel. It is in some ways good to see doubts about the Standard Model of Cosmology surfacing in the mainstream press. What the authors, an astrophysicist and a theoretical physicist, have on offer, though, is some weak tea and a healthy dose of the usual exculpatory circular reasoning.

The authors do point out some of the gaping holes in the SMoC’s account of the Cosmos:

  • “normal” matter — the stuff that makes up people and planets and everything else we can see — constitutes only about 4 percent of the universe. The rest is invisible stuff called dark matter and dark energy (roughly 27 percent and 68 percent).
  • Cosmic inflation is an example of yet another exotic adjustment made to the standard model. Devised in 1981 to resolve paradoxes arising from an older version of the Big Bang, the theory holds that the early universe expanded exponentially fast for a fraction of a second after the Big Bang

That’s a start I guess but then we get this absurd rationalization for simply accepting the invisible and entirely ad hoc components of the SMoC:

There is nothing inherently fishy about these features of the standard model. Scientists often discover good indirect evidence for things that we cannot see, such as the hyperdense singularities inside a black hole.

Let’s be clear here about this so-called “indirect evidence”: all of it essentially boils down to model-dependent inference. Which is to say, you cannot see any evidence for these invisible and/or impossible (singularities) things unless you peer through the distorting lenses of the simplistic mathematical models beloved of modern theoretical physicists. People who believe that mathematical models determine the nature of physical reality are not scientists; they are mathematicists, and they are deluded – they believe in things that, all the evidence says, are not there.

Not only are mathematicists not scientists, they are not good mathematicians either. If they were good at math and found that one of their models was discordant with physical observations, they would correct the math to reflect observations. What mathematicists do is correct reality to fit their math. That is where the dark sector (dark matter & dark energy) comes from – they added invisible stuff to reality to make it fit their broken model.

A mathematician did come up with a correction to Newtonian dynamics that had been inaccurately predicting the rotation curves of disk galaxies. Mordehai Milgrom developed MOND (Modified Newtonian Dynamics) in the 1980s and it was quite successful in predicting galactic disk dynamics.
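For context, here is a minimal sketch of the MOND idea, not anything taken from Milgrom’s papers directly: the Newtonian acceleration a_N = GM/r² is replaced via μ(a/a0)·a = a_N. The “simple” interpolating function μ(x) = x/(1+x) and the illustrative 10¹¹-solar-mass point galaxy are assumptions of this sketch; a0 ≈ 1.2E-10 m/s² is Milgrom’s fitted constant.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10             # Milgrom's acceleration scale, m/s^2
M = 1e11 * 1.989e30      # illustrative galaxy mass: 1e11 solar masses, kg
KPC = 3.086e19           # one kiloparsec in meters

def v_newton(r):
    """Newtonian circular speed around a point mass M."""
    return math.sqrt(G * M / r)

def v_mond(r):
    """Circular speed with the 'simple' interpolation mu(x) = x/(1+x);
    mu(a/A0)*a = a_N solves to a = (a_N + sqrt(a_N**2 + 4*a_N*A0)) / 2."""
    a_n = G * M / r**2
    a = 0.5 * (a_n + math.sqrt(a_n**2 + 4.0 * a_n * A0))
    return math.sqrt(a * r)

for r_kpc in (5, 10, 20, 40, 80):
    r = r_kpc * KPC
    print(f"r = {r_kpc:2d} kpc: Newton {v_newton(r)/1e3:5.1f} km/s, "
          f"MOND {v_mond(r)/1e3:5.1f} km/s")
# The Newtonian curve keeps falling as 1/sqrt(r); the MOND curve flattens
# near (G*M*A0)**0.25 (about 200 km/s here), the observed disk-galaxy behavior.
```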

Unfortunately the mathematicists had already off-loaded their problem onto reality by positing the existence of some unseen dark matter. All you need to know about the state of modern theoretical physics is that, after 40 years of relentless searching and a complete failure to discover any empirical evidence, there remains a well-funded Dark Matter cottage industry hard at work seeking evidence for the non-existent. This continuing search for that which is not there represents a betrayal of science.

It might appear that the authors here are not mathematicists given that they seem to be suggesting that the SMoC is not sacrosanct and needs to be reconsidered in its entirety:

We may be at a point where we need a radical departure from the standard model, one that may even require us to change how we think of the elemental components of the universe, possibly even the nature of space and time.

Sounds promising but alas, the reconsideration is not to be of the foundational assumptions of the model itself but only of certain peripheral aspects that rest on those assumptions, such as “…the assumption that scientific laws don’t change over time.” Or they suggest giving consideration to this loopy conjecture: “…every act of observation influences the future and even the past history of the universe.”

What the authors clearly do not wish to reconsider is the model’s underlying concept of an Expanding Universe. That assumption – and it is only an assumption of the model – was adopted 100 years ago at a time when it was still being debated whether the galaxies we observed were a part of, or separate from, the Milky Way. It was, in other words, an assumption made in ignorance of the nature and extent of the Cosmos as we now observe it. The authors treat the Expanding Universe concept as though it had been handed down on stone tablets by some God of Mathematicism:

A potent mix of hard-won data and rarefied abstract mathematical physics, the standard model of cosmology is rightfully understood as a triumph of human ingenuity. It has its origins in Edwin Hubble’s discovery in the 1920s that the universe was expanding — the first piece of evidence for the Big Bang. Then, in 1964, radio astronomers discovered the so-called Cosmic Microwave Background, the “fossil” radiation reaching us from shortly after the universe began expanding.

For the record, Edwin Hubble discovered a correlation between the redshift of light from a galaxy and its distance. That is all he discovered. It is an assumption of the model that the redshift is caused by some form of recessional velocity. It is also an assumption of the abstract mathematical physics known as the FLRW equations that the Cosmos is a unified, coherent, and simultaneously existing entity that has a homogeneous and isotropic matter-energy distribution. Both of those assumptions have been falsified by observations and by known physics.

Also for the record, it should be noted that prior to the discovery of the Cosmic Microwave Background Radiation, predictions by Big Bang cosmologists ranged over an order of magnitude and did not encompass the observed 2.7 K value. At the same time, scientists using thermodynamic considerations made more accurate predictions.

The belief in an Expanding Universe has no scientific basis. It is a mathematicist fantasy, and until that belief is set aside, the Standard Model of Cosmology will remain a crappy, deluded fairy tale that does not in any objective way resemble the magnificent Cosmos we observe.

Spherical Wavefronts & Cosmological Reality

Expanding Spherical Wavefronts are standard physics:

Credit: Gong Gu, https://fr.slideserve.com/oster/ece341-electromagnetic-fields-powerpoint-ppt-presentation

The Expanding Spherical Wavefronts depicted above are physical entities. They illustrate the behavior of light as emitted omnidirectionally by a “point source” emitter. Point source, as always in physics, is a mathematical term of art, there being no physical meaning to the mathematical concept of a dimensionless “point”. A source can be treated mathematically as point-like, however, for an observer distant enough that the emitting source’s dimensions are small relative to the radial distance to the observer.

In the foregoing sense, a typical galaxy can be treated as a point source at large (>100 million lightyears) distances. The nested shells of the model can be considered as representing either successive positions of a single wavefront over time, or an instantaneous representation of continuously emitted successive wavefronts from a typical omnidirectional emitter such as a galaxy.

This nested shell model can also be invoked to illustrate a ubiquitous inverse electromagnetic phenomenon. Replacing the emitter with an observer, the nested shells can be seen as representing notional spheres existing at various, arbitrary radial distances from the observer. At any given radial distance of cosmological significance from the observer the notional shell will have on it some number of galaxies. Such a notional shell also defines a notional volume which must also contain some number of galaxies. This geometrical situation is only relevant to the observer; it is not, as in the ESW case, a physical entity.

Elaborating a bit, we can define a cosmological radial unit of r = 100 million lightyears. That radial unit then defines a sphere with a surface area proportional to r² and a volume proportional to r³. For illustrative purposes we make the simplifying (and unrealistically low) assumption that any r³ unit volume contains on average 1000 galaxies.

Observers whose observational range extends out 1 radial unit will observe their “universe” to contain 1000 galaxies. If those same observers improve their technology so that their range of observation extends to 2 radial units they will find themselves in a “universe” that contains 8000 galaxies. If their range doubles again to 4r their “universe” will now contain 64,000 galaxies.

Every time the observational range doubles, the total number of galaxies contained in the newly expanded “universe” will increase by a factor of 8, or 2³. Of that 8-fold increase, 7/8 of the total number of galaxies will lie in the newly observable portion of the total volume. This all follows from straightforward geometrical considerations, spelled out in the sketch below.
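A minimal sketch of the arithmetic, under the same illustrative 1000-galaxies-per-unit-volume assumption:

```python
# Galaxy counts inside successive observational ranges, assuming the
# illustrative 1000 galaxies per unit volume (unit radius r = 100 Mly).
prev = 0
for n in (1, 2, 4, 8):
    total = 1000 * n**3
    print(f"range {n}r: {total:>7,} galaxies ({total - prev:,} beyond the previous range)")
    prev = total
# Each doubling multiplies the count by 2**3 = 8, so 7/8 of the galaxies
# in the doubled volume lie in the newly visible outer shell.
```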

Now let us return to the shell model centered on an omnidirectional emitter. The same geometric considerations apply here, but this time with respect to an Expanding Spherical Wavefront. At 1r a wavefront will have encountered 1000 galaxies; at 4r, 64,000; and at 8r a total of 512,000. As mentioned earlier, these numbers may be unrepresentative of the actual number of galaxies encountered, which could be considerably higher.

When an ESW encounters a galaxy, some portion of that wavefront is absorbed by the galaxy, representing a loss of energy by the wavefront and a corresponding gain of energy by the galaxy. This leads to two further considerations, the first related to direct observations. An ESW will lose energy as it expands in proportion to its volumetric expansion, assuming a constant average galactic density. The energy loss will be insignificant for each individual galactic encounter, but the aggregate loss will grow rapidly (as R³) at large radial distances. An increasing loss of energy with distance is an observed fact (cosmological redshift) for the light from galaxies at large (>100 Mly) cosmological distances.

The second consideration is that in some finite time period relative to the emitter, all of an ESW’s energy will be absorbed by intervening galaxies (and any other non-luminous baryonic matter). The cosmological range of an ESW is inherently limited – by standard physical considerations. In a sense, there is a notional cosmic wall, relative to the emitter, beyond which its ESWs cannot penetrate.

Reverting once again to the observer’s point of view, it follows that the observers cannot receive electromagnetic signals from sources that have reached the limits of their range – the cosmic wall discussed in the previous paragraph. It also follows directly that the observer is surrounded by a notional cosmic wall, relative only to the observer, beyond which more distant emitters cannot be observed. This wall has no physical significance except from the observer’s local Point-Of-View – it is the aggregate of all the ESW notional walls that surround the observer.

That notional wall is real however in the sense that it defines the limits of any observer’s observational range, just as the wall encountered by an ESW limits the range of its expansion. In both cases we are dealing with a relativistic POV. The ESW just before it reaches its wall encompasses an enormous cosmological volume relative to its source emitter’s location. Observers, just before encountering their notional wall, observe an enormous cosmological volume relative to their three dimensional locale.

Keeping in mind the earlier discussion of the spherical geometry of ESWs, it is interesting to consider that immediately in front of an observer’s notional wall there lies a vast volume of some nominal thickness containing galactic emitters that are still observable. The number of those emitters has increased as R³, and their cosmological redshift has likewise increased as R³, where R is the observer’s radial distance from the remote sources. Beyond that radial distance lies the notional wall at which all observations cease. In that context the observer’s wall can be thought of as a concave, spherical shell against which all foreground observations are projected. Because of the geometrical considerations mentioned, we should expect the most distant visible galaxies to cover the notional, concave, spherical surface of the observer’s cosmological view.

What we arrive at, then, is a picture much like that proposed in Olbers’ Paradox, the only difference being that Olbers did not account for the energy loss of light, so he expected that the night sky should be uniformly bright in the visible spectrum. What we observe, of course, is a night sky uniformly bright in the microwave spectrum.

The existence of the Cosmic Microwave Background is consistent with the galaxy distribution and energy loss to be expected when the Expanding Spherical Wavefront framework of standard physics is used to model the Cosmos we observe. The only assumptions necessary to achieve this result are an average galactic density on cosmological scales and a field of galaxies extending sufficiently far for the geometrical arguments to hold.

The Self-Deluded Nature of Modern Cosmology

In a discussion over at ACG, Louis Marmet recently posted this 2022 paper by the cosmologist James Peebles. It is, in essence, another apologia for the current mess in theoretical cosmology offered by one of the prominent purveyors of that mess. The paper is a hot cauldron of disingenuous argumentation based on the usual mathematicist predilection for circular logic that always begins with the premise that the standard model is correct only to arrive at the same conclusion.

Rather than sort through all of the disingenuous arguments presented, I want to focus on a peculiarly blatant factual misrepresentation repeated numerous times throughout the paper. It is this falsehood that the paper’s strained defense of ΛCDM (against the barrage of anomalies besetting the model) relies on:

To reduce the chance of misunderstanding I emphasize that the empirical case that the ΛCDM theory is a good approximation to reality remains compelling. (…)

… the tests have persuasively established that the ΛCDM theory is a good approximation to reality that likely requires refinement. (…)

… the ΛCDM universe has been shown to look a lot like what is observed. (…)

… we have a compelling case that the ΛCDM theory is a useful approximation to reality… (…)

… many well-checked tests show that the ΛCDM universe looks much (like) our universe.

Apparently the strategy is the old one: repeat the lie often enough and somebody might believe it. The facts of the matter, though, are incontrovertible. There is no empirical evidence supporting the existence of any of the ΛCDM model’s defining features. The Cosmos we directly observe does not contain any of the following elements:

  • A singularity
  • A Big Bang event
  • An Inflation event
  • Expanding spacetime
  • Dark Matter
  • Dark Energy

Taken together, those are the defining elements of the ΛCDM model. None of them appear in the Cosmos we observe. There is not even a faint family resemblance between the ΛCDM model and the Cosmos we observe. Any claim to the contrary is simply a falsehood.

So how do cosmologists like Peebles wind up convincing themselves that their model universe looks like reality? Obviously empiricism has nothing to do with the matter. It is solely a matter of belief. Modern cosmology is simply a cult of belief. The belief system can be reduced to the following propositions:

  • Mathematics underlies and determines the nature of physical reality. (Mathematicism)
  • The assumptions of the FLRW model that underlies ΛCDM are axiomatically true and therefore the expanding universe of ΛCDM is axiomatically true.
  • Any physical elements that ΛCDM requires physical reality to possess in order to reconcile the model with observations must exist because the model is correct.

To be fair to Peebles here, he does admit that “... the extreme simplicity of the dark sector of the standard ΛCDM cosmology seems unlikely to be better than a crude approximation to reality…“, but that’s a pretty tepid comment considering the “dark sector” (dark matter & dark energy) of the model constitutes 95% of the model’s matter-energy content, while comprising 0% of empirical reality’s matter-energy content. It’s like saying Ptolemy’s epicycles are a crude approximation of physical reality. They are, but that is beside the point.

Both crude approximations are necessary because the underlying models (geocentrism, the expanding universe) are inaccurate representations of physical reality; the foundational assumptions of both models are fundamentally wrong.

The standard model of cosmology is a crude approximation of the Cosmos in the same way that Ptolemy’s geocentric cosmology was. The Cosmos is not an “expanding universe” and the Earth is not at its center. It does not matter that cosmologists choose to believe the former but reject the latter. There is no direct empirical evidence to support their belief and the model based on it is palpably nonsensical, being entirely composed of elements (both entities and events) that do not exist in physical reality.

Science is not supposed to be the study of belief systems, it is by definition restricted to the study of physical reality. Modern cosmology as currently practiced is a belief system, not a science. Physical reality bears no resemblance to the standard model of cosmology and vice versa. ΛCDM is an abject scientific failure and it desperately needs to be relegated to the dust bin of history if cosmology is to ever become a real science rather than a playground for self-deluded mathematicists.

Light Cone Confusion In The Here And Now

The light cone graphic below is taken from a Wiki article. The discussion therein gets the basics right, at least with regards to where the concept of a light cone comes from and the dimensional issues with the illustration.

… a light cone (or “null cone”) is the path that a flash of light, emanating from a single event (localized to a single point in space and a single moment in time) and traveling in all directions, would take through spacetime. (…)

In reality, there are three space dimensions, so the light would actually form an expanding or contracting sphere in three-dimensional (3D) space rather than a circle in 2D, and the light cone would actually be a four-dimensional version of a cone whose cross-sections form 3D spheres (analogous to a normal three-dimensional cone whose cross-sections form 2D circles)

https://en.wikipedia.org/wiki/Light_cone

That’s fine as far as it goes – with two caveats. First, the spacetime term should be understood as referring to a relational concept of space and time, not to Wheeler’s causally interacting spacetime. Second, contracting spheres of light do not exist in physical reality. Much of the rest of the article is gibberish, well encapsulated by the labeling of the illustration, which basically renders the image incoherent.

Observer should be labeled Galaxy. A galaxy (a TSV, in the transceiver terminology defined below) on cosmological scales emits ESWs of light and absorbs ElectroMagnetic Radiation from all remote sources that can reach it.

Future LC = Diverging Light Cone – essentially a projection of an ESW emitted by a galaxy.

Past LC = Converging Light Cone – the aggregate of all the incoming EMR that can reach a galaxy from remote sources.

Hypersurface Of The Present is an imaginary mathematical construct that does not exist in physical reality.

For starters, the image has an observer at the shared apex of the two cones but an observer is not mentioned in the text of the Wiki article. In terms of physical reality an observer is at the apex of a “past light cone” – the observer observes light emitted from distant sources, usually omnidirectional emitters like stars and galaxies.

The “past light cone” is the aggregate of all the inbound radiation from those distant sources onto the observer. Rather than calling it a “past light cone” it would be more accurate to label it a Converging Light Cone, with the understanding that the light cone is a relative, point-of-view phenomenon that has no physical relevance except with respect to the observer.

The “future light cone” does not have an observer at its apex; it has an omnidirectional emitter such as a star or galaxy there. The “future light cone” is an aggregate of the successive expanding spherical wavefronts of electromagnetic radiation emitted by the emitter, and would be more accurately labeled a Diverging Light Cone. The DLC is a physical entity, consisting of sequentially emitted expanding spherical wavefronts of electromagnetic radiation. That understanding flows from Maxwell and Einstein – it is standard physics.

Borrowing from radio terminology the emitter/observer can be thought of as a transmitter/receiver or transceiver (TSV). The term transceiver will also be used for an observer-only by considering a non-transmitting observer (such as a human) to be a subcomponent of a transceiver such as a star or galaxy system. With respect to the space and time (relational) labels of the illustration, the apex can be labeled “Here and Now”. So the apex represents the HAN of a TSV.

The rest of the labeling is adequate with the relational nature of space and time caveat being understood. What the illustration then presents us with is a stark refutation of the modern conception of the Cosmos as a simultaneously existing Universe. The TSV (galaxy) is always and only at some unique spatio-temporal location.

The TSV is at the center of the omnidirectionally expanding spherical wavefronts of electromagnetic radiation that it emits – the Diverging Light Cone. A TSV is also at the center of all the electromagnetic radiation that is arriving at its particular place and time from all directions – the Converging Light Cone.

The following statement applies to every possible TSV – everywhere and everywhen: every TSV is at the center of its own unique “universe”, which is just its own unique view of a Cosmos that cannot be simultaneously accessed from any three-dimensional HAN.

No TSV can detect the state of a remote TSV that is simultaneous with its own HAN. The finite speed of light prohibits any and all such knowledge. The nearest galaxy to our own, Andromeda, is 2.5 million lightyears distant. We see it in our frame as it existed 2.5 million years ago. We do not have and cannot have any knowledge of its “current” state. Andromeda’s “current” state is not part of the Cosmos we have access to. Andromeda’s HAN does not exist in our unique cosmological frame – Andromeda is always There and Then (TAT) in our cosmological frame.

The two-dimensional projection labeled the Hypersurface Of The Present illustrates this clearly. The HAN of any TSV is always and only a local state. All other spatio-temporal locations lie outward – TAT – along the surface of the Converging Light Cone. No TSV has access to the HOTP; in fact the HOTP is only a mathematical/metaphysical construct that has no physical correlate. The HOTP does not exist in physical reality because it represents a universal simultaneity, which cannot exist because lightspeed has a finite maximum. There is no physical meaning to the concept of a “universal now” – that is the reason there is no universal frame or “now” in General Relativity.

The apex point represents the only HAN available to any TSV. All remote objects exist only in the transceiver’s past – on the TAT of the Converging Light Cone.

Unfortunately, modern cosmologists are of the opinion that they do have knowledge of this simultaneous something (the HOTP) that does not have any existence in physical reality. That is what the term Universe refers to as employed by cosmologists. They believe themselves to be in possession of knowledge of this imaginary, simultaneously existing Universe that, by the known laws of physics, cannot exist. That 13.8 billion year old entity does not exist by normal scientific standards – it is not an observable.

What modern cosmologists have, of course, is just a mathematical model based on some simplifying assumptions adopted about 100 years ago, at a time when the known Cosmos barely extended beyond our own galaxy. One of the model’s assumptions is that the Cosmos has a “universal” spacetime frame (the FLRW metric) even though, in the context of General Relativity, no universal frame exists. A universal spacetime metric inherently includes a universal time with a universal now. Despite the incongruity, the FLRW metric was applied to the GR field equations. The result of this misbegotten effort speaks for itself:

The Standard Model of Cosmology is a miserable failure; it describes a Universe that looks nothing like the Cosmos we observe. To the extent that it can be said to agree with actual observations, it only arrives at such agreements by insisting that physical reality contains entities and events that physical reality, by all direct empirical evidence, does not appear to contain.

The SMC is junk science or, perhaps more accurately, it is a mathematicist confabulation presented as science by people who don't understand basic physics – that the speed of light in the Cosmos has a finite maximum of approximately 3×10^8 meters/second. It's not that they don't know that fact, they do, but rather they don't understand what it means in the context of the vast Cosmos we observe. They only know what the SMC tells them and that model, they believe, can't be wrong because if it were, smart people like them wouldn't believe in it.

In fact though, we have no scientific reason to think that the limited view of the Cosmos we have provides us with knowledge of an unobservable, simultaneously-existing, and expanding Universe. The consensus belief of cosmologists that they have such knowledge can be attributed to the fever dream of mathematicism that deeply infects the theoretical physics community. Modern cosmology is a mess.

Science is not perfect. Mistakes are to be expected in science. The Standard Model of Cosmology is a mistake. The model's foundational assumption of an "expanding universe" is a mistake. It is a mistake in the same way that geocentrism was a mistake. It is fundamentally wrong about the nature of the Cosmos. It is time to move on from the expanding universe model. I'll give the last word to the astrophysicist Pavel Kroupa:

Thus, rather than discarding the standard cosmological model, our scientific establishment is digging itself ever deeper into the speculative fantasy realm, losing sight of and also grasp of reality in what appears to be a maelstrom of insanity.

https://iai.tv/articles/our-model-of-the-universe-has-been-falsified-auid-2393

10May24 Acknowledgement: My original concept for the apex of a light cone was that it should be labeled “Here”. In an exchange with the mathematician Robert A. Wilson he made the invaluable suggestion that the apex be called “Here and Now”.

The Radiation Density Gradient II

I want to correct and extend a point raised in the thought experiment of the previous post. Under consideration is a blackbody the size and density of the sun, placed in the InterGalactic Medium. Such a body should come to equilibrium with the Ambient Cosmic Electromagnetic Radiation. The ACER has never been properly accounted for in its entirety, though a preliminary effort has been made.

The blackbody, being in equilibrium, is emitting as much energy as it is receiving, and it therefore has an energy density gradient surrounding it consisting of the outbound radiation, which drops off in density as 1/r^2, where r is the radial distance from the center of the blackbody. The underlying ACER density (presumed approximately constant) does not change with distance and may well be considered an inertial field.

Now we flip the fusion switch and make the blackbody a more realistic astronomical object, a star. Compared to the blackbody, this star has a relatively enormous radiation density gradient consisting of all the omnidirectionally emitted radiation produced by the star. The density of that radiation will again drop off as 1/r^2.
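
To put a scale on that gradient, here is a minimal sketch in Python of the falloff just described. The solar luminosity is a standard figure; the ACER density used for comparison is only a placeholder assumption, since (as argued below) the true value has not been cleanly established.

import math

C = 3.0e8          # speed of light, m/s
L_SUN = 3.828e26   # solar luminosity, W

def radiation_energy_density(r_m, luminosity=L_SUN):
    # Outbound flux L/(4*pi*r^2), divided by c, gives the local
    # energy density of the star's radiation at radius r.
    return luminosity / (4.0 * math.pi * r_m**2 * C)

U_ACER = 1.0e-14   # J/m^3 -- placeholder assumption, not an established value
AU = 1.496e11      # meters

for r in (1 * AU, 100 * AU, 1.0e4 * AU):
    u = radiation_energy_density(r)
    print(f"r = {r/AU:7.0f} AU: u_star = {u:.2e} J/m^3, "
          f"u_star/u_ACER = {u/U_ACER:.2e}")

On these numbers the star's own radiation would dominate the assumed background out to tens of thousands of AU, which is the sense in which the star's gradient is relatively enormous.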

When a remotely sourced electromagnetic wave passes close by a star the wave is observed to curve as if it were traversing a medium with a density gradient. This is commonly attributed to a gravitational field though such a field has never been observed.

What is observed is a curvature of light together with a radiation density gradient. It strains credulity to believe that those two physical facts are uncorrelated. This in turn suggests that observed gravitational effects, commonly attributed to an unobserved gravitational field, are in fact a consequence of matter-electromagnetic energy interactions.

The Radiation Density Gradient

A thought experiment: Imagine a spherical blackbody the size and density of the sun. Now place that body deep in the InterGalactic Medium. In such a location a blackbody will be absorbing all of the Ambient Cosmic Electromagnetic Radiation that flows continuously through any location in the IGM. Eventually we should expect the sphere to settle into a state of equilibrium with the ACER, continuously absorbing and reemitting all the energy it receives.

This means that there must exist a Radiation Density Gradient surrounding the blackbody consisting of all the inbound and outbound radiation. By normal geometric considerations we should expect the density to drop off as 1/R^2, R being the radial distance.

The question is, just how much energy is that? The answer depends on the surface area of the sphere, which is easy to calculate, and the energy density of the ACER, which doesn't seem to be cleanly known. Various frequency bands have been observed to varying degrees. This recent survey of the Cosmic spectrum as a whole suggests that the aggregate nature, behavior and effect of the ACER has not been properly and fully assessed.
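
For a rough feel for the magnitude involved, here is a minimal sketch, using the CMB's energy density as a stand-in for the full ACER – an assumption, since the whole point above is that the complete value is not cleanly known:

import math

C = 3.0e8         # m/s
R_SUN = 6.957e8   # m, radius of the sun-sized sphere

# Stand-in ACER density: the CMB alone contributes about 4.2e-14 J/m^3.
# The full ACER value is an open question, per the survey cited above.
U_ACER = 4.2e-14  # J/m^3

def absorbed_power(u, radius):
    # For an isotropic radiation field of energy density u, the one-sided
    # flux onto a surface is c*u/4; over the sphere's area 4*pi*R^2 the
    # total absorbed power is pi*c*u*R^2.
    return math.pi * C * u * radius**2

print(f"Absorbed power: {absorbed_power(U_ACER, R_SUN):.2e} W")  # ~1.9e13 W
# For scale, the sun's own luminosity is about 3.8e26 W.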

Given some reasonable value for the ACER density it would be possible to estimate the RDG of the sphere. With a reasonable value for the RDG, a calculation could then be made, using optical considerations, of the curvature to be expected for the path of a light ray passing through the RDG (feeding the curved path back into the RDG calculation as a correction). The result could then be compared to the standard curvature predicted by General Relativity, though first the additional energy directly radiated by the sun would have to be factored in. Something to think about. Somebody else will have to do the math though; that's not my gig.
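
For whoever does take up that math, the General Relativity benchmark is at least easy to state. A minimal sketch of the standard weak-field deflection formula, alpha = 4GM/(c^2 b), for a ray grazing the sun:

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 3.0e8          # m/s
M_SUN = 1.989e30   # kg
B = 6.957e8        # m, impact parameter: a ray grazing the solar surface

# Weak-field GR deflection angle for a light ray with impact parameter b.
alpha_rad = 4.0 * G * M_SUN / (C**2 * B)
alpha_arcsec = math.degrees(alpha_rad) * 3600.0

print(f"GR deflection: {alpha_rad:.2e} rad = {alpha_arcsec:.2f} arcsec")
# ~1.75 arcsec, the figure associated with the 1919 eclipse tests

An RDG-based optical calculation would have to reproduce something close to this number to be taken seriously.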

Bohmian Mechanics & A Mathematicism Rant

26JUN22 Posted as a comment to this Quanta article. Probably won’t be published there until tomorrow.

Contextuality says that properties of particles, such as their position or polarization, exist only within the context of a measurement.

This is just the usual Copenhagen mush, which substitutes a feckless by-product of the pseudo-philosophy of mathematicism for scientific rigor. The rather irrational, anthropocentric view that a particle doesn't have a position until it is measured is entirely dependent on the belief that the Schrodinger wavefunction is a complete and sufficient description of physical reality at the quantum scale. It is not.

The Schrodinger equation only provides a statistical distribution of the possible outcomes of a measurement without specifying the underlying physical processes that cause the observed outcomes. The only scientifically reasonable conclusion would seem to be that the equation constitutes an incomplete description of physical reality. The Copenhagen interpretation of QM – that the wavefunction is all that can be known – is not a rational scientific viewpoint. In physical reality things exist whether humans observe them or not, or have models of them or not.

There has long been a known alternative to the wavefunction-only version of QM, one that was championed by John Bell himself. In Bohmian mechanics, in addition to the wavefunction, there is a particle and a guiding wave. In that context the wavefunction provides the outcome probabilities for the particle/guiding-wave interactions. Bohmian mechanics constitutes a sound scientific hypothesis; the Copenhagen interpretation (however defined) offers nothing but incoherent metaphysical posturing. As in:

The physicists showed that, although making a measurement on one ion does not physically affect the other, it changes the context and hence the outcome of the second ion’s measurement.

So what is that supposed to mean exactly, in physical terms? The measurement of one ion doesn’t affect the second ion but it does alter the outcome of the second ion’s measurement? But what is being measured if not the state of the second ion? The measurement of the second ion has changed but the ion itself is unaltered, because the “context” changed? What does that mean in terms of physics? Did the measurement apparatus change but not the second particle? It’s all incoherent and illogical, which is consistent with the Copenhagen interpretation I suppose, but that’s not saying much.

Bohmian mechanics makes short work of this matter. There are two charged particles, each with a guiding wave; those guiding waves interact in typical wave-like fashion in the 4 dimensional frame of electromagnetic radiation. The charged particles are connected in, by, and across that 4D frame. By common understanding, such a 4D frame has no time dimension. That is what accounts for the seemingly instantaneous behavior. That’s physics, it could be wrong, but it is physics. Contextuality is math; math is not physics.
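
For readers unfamiliar with how the mechanics work, here is a minimal one-dimensional sketch of the guidance idea: the particle's velocity is set by the wave through v = (hbar/m)·Im(psi'/psi). The free Gaussian packet below (in natural units) is a textbook illustration of that equation, not a model of the ion experiment discussed above.

import numpy as np

HBAR, M = 1.0, 1.0   # natural units
S0 = 1.0             # initial packet width

def psi(x, t):
    # Free Gaussian wave packet (unnormalized; the guidance law only
    # uses the ratio psi'/psi, so normalization drops out).
    sc = S0**2 + 1j * HBAR * t / M   # complex spreading width
    return np.exp(-x**2 / (2.0 * sc))

def guidance_velocity(x, t, dx=1e-6):
    # Bohmian guidance law: v = (hbar/m) * Im( (dpsi/dx) / psi )
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2.0 * dx)
    return (HBAR / M) * np.imag(dpsi / psi(x, t))

# Euler-integrate a few particle trajectories riding the spreading wave.
ts = np.linspace(0.0, 5.0, 5001)
dt = ts[1] - ts[0]
for x0 in (0.5, 1.0, 2.0):
    x = x0
    for t in ts[:-1]:
        x += guidance_velocity(x, t) * dt
    # Known result: trajectories scale with the packet width,
    # x(t) = x0 * sqrt(1 + (hbar*t/(M*S0^2))^2)
    analytic = x0 * np.sqrt(1.0 + (HBAR * ts[-1] / (M * S0**2))**2)
    print(f"x0 = {x0}: numeric x(T) = {x:.3f}, analytic = {analytic:.3f}")

The particle always has a definite position; the wave simply tells it where to go.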

Theoretical physics in its current guise is an unscientific mess because physical reality has been subordinated to mathematical and metaphysical considerations. And so we have the ongoing crises in physics in which the standard models are so discordant with physical reality in so many different ways that it seems difficult to say what precisely is wrong.

The simple answer is that you can’t do science that way. Attempting to build physics models outward from the mathematical and metaphysical realms of the human imagination is wrong. Basing those models on 100 year old assumptions and holding those outdated assumptions functionally inviolable is wrong.

Science has to be rooted in observation (direct detection) and measurement. Mathematics is an essential modeling tool of science but it is only a tool; it is not, of itself, science.

Theoretical physics, over the course of the 20th century, devolved into the study of ever more elaborate mathematical models, invoking ever more elaborate metaphysical conjectures of an invisible realm knowable only through the distorted lenses of those standard models.

Physical reality has to determine the structure of our theoretical models. Currently theorists with elaborate models dictate to scientists (those who observe, measure, and experiment) that they must search for the theoretical components of their models, or at least some signs and portents thereof. Failure to find the required entities and events does not constitute a failure of the model, however, only a failure, thus far, of detection (dark matter, etc.) or the impossibility of detection (quarks, etc.). In any event the models cannot be falsified; they need only be modified.

Modern theoretical physics is an exercise in mathematicism and it has come to a dead end. That is the crisis in physics and it will continue until the math first approach is abandoned. It is anybody’s guess when that will happen. The last dark age of western science persisted for a millennium.

The Mathematicist’s Tale

19Jun22 The following was posted as a comment to this Medium post by Eric Siegel. It has been slightly edited.

This is a good overview of the illogical mathematicism at the root of the inane Big Bang model. The Friedmann equation is indeed the headwaters of all the nonsense that currently engulfs and overwhelms any possibility of a meaningful scientific account of the Cosmos.

The Friedmann equation rests not only on the two simplifying assumptions of mathematical convenience – isotropy and homogeneity – that the author cites. There was also an unstated but implicit assumption that the Cosmos could be treated as a unified, coherent, simultaneous entity, a Universe.

Further it was assumed that the field equations of General Relativity, derived in the context of the solar system, could be stretched to encompass this imaginary Universe. The results speak for themselves.

The standard model of cosmology is an incoherent, unscientific mess. It claims that once upon a time 13.8 billion years ago the entirety of the Cosmos was compressed into a volume smaller than a gnat’s ass that began expanding for an inexplicable reason, then accelerated greatly due to the fortuitous intervention of an invisible inflaton field, which set invisible spacetime expanding at just the right rate to arrive at the current state of the Universe, which we are told is 95% composed of some invisible matter and energy that has no other purpose than to make the model agree with observations; the remaining 5% of this Universe is the stuff we actually observe.

To be clear, none of the entities and events the standard model describes are part of the Cosmos we actually observe, except for that rather trivial 5% of the BB Universe. That 5% of the model is the Cosmos we observe.

Science is the study of those things that can be observed and measured. At some point in the mid 20th century theoretical physicists adopted the conceit that science was the study of mathematical models. This categorical error was compounded by treating scientific hypotheses such as “universal expansion” as axioms (true by definition). In science, all truths are provisional; nothing can be held true by definition.

Axioms belong to the mathematical domain. Math is not science and the mathematicist belief that math underlies and determines the nature of physical reality has no scientific basis. That the Cosmos is a unified, coherent and simultaneously existing entity can only be true if the Cosmos had a miraculous simultaneous origin – the Big Bang.

The problem with miracles is that they are not scientific in nature; they cannot be studied, only believed in. Believing in the Big Bang and weaving a fantastical account of an imaginary Universe is a mathematical/metaphysical endeavor that has nothing to do with science or physical reality.

Imaginary Universe? That's right, the Universe of the Big Bang has no basis in any known physics. In the physics we know, there is a finite, maximum limit to the speed of electromagnetic radiation – 3×10^8 meters per second (186,000 miles per second). It is via electromagnetic radiation that we observe the Cosmos.

The galaxy nearest to our own is Andromeda; it is detectable by the unaided eye as a fuzzy patch in the night sky. Andromeda is about 2.5 million light years from Earth. Everything we know about Andromeda is 2.5 million years out of date; of its current state we do not have and cannot have any direct knowledge.

At the far extent of our observational range we observe galaxies that are about 10 billion light years away. We do not have and cannot have any knowledge of the current state of any of those galaxies. The same argument holds for all the other galaxies we observe.

Galaxies are the primary constituents of the Cosmos we observe. It follows, therefore, that we do not and cannot have knowledge of the Cosmos’ current state. The very idea of the Cosmos having a current state is scientifically meaningless. Mathematicists believe otherwise:

If you want to understand the Universe, cosmologically, you just can’t do it without the Friedmann equation. With it, the cosmos is yours.


See that’s all you need, a simple mathematical model derived 100 years ago by “solving” the field equations of General Relativity (which does not have a universal frame) for a simplistic toy model of the Cosmos (that does have a universal frame). Suddenly you can know everything about everything, even things you can’t possibly observe or measure. That’s mathematicism hard at work creating an imaginary Universe for mathematicists to believe in. The standard model of cosmology has no scientific basis; it is a nerd fantasy and it is scientifically useless.

The Sorry State Of Particle Physics

This paper had its 15 minutes of fame (or infamy) a few weeks back. It’s quite the piece of work, claiming as it does to have made a high precision measurement of the mass of a W particle. The authors (there appear to be more authors than sentences in the paper) fail to explain, of course, how it is possible to measure the mass of a particle that has never been observed.

No explanation is necessary, of course, if you are a member of the peculiar cult of mathematicism that believes modern mathematical models determine the nature of physical reality. If you are of the more realistic opinion that physical reality should determine the nature of our mathematical models you will be considered hopelessly naive by the cult and quite possibly a crackpot. That the cult of mathematicism has made an incoherent and absurd mess of theoretical physics is as factually undeniable as it is flamboyantly denied.

For the record, there has never been a direct detection of a W particle. According to the standard model of particle physics (SM) the half-life of the W boson is 3×10^-25 s, a time increment so small as to render the W particle's existence (in physical reality) indistinguishable from its non-existence. In the fantasy realm of the mathematicist this is not a problem; the W particle exists in the model and therefore it must exist in physical reality even if it is undetectable.
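
That time scale can be put in spatial terms with one line of arithmetic. A minimal sketch, taking the quoted lifetime at face value and letting the particle move at essentially light speed (time dilation ignored, as a rough bound):

C = 3.0e8         # m/s
TAU_W = 3.0e-25   # s, the SM figure quoted above

path = C * TAU_W          # distance covered before decay
PROTON_RADIUS = 8.4e-16   # m, for scale

print(f"Path before decay: {path:.1e} m")                    # ~9e-17 m
print(f"That is about {path / PROTON_RADIUS:.1f} proton radii")

A tenth of a proton radius is not a track in any detector; everything downstream is inference.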

So how then can the authors claim to have made a high precision measurement of an invisible particle? The claim is hyperbole; no high precision measurement of a nonexistent W boson has been made. What has transpired is simply a lot of data massaging of pre-existing results from collider experiments run between 2002 and 2011, in which the actual observations of quotidian particles like electrons and muons are treated as by-products of the decay of the unobservable W, within the context of the SM that assumes the W's existence.

If you wade through the paper’s turgid account of all the data massaging that went into this bogus claim of a precision measurement you will encounter a single sentence that gives the game away:

The W boson mass is inferred from the kinematic distributions of the decay leptons.

And that is the bottom line here: a W boson mass inferred from the observation of electrons and muons (leptons), in the context of a model that assumes (against the evidence) the W boson's existence, is not in any meaningful, scientific sense a high precision measurement. The claim that it is, is simply, in the technical sense, high order bullshit.

Mass and the equivalence principle(s)

Robert A. Wilson commented on something I posted over at Triton Station:

Certainly, one can hardly argue with the principle of general relativity as a fundamental physical principle. The various forms of the equivalence principle, on the other hand, presume that we already know what mass is – which we clearly don’t.

I’ve invited Robert to elaborate a bit here. So Robert, I agree with you on the principle of general relativity but could you explain what you mean when you say we don’t know what mass is and how that relates to the equivalence principle?

Another Day, Another Anti-Universe Rant

I may or may not have posted this as a comment to a Quanta article. It has been modified. 25Feb2022

The belief in a wholly imaginary "universe" lies at the heart of modern cosmology's ludicrous, absurd and irrational standard model. The assumption of a universal metric with homogeneous and isotropic contents underlies the FLRW equations that form the basis of all modern cosmological models.

Applying the imaginary universal metric to the field equations of General Relativity was an oxymoronic exercise that produced a ridiculous cosmological model, the Big Bang, that looks nothing like the Cosmos we actually observe. The observed Cosmos does not contain a Big Bang event, inflation, expanding spacetime, dark matter, or dark energy.

Those entities and events are part of the BB model but they are not part of empirical reality, which is to say they are not part of scientific reality. The entirety of the BB model is theoretical nonsense unhinged from the physical reality that is the only proper realm of scientific study.

Alexander Friedmann's simplistic "homogeneous and isotropic universe" assumption, made at a time when the known Cosmos consisted of our galaxy, was simply wrong. The vast Cosmos that falls within our observational range is neither homogeneous nor isotropic, and it makes no sense, no physical sense, to imagine that this vast Cosmos constitutes a unified, coherent, and simultaneous entity.

The vast Cosmos, of which we will always have only a partial view constrained by the finite limit of light speed and the cosmological redshift, cannot be treated as a simplistic unitary entity capable of being modeled by our limited mathematical scribblings. In a very real physically meaningful way, the “Universe” of the Big Bang model doesn’t exist. It is a wholly imaginary entity.

To sustain a belief in their model of the “expanding Universe” modern cosmologists have to ignore the things that are there while believing in model-dependent imaginary things that aren’t there. Cosmology has devolved into a cult devoted to the care, maintenance, and defense of the ridiculously unscientific Big Bang model.

Modern cosmology is a mess and will remain so until the "expanding Universe" paradigm is consigned to the dustbin of history alongside Ptolemy's geocentric model. It is time, in other words, to drag cosmology away from its cult-like fixation on the Big Bang and open the field to the study of cosmological models that are not dependent on the failed "expanding Universe" assumption, but are mathematically constructed to reflect the Cosmos we actually observe and measure. Humankind deserves a realistic cosmology; the dim, misbegotten fantasy we are currently saddled with doesn't cut it.

Why The Cosmos Is Not A Universe

One of the fundamental assumptions underlying the standard model of cosmology, commonly known as the Big Bang, is that the Cosmos comprises a unified, coherent and simultaneous entity – a Universe. It is further assumed that this “Universe” can be mathematically approximated under that unitary assumption using a gravitational model, General Relativity, that was devised in the context of our solar system. At the time that assumption was first adopted the known scale of the Cosmos was that of the Milky Way, our home galaxy, which is orders of magnitude larger and more complex than the solar system.

Subsequently, as the observed Cosmos has extended out to the current 13 billion light year range, it has become clear that the Cosmos is orders of magnitude larger and more complex than our galaxy. The resulting Big Bang model has become, as a consequence, absurd in its depiction of a cosmogenesis and ludicrous in its depiction of the "current state" of the "Universe", as the model attempts to reconcile itself with the observed facts of existence.

It will be argued here that the unitary conception of the Cosmos was, at its inception, and remains now, as illogical as it is incorrect.

I Relativity Theory

The unitary assumption was first adopted by the mathematician Alexander Friedmann a century ago as a simplification employed to solve the field equations of General Relativity. It imposed a “universal” metric or frame on the Cosmos. This was an illogical or oxymoronic exercise because a universal frame does not exist in the context of Relativity Theory. GR is a relativistic theory because it does not have a universal frame.

There is, of course, an official justification for invoking a universal frame in a relativistic context. Here it is from a recent Sabine Hossenfelder video:

In general relativity, matter, or all kinds of energy really, affect the geometry of space and time. And so, in the presence of matter the universe indeed gets a preferred direction of expansion. And you can be in rest with the universe. This state of rest is usually called the “co-moving frame”, so that’s the reference frame that moves with the universe. This doesn’t disagree with Einstein at all.

The logic here is strained to the point of meaninglessness; it is another example of the tendency of mathematicists to engage in circular reasoning. First we assume a Universe, then we assert that the universal frame which follows of necessity must exist, and therefore the unitary assumption is correct!

This universal frame of the BB is said to be co-moving and therefore everything is supposedly OK with Einstein (meaning General Relativity, I guess) too, despite the fact that Einstein would not have agreed with Hossenfelder’s first sentence; he did not believe that General Relativity geometrized gravity, nor did he believe in a causally interacting spacetime. The universal frame of the BB model is indeed co-moving in the sense that it is expanding universally (in the model). That doesn’t make it a non-universal frame, just an expanding one. GR does not have a universal frame, co-moving or not.

Slapping a non-relativistic frame on GR was fundamentally illogical, akin to shoving a square peg into a round hole and insisting the fit is perfect. The result, though, speaks for itself. The Big Bang model is ludicrous and absurd because the unitary assumption is wrong.

II The Speed of Light

The speed of light in the Cosmos has a theoretical maximum limit of approximately 3×10^8 meters per second in inertial and near-inertial conditions. The nearest star to our own is approximately 4 light years away; the nearest galaxy is 2.5 million LY away. The furthest observed galaxy is 13.4 billion LY.* This means that our current information about Proxima Centauri is 4 years out of date, for Andromeda it is 2.5 million years out of date, and for the most distant galaxy 13.4 billion years out of date.
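
The staleness is just distance divided by light speed, and by construction a distance in light years converts one-for-one into years of delay. A trivial sketch making that explicit:

C = 3.0e8            # m/s
YEAR_S = 3.156e7     # seconds per year
LY_M = C * YEAR_S    # meters per light year

# Distances as quoted above, in light years
objects = [("Proxima Centauri", 4.0),
           ("Andromeda", 2.5e6),
           ("furthest observed galaxy", 1.34e10)]

for name, d_ly in objects:
    delay_years = (d_ly * LY_M / C) / YEAR_S   # light travel time
    print(f"{name}: information {delay_years:,.0f} years out of date")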

Despite this hard limit on our information about the Cosmos, the BB model leads cosmologists to believe themselves capable of making grandiose claims, unverifiable of course, about the current state of the Cosmos they like to call a Universe. In the BB model's Universe it is easy to speak of the model Universe's current state, but this is just another example of the way the BB model does not accurately depict the Cosmos we observe.

In reality we cannot have, and therefore do not have, information about a wholly imaginary global condition of the Cosmos. Indeed, the Cosmos cannot contain such knowledge.

Modern cosmologists are given to believing they can have knowledge of things that have no physical meaning because they believe their mathematical model is capable of knowing unknowable things. That belief is characteristic of the deluded pseudo-philosophy known as mathematicism; it is a fundamentally unscientific conceit.

III The Cosmological Redshift

The second foundational assumption of the BB model is that the observed redshift-distance relationship found (or at least confirmed) by the astronomer Edwin Hubble in the late 1920s was caused by some form of recessional velocity. Indeed, it is commonly stated that Hubble discovered that the Universe is expanding. It is a matter of historical record, however, that Hubble himself did not ever completely agree with that view:

To the very end of his writings he maintained this position, favouring (or at the very least keeping open) the model where no true expansion exists, and therefore that the redshift “represents a hitherto unrecognized principle of nature”.

Allan Sandage

However, for the purposes of this discussion, the specific cause of the cosmological redshift does not matter. The redshift-distance relation implies that if the Cosmos is of sufficient size there will always be galaxies sufficiently distant that they will lie beyond the observational range of any local observer. Light from those most distant sources will be extinguished by the redshift before reaching any observer beyond the redshift-limited range of the source. Even in the context of the BB, it is generally acknowledged that the field of galaxies extends beyond the potentially observable range.

The extent of the Cosmos, therefore, is currently unknown and, to three dimensionally localized observers such as ourselves, inherently unknowable. A model, such as the BB, that purports to encompass the unknowable is fundamentally unscientific; it is, by its very nature, only a metaphysical construct unrelated to empirical reality.

IV The Cosmos – A Relativistic POV

Given the foregoing considerations it would seem reasonable to dismiss the unitary conception of the Cosmos underlying the Big Bang model. The question then arises, how should we think of the Cosmos?

Unlike the scientists of 100 years ago, for whom it was an open debate whether or not the nearby galaxies were part of our own, modern cosmologists have a wealth of data now stretching out to 13 billion light years. The quality and depth of that data fall off the farther out we observe, however.

The Cosmos looks more or less the same in all directions, but it appears to be neither homogeneous nor isotropic nor of a determinable, finite size. That is our view of the Cosmos from here on Earth; it is our point of view.

This geocentric POV is uniquely our own and determines our unique view of the Cosmos; it is a view that belongs solely to us. Observers similarly located in a galaxy 5 billion light years distant from us would see a similar but different Cosmos. Assuming similar technology, looking in a direction opposite the Milky Way the distant observers would find in their cosmological POV a vast number of galaxies that lie outside our own POV. In looking back in our direction, the distant observer would not see many of the galaxies that lie within our cosmological POV.

It is only in this sense of our geocentric POV that we can speak of our universe. The contents of our universe do not comprise a physically unified, coherent and simultaneously existing physical entity. The contents of our universe in their entirety are unified only by our locally-based observations of those contents.

The individual galactic components of our universe each lie at the center of their own local POV universe. Nearby galaxies would have POV universes that have a large overlap with our own. The universes of the most distant observable galaxies would overlap less than half of our universe. Those observers most distant and in opposite directions in our universe would not exist in each other's POV.
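
Those overlap claims follow from elementary geometry. A minimal sketch, assuming each POV universe is a sphere of radius R (R = 13 Gly here, matching the observational range quoted above) and using the standard lens-volume formula for two equal spheres:

import math

def overlap_fraction(d, R):
    # Fraction of one observer's POV sphere (radius R) shared with
    # another observer a distance d away; standard lens volume for
    # two equal spheres with center separation d.
    if d >= 2.0 * R:
        return 0.0
    v_lens = math.pi * (2.0 * R - d)**2 * (d + 4.0 * R) / 12.0
    v_sphere = (4.0 / 3.0) * math.pi * R**3
    return v_lens / v_sphere

R = 13.0  # Gly, assumed observational radius
for d in (0.0025, 5.0, 13.0, 26.0):  # Andromeda-scale out to the horizon
    print(f"separation {d:7.4f} Gly: overlap {overlap_fraction(d, R):6.1%}")

A neighbor 5 Gly away shares roughly 70% of our POV sphere; one at the 13 Gly limit shares less than a third; at twice the limit, nothing.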

So what then can we say of the Cosmos? Essentially it is a human conceptual aggregation of all the non-simultaneously viewable matter-energy systems we call galaxies. The field of galaxies extends omni-directionally beyond the range of observation for any given three dimensionally localized observer, and the Cosmos is therefore neither simultaneously accessible nor knowable. The Cosmos does not have a universal frame or a universal clock ticking. As for all 3D observers, the Cosmos tails away from us into a fog of mystery, uncontained by space, time, or the human imagination. We are of the Cosmos but cannot know it in totality because it does not exist on those terms.

To those who might find this cosmological POV existentially unsettling it can only be said that human philosophical musings are irrelevant to physical reality; the Cosmos contains us, we do not contain the Cosmos. This is what the Cosmos looks like when the theoretical blinders of the Big Bang model are removed and we adopt the scientific method of studying the things observed in the context of the knowledge of empirical reality that has already been painstakingly bootstrapped over the centuries by following just that method.

___________________________________________________

* In the funhouse mirror of the Big Bang belief system, this 13.4 GLY distance isn't really a distance; it's the time/distance the light traveled to reach us. At the time it was emitted, according to the BB model, the galaxy was only 2.6 GLY distant but is "now" 32 GLY away. This "now", of course, implies a "universal simultaneity" which Relativity Theory prohibits. In the non-expanding Cosmos we actually inhabit, the 13.4 GLY is where GN-z11 was when the light was emitted (if our understanding of the redshift-distance relation holds at that scale). Where it is "now" is not a scientifically meaningful question because it is not an observable and there is, in the context of GR, no scientific meaning to the concept of a "now" that encompasses ourselves and such a widely separated object.
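
For reference, the BB-model figures quoted in this footnote can be reproduced with a standard cosmology package. A minimal check, assuming astropy's Planck18 parameters and a redshift of roughly 11 for GN-z11 (both values are my assumptions, not taken from the text above):

from astropy.cosmology import Planck18
import astropy.units as u

z = 11.0  # approximate reported redshift of GN-z11

d_comoving = Planck18.comoving_distance(z)   # the model's "now" distance
d_emission = d_comoving / (1.0 + z)          # proper distance at emission
t_lookback = Planck18.lookback_time(z)       # light travel time

print(f"'now' distance: {d_comoving.to(u.lyr).value / 1e9:.1f} Gly")   # ~32
print(f"at emission:    {d_emission.to(u.lyr).value / 1e9:.1f} Gly")   # ~2.7
print(f"lookback time:  {t_lookback.to(u.Gyr):.1f}")                   # ~13.3 Gyr

These are, of course, model-dependent quantities; the sketch only shows where the quoted numbers come from.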

Two Big Lies

This fundamental idea — that matter and energy tells spacetime how to curve, and that curved spacetime, in turn, tells matter and energy how to move — represented a revolutionary new view of the universe. Put forth in 1915 by Einstein and validated four years later during a total solar eclipse — when the bending of starlight coming from light sources behind the sun agreed with Einstein’s predictions and not Newton’s — general relativity has passed every observational and experimental test we have ever concocted.

How to understand Einstein’s equation for General Relativity

Sooner or later it seems Ethan Siegel will trot out every disingenuous argument employed by the Big Bang cult in support of its peculiarly unscientific belief system. Two big lies about General Relativity popular among the faithful are succinctly presented in the above quote. The first is that Einstein "put forth" the idea that GR reduced gravity to the geometrization of a substantival spacetime.

Einstein opposed that view throughout the years subsequent to GR’s introduction, whenever it was proposed. That is a matter of historical record. The formulation that Siegel presents here is a paraphrasing of John A. Wheeler’s well known assertion. That assertion directly contradicts Einstein’s clearly and repeatedly stated position on the matter.

The second big lie is that GR "has passed every observational and experimental test…". That is true only of tests performed on the scale of the solar system. GR does not pass tests on galactic and cosmological scales without the ad hoc addition of dark matter and dark energy. The existence of neither of those hypothetical entities is supported by direct empirical evidence; the only support they can realistically be said to have is that they reconcile GR with observations. Siegel knows this; he just chooses not to mention it. That is what is known as a lie of omission:

Lying by omission, also known as a continuing misrepresentation or quote mining, occurs when an important fact is left out in order to foster a misconception… An omission is when a person tells most of the truth, but leaves out a few key facts that therefore, completely obscures the truth.

https://en.wikipedia.org/wiki/Lie#Types_and_associated_terms