Category Archives: Theoretical Physics

Denial Of The Deluded

The New York Times has a recent guest article entitled The Story of Our Universe May Be Starting to Unravel. It is in some ways good to see doubts about the Standard Model of Cosmology surfacing in the mainstream press. What the authors, an astrophysicist and a theoretical physicist, have on offer though is some weak tea and a healthy dose of the usual exculpatory circular reasoning.

The authors do point out some of the gaping holes in the SMoC’s account of the Cosmos:

  • “Normal” matter — the stuff that makes up people and planets and everything else we can see — constitutes only about 4 percent of the universe. The rest is invisible stuff called dark matter and dark energy (roughly 27 percent and 68 percent).
  • Cosmic inflation is an example of yet another exotic adjustment made to the standard model. Devised in 1981 to resolve paradoxes arising from an older version of the Big Bang, the theory holds that the early universe expanded exponentially fast for a fraction of a second after the Big Bang

That’s a start I guess but then we get this absurd rationalization for simply accepting the invisible and entirely ad hoc components of the SMoC:

There is nothing inherently fishy about these features of the standard model. Scientists often discover good indirect evidence for things that we cannot see, such as the hyperdense singularities inside a black hole.

Let’s be clear here about this so-called “indirect evidence”; all of it essentially boils down to model-dependent inference. Which is to say, you cannot see any evidence for these invisible and/or impossible (singularities) things unless you peer through the distorting lenses of the simplistic mathematical models beloved of modern theoretical physicists. People who believe that mathematical models determine the nature of physical reality are not scientists, they are mathematicists and they are deluded – they believe in things that, all the evidence says, are not there.

Not only are mathematicists not scientists, they are not good mathematicians either. If they were good at math and found that one of their models was discordant with physical observations they would correct the math to reflect observations. What mathematicists do is correct reality to fit their math. That is where the dark sector (dark matter & dark energy) comes from – they added invisible stuff to reality to make it fit their broken model.

A mathematician did come up with a correction to Newtonian dynamics that had been inaccurately predicting the rotation curves of disk galaxies. Mordehai Milgrom developed MOND (Modified Newtonian Dynamics) in the 1980s and it was quite successful in predicting galactic disk dynamics.

Unfortunately the mathematicists had already off-loaded their problem onto reality by positing the existence of some unseen dark matter. All you have to know about the state of modern theoretical physics is that after 40 years of relentless searching and failure to discover any empirical evidence there remains a well-funded Dark Matter cottage industry, hard at work seeking evidence for the non-existent. This continuing search for that which is not there represents a betrayal of science.

It might appear that the authors here are not mathematicists given that they seem to be suggesting that the SMoC is not sacrosanct and needs to be reconsidered in its entirety:

We may be at a point where we need a radical departure from the standard model, one that may even require us to change how we think of the elemental components of the universe, possibly even the nature of space and time.

Sounds promising but alas, the reconsideration is not to be of the foundational assumptions of the model itself but only certain peripheral aspects that rest on those assumptions such as “…the assumption that scientific laws don’t change over time.” Or they suggest giving consideration to this loopy conjecture: “…every act of observation influences the future and even the past history of the universe.”

What the authors clearly do not wish to reconsider is the model’s underlying concept of an Expanding Universe. That assumption – and it is only an assumption of the model – was adopted 100 years ago at a time when it was still being debated whether the galaxies we observed were a part of, or separate from, the Milky Way. It was, in other words, an assumption made in ignorance of the nature and extent of the Cosmos as we now observe it. The authors treat the Expanding Universe concept as though it had been handed down on stone tablets by some God of Mathematicism:

A potent mix of hard-won data and rarefied abstract mathematical physics, the standard model of cosmology is rightfully understood as a triumph of human ingenuity. It has its origins in Edwin Hubble’s discovery in the 1920s that the universe was expanding — the first piece of evidence for the Big Bang. Then, in 1964, radio astronomers discovered the so-called Cosmic Microwave Background, the “fossil” radiation reaching us from shortly after the universe began expanding.

For the record, Edwin Hubble discovered a correlation between the redshift of light from a galaxy and its distance. That is all he discovered. It is an assumption of the model that the redshift is caused by some form of recessional velocity. It is also an assumption of the abstract mathematical physics known as the FLRW equations that the Cosmos is a unified, coherent, and simultaneously existing entity that has a homogeneous and isotropic matter-energy distribution. Both of those assumptions have been falsified by observations and by known physics.

Also for the record it should be noted that, prior to the discovery of the Cosmic Microwave Background Radiation, predictions by Big Bang cosmologists ranged over an order of magnitude and did not encompass the observed 2.7 K value. At the same time scientists using thermodynamic considerations made more accurate predictions.

The belief in an Expanding Universe has no scientific basis. It is a mathematicist fantasy, and until that belief is set aside, the Standard Model of Cosmology will remain a crappy, deluded fairy tale that does not in any objective way resemble the magnificent Cosmos we observe.

Spherical Wavefronts & Cosmological Reality

Expanding Spherical Wavefronts are standard physics:

Credit: Gong Gu, https://fr.slideserve.com/oster/ece341-electromagnetic-fields-powerpoint-ppt-presentation

The Expanding Spherical Wavefronts depicted above are physical entities. They illustrate the behavior of light as emitted omnidirectionally by a “point source” emitter. Point source, as always in physics, is a mathematical term of art, there being no physical meaning to the mathematical concept of a dimensionless “point”. A source can be treated mathematically as point-like, however, for an observer distant enough that the emitting source’s dimensions are small relative to the radial distance to the observer.

In the foregoing sense, a typical galaxy can be treated as a point source at large (>100 million light years) distance. The nested shells of the model can be considered as representing either successive positions of a single wavefront over time or an instantaneous representation of continuously emitted successive wavefronts from a typical, omnidirectional emitter such as a galaxy.

This nested shell model can also be invoked to illustrate a ubiquitous inverse electromagnetic phenomenon. Replacing the emitter with an observer, the nested shells can be seen as representing notional spheres existing at various, arbitrary radial distances from the observer. At any given radial distance of cosmological significance from the observer the notional shell will have on it some number of galaxies. Such a notional shell also defines a notional volume which must also contain some number of galaxies. This geometrical situation is only relevant to the observer; it is not, as in the ESW case, a physical entity.

Elaborating a bit, we can define a cosmological radial unit of r = 100 million light years. That radial unit then defines a sphere with a surface area proportional to r² and a volume proportional to r³. For illustrative purposes we make a simplifying (and unrealistically low) assumption that any r³ unit volume contains on average 1000 galaxies.

Observers whose observational range extends out 1 radial unit will observe their “universe” to contain 1000 galaxies. If those same observers improve their technology so that their range of observation extends to 2 radial units they will find themselves in a “universe” that contains 8000 galaxies. If their range doubles again to 4r their “universe” will now contain 64,000 galaxies.

Every time the observational range doubles the total number of galaxies contained in the newly expanded “universe” will increase by a factor of 8, or 2³. Of that 8-fold increase, 7/8 of the total number of galaxies will lie in the newly observable portion of the total volume. This all follows from straightforward geometrical considerations.
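The doubling argument can be sketched numerically. This is a toy calculation using the illustrative (and, as noted, unrealistically low) assumption of 1000 galaxies per unit volume, with one radial unit r = 100 million light years:

```python
# Numerical sketch of the geometric scaling argument, using the
# text's illustrative assumption of 1000 galaxies per unit volume,
# where one radial unit r = 100 million light years.
GALAXIES_PER_UNIT_VOLUME = 1000

def galaxies_within(radius_units: int) -> int:
    """Total galaxies inside a sphere of the given radius, in r units.
    The enclosed volume, and hence the count, scales as the cube of r."""
    return GALAXIES_PER_UNIT_VOLUME * radius_units ** 3

for r in (1, 2, 4, 8):
    print(f"range {r}r: {galaxies_within(r)} galaxies")
# Each doubling of range multiplies the count by 2**3 = 8, and 7/8 of
# the galaxies then lie in the newly observable portion of the volume.
```

Running this reproduces the counts used in the text: 1000, 8000, 64,000 and 512,000 galaxies at ranges of 1r, 2r, 4r and 8r.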

Now let us return to the shell model centered on an omnidirectional emitter. The same geometric considerations apply here but this time with respect to an Expanding Spherical Wavefront. At 1r a wavefront will have encountered 1000 galaxies, at 4r, 64,000 galaxies and at 8r it will have encountered a total of 512,000 galaxies. As mentioned earlier, these numbers may be unrepresentative of the actual number of galaxies encountered, which could be considerably higher.

When an ESW encounters a galaxy some portion of that wavefront is absorbed by the galaxy, representing a loss of energy by the wavefront and a corresponding gain of energy by the galaxy. This leads to two further considerations, the first related to direct observations. An ESW will lose energy as it expands in proportion to its volumetric expansion, assuming a constant average galactic density. The energy loss will be insignificant for each individual galactic encounter but the aggregate loss will grow as the cube of the radial distance. An increasing loss of energy with distance is an observed fact (cosmological redshift) for the light from galaxies at large (>100 Mly) cosmological distances.
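As an illustrative toy model of that aggregate loss: assume, hypothetically, a fixed tiny fraction of the wavefront's original energy is absorbed per galaxy encountered. The per-galaxy absorption fraction below is a placeholder, not a measured quantity; only the cubic growth of the cumulative loss matters here.

```python
# Toy model of cumulative ESW energy loss. Assumptions (illustrative
# only): 1000 galaxies per unit volume (r in 100 Mly units), and a
# fixed fraction EPS of the original energy absorbed per galaxy.
DENSITY = 1000     # galaxies per unit volume
EPS = 1e-8         # hypothetical per-galaxy absorption fraction

def remaining_fraction(radius_units: float) -> float:
    """Fraction of the wavefront's original energy left at radius r.
    Galaxies encountered grow as r**3, so the aggregate loss does too."""
    encountered = DENSITY * radius_units ** 3
    return max(0.0, 1.0 - EPS * encountered)

# The loss per galaxy is tiny, but the cumulative r**3 growth exhausts
# the wavefront at a finite radius -- a notional "cosmic wall":
r_wall = (1.0 / (EPS * DENSITY)) ** (1.0 / 3.0)
```

With these placeholder numbers the wavefront is exhausted at roughly 46 radial units; the point of the sketch is only that any fixed per-galaxy loss, however small, implies a finite range.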

The second consideration is that in some finite time period relative to the emitter all of an ESW’s energy will be absorbed by intervening galaxies (and any other non-luminous baryonic matter). The cosmological range of an ESW is inherently limited – by standard physical considerations. In a sense, there is a notional cosmic wall, relative to the emitter, beyond which its ESWs cannot penetrate.

Reverting once again to the observer’s point of view, it follows that the observers cannot receive electromagnetic signals from sources that have reached the limits of their range – the cosmic wall discussed in the previous paragraph. It also follows directly that the observer is surrounded by a notional cosmic wall, relative only to the observer, beyond which more distant emitters cannot be observed. This wall has no physical significance except from the observer’s local Point-Of-View – it is the aggregate of all the ESW notional walls that surround the observer.

That notional wall is real however in the sense that it defines the limits of any observer’s observational range, just as the wall encountered by an ESW limits the range of its expansion. In both cases we are dealing with a relativistic POV. The ESW just before it reaches its wall encompasses an enormous cosmological volume relative to its source emitter’s location. Observers, just before encountering their notional wall, observe an enormous cosmological volume relative to their three dimensional locale.

Keeping in mind the earlier discussion of the spherical geometry of ESWs, it is interesting to consider that immediately in front of an observer’s notional wall there lies a vast volume of some nominal thickness containing galactic emitters that are still observable. The number of those emitters has increased as R³, while their cosmological redshift has increased as R³, where R is the observer’s radial distance from the remote sources. Beyond that radial distance lies the notional wall at which all observations cease. In that context the observer’s wall can be thought of as a concave, spherical shell against which all foreground observations are projected. Because of the geometrical considerations mentioned, we should expect the most distant visible galaxies to cover the notional, concave, spherical surface of the observer’s cosmological view.

What we arrive at then is a picture much like that proposed by Olbers’ Paradox, the only difference being that Olbers did not account for the energy loss of light, so he expected that the night sky should be uniformly bright in the visible spectrum. What we observe, of course, is a night sky uniformly bright in the microwave spectrum.

The existence of the Cosmic Microwave Background is consistent with the galaxy distribution and energy loss to be expected by using the Expanding Spherical Wavefront framework of standard physics to model the Cosmos we observe. The only assumptions necessary to achieve this result are of an average galactic density on cosmological scales and that the field of galaxies extends sufficiently far for the geometrical arguments to hold.

The Radiation Density Gradient II

I want to correct and extend a point raised in the thought experiment of the previous post. Under consideration: a blackbody the size and density of the sun placed in the InterGalactic Medium. Such a body should come to equilibrium with the Ambient Cosmic Electromagnetic Radiation. The ACER has never been properly accounted for in its entirety, though a preliminary effort has been made.

The blackbody, being in equilibrium, is emitting as much energy as it is receiving and it therefore has an energy density gradient surrounding it, consisting of the outbound radiation which drops off in density as 1/r², where r is the radial distance from the center of the blackbody. The underlying ACER density (presumed approximately constant) does not change with distance and may well be considered an inertial field.

Now we flip the fusion switch and make the blackbody a more realistic astronomical object, a star. Compared to the blackbody, this star has a relatively enormous radiation density gradient consisting of all the omnidirectionally emitted radiation produced by the star. The density of that radiation will again drop off as 1/r².

When a remotely sourced electromagnetic wave passes close by a star the wave is observed to curve as if it were traversing a medium with a density gradient. This is commonly attributed to a gravitational field though such a field has never been observed.

What is observed is a curvature of light and a radiation density gradient. It strains credulity to believe that those two physical facts are uncorrelated. This in turn suggests that observed gravitational effects, commonly attributed to an unobserved gravitational field, are in fact a consequence of matter-electromagnetic energy interactions.

The Radiation Density Gradient

A thought experiment: Imagine a spherical blackbody the size and density of the sun. Now place that body deep in the InterGalactic Medium. In such a location a blackbody will be absorbing all of the Ambient Cosmic Electromagnetic Radiation that flows continuously through any location in the IGM. Eventually we should expect the sphere to settle into a state of equilibrium with the ACER, continuously absorbing and reemitting all the energy it receives.

This means that there must exist a Radiation Density Gradient surrounding the blackbody consisting of all the inbound and outbound radiation. By normal geometric considerations we should expect the gradient to drop off as 1/R², R being the radial distance from the center.

The question is, just how much energy is that? The answer depends on the surface area of the sphere, which is easy to calculate, and the energy density of the ACER, which doesn’t seem to be cleanly known. Various frequency bands have been observed to varying degrees. This recent survey of the Cosmic spectrum as a whole suggests that the aggregate nature, behavior and effect of the ACER has not been properly and fully assessed.

Given some reasonable value for the ACER density it would be possible to estimate the RDG of the sphere. With a reasonable value for the RDG a calculation could then be made, using optical considerations, of the curvature to be expected for the path of a light ray passing through the RDG, feeding that curved light back into the RDG calculation. The result could then be compared to the standard curvature predicted by General Relativity but first the additional energy directly radiated by the sun would have to be factored in. Something to think about. Somebody else will have to do the math though. That’s not my gig.

Bohmian Mechanics & A Mathematicism Rant

26JUN22 Posted as a comment to this Quanta article. Probably won’t be published there until tomorrow.

Contextuality says that properties of particles, such as their position or polarization, exist only within the context of a measurement.

This is just the usual Copenhagen mush, that substitutes a feckless by-product of the pseudo-philosophy of mathematicism for scientific rigor. The rather irrational, anthropocentric view that a particle doesn’t have a position until it is measured is entirely dependent on the belief that the Schrodinger wavefunction is a complete and sufficient description of physical reality at the quantum scale. It is not.

The Schrodinger equation only provides a statistical distribution of the possible outcomes of a measurement without specifying the underlying physical processes that cause the observed outcomes. The only scientifically reasonable conclusion would seem to be that the equation constitutes an incomplete description of physical reality. The Copenhagen interpretation of QM – that the wavefunction is all that can be known – is not a rational scientific viewpoint. In physical reality things exist whether humans observe them or not, or have models of them or not.

There has long been a known alternative to the wavefunction-only version of QM, one that was championed by John Bell himself. In Bohmian mechanics, in addition to the wavefunction, there is a particle and a guiding wave. In that context the wavefunction provides the outcome probabilities for the particle/guiding-wave interactions. Bohmian mechanics constitutes a sound scientific hypothesis; the Copenhagen interpretation (however defined) offers nothing but incoherent metaphysical posturing. As in:

The physicists showed that, although making a measurement on one ion does not physically affect the other, it changes the context and hence the outcome of the second ion’s measurement.

So what is that supposed to mean exactly, in physical terms? The measurement of one ion doesn’t affect the second ion but it does alter the outcome of the second ion’s measurement? But what is being measured if not the state of the second ion? The measurement of the second ion has changed but the ion itself is unaltered, because the “context” changed? What does that mean in terms of physics? Did the measurement apparatus change but not the second particle? It’s all incoherent and illogical, which is consistent with the Copenhagen interpretation I suppose, but that’s not saying much.

Bohmian mechanics makes short work of this matter. There are two charged particles, each with a guiding wave; those guiding waves interact in typical wave-like fashion in the 4 dimensional frame of electromagnetic radiation. The charged particles are connected in, by, and across that 4D frame. By common understanding, such a 4D frame has no time dimension. That is what accounts for the seemingly instantaneous behavior. That’s physics, it could be wrong, but it is physics. Contextuality is math; math is not physics.

Theoretical physics in its current guise is an unscientific mess because physical reality has been subordinated to mathematical and metaphysical considerations. And so we have the ongoing crises in physics in which the standard models are so discordant with physical reality in so many different ways that it seems difficult to say what precisely is wrong.

The simple answer is that you can’t do science that way. Attempting to build physics models outward from the mathematical and metaphysical realms of the human imagination is wrong. Basing those models on 100 year old assumptions and holding those outdated assumptions functionally inviolable is wrong.

Science has to be rooted in observation (direct detection) and measurement. Mathematics is an essential modeling tool of science but it is only a tool; it is not, of itself, science.

Theoretical physics, over the course of the 20th century, devolved into the study of ever more elaborate mathematical models, invoking ever more elaborate metaphysical conjectures of an invisible realm knowable only through the distorted lenses of those standard models.

Physical reality has to determine the structure of our theoretical models. Currently theorists with elaborate models dictate to scientists (those who observe, measure, and experiment) that they must search for the theoretical components of their models, or at least some signs and portents thereof. Failure to find the required entities and events does not constitute a failure of the model however, only a failure, thus far, of detection (dark matter, etc.) or the impossibility of detection (quarks, etc.). In any event the models cannot be falsified; they need only be modified.

Modern theoretical physics is an exercise in mathematicism and it has come to a dead end. That is the crisis in physics and it will continue until the math first approach is abandoned. It is anybody’s guess when that will happen. The last dark age of western science persisted for a millennium.

The Mathematicist’s Tale

19Jun22 The following was posted as a comment to this Medium post by Eric Siegel. It has been slightly edited.

This is a good overview of the illogical mathematicism at the root of the inane Big Bang model. The Friedmann equation is indeed the headwaters of all the nonsense that currently engulfs and overwhelms any possibility of a meaningful scientific account of the Cosmos.

The Friedmann equation rests not only on the two simplifying assumptions of mathematical convenience, isotropy and homogeneity, the author cites. There was also an unstated but implicit assumption that the Cosmos could be treated as a unified, coherent, simultaneous entity, a Universe.

Further it was assumed that the field equations of General Relativity, derived in the context of the solar system, could be stretched to encompass this imaginary Universe. The results speak for themselves.

The standard model of cosmology is an incoherent, unscientific mess. It claims that once upon a time 13.8 billion years ago the entirety of the Cosmos was compressed into a volume smaller than a gnat’s ass that began expanding for an inexplicable reason, then accelerated greatly due to the fortuitous intervention of an invisible inflaton field, which set invisible spacetime expanding at just the right rate to arrive at the current state of the Universe, which we are told is 95% composed of some invisible matter and energy that has no other purpose than to make the model agree with observations; the remaining 5% of this Universe is the stuff we actually observe.

To be clear, none of the entities and events the standard model describes are part of the Cosmos we actually observe, except for that rather trivial 5% of the BB Universe. That 5% of the model is the Cosmos we observe.

Science is the study of those things that can be observed and measured. At some point in the mid 20th century theoretical physicists adopted the conceit that science was the study of mathematical models. This categorical error was compounded by treating scientific hypotheses such as “universal expansion” as axioms (true by definition). In science, all truths are provisional; nothing can be held true by definition.

Axioms belong to the mathematical domain. Math is not science and the mathematicist belief that math underlies and determines the nature of physical reality has no scientific basis. That the Cosmos is a unified, coherent and simultaneously existing entity can only be true if the Cosmos had a miraculous simultaneous origin – the Big Bang.

The problem with miracles is that they are not scientific in nature; they cannot be studied only believed in. Believing in the Big Bang and weaving a fantastical account of an imaginary Universe is a mathematical/metaphysical endeavor that has nothing to do with science or physical reality.

Imaginary Universe? That’s right, the Universe of the Big Bang has no basis in any known physics. In the physics we know, there is a finite, maximum limit to the speed of electromagnetic radiation – 3×10⁸ meters per second (186,000 miles per second). It is via electromagnetic radiation that we observe the Cosmos.

The galaxy nearest to our own is Andromeda; it is detectable by the unaided eye as a fuzzy patch in the night sky. Andromeda is about 2.5 million light years from Earth. Everything we know about Andromeda is 2.5 million years out of date; of its current state we do not have and cannot have any direct knowledge.

At the far extent of our observational range we observe galaxies that are about 10 billion light years away. We do not have and cannot have any knowledge of the current state of any of those galaxies. The same argument holds for all the other galaxies we observe.

Galaxies are the primary constituents of the Cosmos we observe. It follows, therefore, that we do not and cannot have knowledge of the Cosmos’ current state. The very idea of the Cosmos having a current state is scientifically meaningless. Mathematicists believe otherwise:

If you want to understand the Universe, cosmologically, you just can’t do it without the Friedmann equation. With it, the cosmos is yours.


See that’s all you need, a simple mathematical model derived 100 years ago by “solving” the field equations of General Relativity (which does not have a universal frame) for a simplistic toy model of the Cosmos (that does have a universal frame). Suddenly you can know everything about everything, even things you can’t possibly observe or measure. That’s mathematicism hard at work creating an imaginary Universe for mathematicists to believe in. The standard model of cosmology has no scientific basis; it is a nerd fantasy and it is scientifically useless.

The Sorry State Of Particle Physics

This paper had its 15 minutes of fame (or infamy) a few weeks back. It’s quite the piece of work, claiming as it does to have made a high precision measurement of the mass of a W particle. The authors (there appear to be more authors than sentences in the paper) fail to explain, of course, how it is possible to measure the mass of a particle that has never been observed.

No explanation is necessary, of course, if you are a member of the peculiar cult of mathematicism that believes modern mathematical models determine the nature of physical reality. If you are of the more realistic opinion that physical reality should determine the nature of our mathematical models you will be considered hopelessly naive by the cult and quite possibly a crackpot. That the cult of mathematicism has made an incoherent and absurd mess of theoretical physics is as factually undeniable as it is flamboyantly denied.

For the record, there has never been a direct detection of a W particle. According to the standard model of particle physics (SM) the half-life of the W boson is 3×10⁻²⁵ s, a time increment so small as to render the W particle’s existence (in physical reality) indistinguishable from its non-existence. In the fantasy realm of the mathematicist this is not a problem; the W particle exists in the model and therefore it must exist in physical reality even if it is undetectable.
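For scale, it is easy to compute how far such a particle could travel, even moving at light speed, within the quoted half-life. This is an illustrative calculation, not taken from the paper:

```python
# How far could a W boson travel in one half-life, even at light speed?
C = 3.0e8          # speed of light, m/s
HALF_LIFE = 3e-25  # W boson half-life quoted above, seconds

# Maximum distance traversed in one half-life:
max_path = C * HALF_LIFE   # 9e-17 m, roughly a tenth of a proton radius
```

A path length far smaller than a single proton is, whatever else one makes of it, not something any detector images directly.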

So how then can the authors claim to have made a high precision measurement of an invisible particle? The claim is hyperbole; no high precision measurement of a nonexistent W boson has been made. What has transpired is simply a lot of data massaging of pre-existing results of collider experiments run between 2002 and 2011, in which the actual observations of quotidian particles like electrons and muons are treated as by-products of the decay of the unobservable W, within the context of the SM that assumes the W’s existence.

If you wade through the paper’s turgid account of all the data massaging that went into this bogus claim of a precision measurement you will encounter a single sentence that gives the game away:

The W boson mass is inferred from the kinematic distributions of the decay leptons.

And that is the bottom line here: a W boson mass inferred from the observation of electrons and muons (leptons), in the context of a model that assumes (against the evidence) the W boson’s existence, is not in any meaningful, scientific sense a high precision measurement. The claim that it is, is simply, in the technical sense, high order bullshit.

Why The Cosmos Is Not A Universe

One of the fundamental assumptions underlying the standard model of cosmology, commonly known as the Big Bang, is that the Cosmos comprises a unified, coherent and simultaneous entity – a Universe. It is further assumed that this “Universe” can be mathematically approximated under that unitary assumption using a gravitational model, General Relativity, that was devised in the context of our solar system. At the time that assumption was first adopted the known scale of the Cosmos was that of the Milky Way, our home galaxy, which is orders of magnitude larger and more complex than the solar system.

Subsequently, as the observed Cosmos extended out to the current 13 billion light year range, it has become clear that the Cosmos is orders of magnitude larger and more complex than our galaxy. The resulting Big Bang model has become, as a consequence, absurd in its depiction of a cosmogenesis and ludicrous in its depiction of the “current state of the Universe,” as the model attempts to reconcile itself with the observed facts of existence.

It will be argued here that the unitary conception of the Cosmos was at its inception, and remains now, as illogical as it is incorrect.

I Relativity Theory

The unitary assumption was first adopted by the mathematician Alexander Friedmann a century ago as a simplification employed to solve the field equations of General Relativity. It imposed a “universal” metric or frame on the Cosmos. This was an illogical or oxymoronic exercise because a universal frame does not exist in the context of Relativity Theory. GR is a relativistic theory because it does not have a universal frame.

There is, of course, an official justification for invoking a universal frame in a relativistic context. Here it is from a recent Sabine Hossenfelder video:

In general relativity, matter, or all kinds of energy really, affect the geometry of space and time. And so, in the presence of matter the universe indeed gets a preferred direction of expansion. And you can be in rest with the universe. This state of rest is usually called the “co-moving frame”, so that’s the reference frame that moves with the universe. This doesn’t disagree with Einstein at all.

The logic here is strained to the point of meaninglessness; it is another example of the tendency of mathematicists to engage in circular reasoning. First we assume a Universe, then we assert that the universal frame which follows of necessity must exist, and therefore the unitary assumption is correct!

This universal frame of the BB is said to be co-moving and therefore everything is supposedly OK with Einstein (meaning General Relativity, I guess) too, despite the fact that Einstein would not have agreed with Hossenfelder’s first sentence; he did not believe that General Relativity geometrized gravity, nor did he believe in a causally interacting spacetime. The universal frame of the BB model is indeed co-moving in the sense that it is expanding universally (in the model). That doesn’t make it a non-universal frame, just an expanding one. GR does not have a universal frame, co-moving or not.

Slapping a non-relativistic frame on GR was fundamentally illogical, akin to shoving a square peg into a round hole and insisting the fit is perfect. The result though speaks for itself. The Big Bang model is ludicrous and absurd because the unitary assumption is wrong.

II The Speed of Light

The speed of light in the Cosmos has a theoretical maximum of approximately 3×10^8 meters per second under inertial and near-inertial conditions. The nearest star to our own is approximately 4 light years away; the nearest large galaxy is 2.5 million LY away; the furthest observed galaxy is 13.4 billion LY.* This means that our current information about Proxima Centauri is 4 years out of date, for Andromeda it is 2.5 million years out of date, and for the most distant galaxy it is 13.4 billion years out of date.
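The arithmetic here is a one-liner – a source D light-years away is seen as it was D years ago – but it can be made explicit. A minimal sketch (the distances are the approximate round figures quoted above):

```python
# Light-travel delay: a source D light-years distant is seen as it was D years ago.
# Distances are approximate round figures, as in the text above.
C = 299_792_458                      # speed of light, m/s
YEAR_S = 365.25 * 24 * 3600          # seconds per (Julian) year
LY_M = C * YEAR_S                    # metres per light-year

sources = {"Proxima Centauri": 4.25, "Andromeda": 2.5e6, "GN-z11": 1.34e10}
for name, d_ly in sources.items():
    delay_yr = (d_ly * LY_M / C) / YEAR_S   # travel time; equals d_ly by construction
    print(f"{name}: information is {delay_yr:,.2f} years out of date")
```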

Despite this hard limit on our information about the Cosmos, the BB model leads cosmologists to believe themselves capable of making grandiose, and of course unverifiable, claims about the current state of the Cosmos they like to call a Universe. In the BB model it is easy to speak of the model Universe’s current state, but that is just another example of the way the BB model fails to accurately depict the Cosmos we observe.

In reality we cannot have, and therefore do not have, information about a wholly imaginary global condition of the Cosmos. Indeed, the Cosmos cannot contain such knowledge.

Modern cosmologists are given to believing they can have knowledge of things that have no physical meaning because they believe their mathematical model is capable of knowing unknowable things. That belief is characteristic of the deluded pseudo-philosophy known as mathematicism; it is a fundamentally unscientific conceit.

III The Cosmological Redshift

The second foundational assumption of the BB model is that the observed redshift-distance relationship found (or at least confirmed) by the astronomer Edwin Hubble in the late 1920s was caused by some form of recessional velocity. Indeed, it is commonly stated that Hubble discovered that the Universe is expanding. It is a matter of historical record, however, that Hubble himself never completely agreed with that view:

To the very end of his writings he maintained this position, favouring (or at the very least keeping open) the model where no true expansion exists, and therefore that the redshift “represents a hitherto unrecognized principle of nature”.

Allan Sandage

However, for the purposes of this discussion, the specific cause of the cosmological redshift does not matter. The redshift-distance relation implies that if the Cosmos is of sufficient size there will always be galaxies sufficiently distant that they lie beyond the observational range of any local observer. Light from the most distant sources will be extinguished by the redshift before reaching any observer beyond the redshift-limited range of the source. Even in the context of the BB, it is generally acknowledged that the field of galaxies extends beyond the potentially observable range.

The extent of the Cosmos, therefore, is currently unknown and, to three dimensionally localized observers such as ourselves, inherently unknowable. A model, such as the BB, that purports to encompass the unknowable is fundamentally unscientific; it is, by its very nature, only a metaphysical construct unrelated to empirical reality.

IV The Cosmos – A Relativistic POV

Given the foregoing considerations it would seem reasonable to dismiss the unitary conception of the Cosmos underlying the Big Bang model. The question then arises, how should we think of the Cosmos?

Unlike the scientists of 100 years ago, for whom it was an open question whether or not the nearby galaxies were part of our own galaxy, modern cosmologists have a wealth of data stretching out to 13 billion light years. The quality and depth of that data fall off, however, the farther out we observe.

The Cosmos looks more or less the same in all directions, but it appears to be neither homogeneous nor isotropic nor of a determinable, finite size. That is our view of the Cosmos from here on Earth; it is our point of view.

This geocentric POV is uniquely our own and determines our unique view of the Cosmos; it is a view that belongs solely to us. Observers similarly located in a galaxy 5 billion light years distant from us would see a similar but different Cosmos. Assuming similar technology, looking in a direction opposite the Milky Way the distant observers would find in their cosmological POV a vast number of galaxies that lie outside our own POV. In looking back in our direction, the distant observer would not see many of the galaxies that lie within our cosmological POV.

It is only in this sense of our geocentric POV that we can speak of our universe. The contents of our universe do not comprise a physically unified, coherent and simultaneously existing physical entity. The contents of our universe in their entirety are unified only by our locally-based observations of those contents.

The individual galactic components of our universe each lie at the center of their own local POV universe. Nearby galaxies would have POV universes that overlap largely with our own. The universes of the most distant observable galaxies would overlap less than half of ours. Observers at the most distant reaches, in opposite directions in our universe, would not exist in each other’s POVs at all.

So what then can we say of the Cosmos? Essentially it is a human conceptual aggregation of all the non-simultaneously observable matter-energy systems we call galaxies. The field of galaxies extends omni-directionally beyond the range of observation for any given three-dimensionally localized observer, and the Cosmos is therefore neither simultaneously accessible nor knowable. The Cosmos has no universal frame and no universal clock ticking. As for all 3D observers, the Cosmos tails away from us into a fog of mystery, uncontained by space, time, or the human imagination. We are of the Cosmos but cannot know it in totality, because it does not exist on those terms.

To those who might find this cosmological POV existentially unsettling it can only be said that human philosophical musings are irrelevant to physical reality; the Cosmos contains us, we do not contain the Cosmos. This is what the Cosmos looks like when the theoretical blinders of the Big Bang model are removed and we adopt the scientific method of studying the things observed in the context of the knowledge of empirical reality that has already been painstakingly bootstrapped over the centuries by following just that method.

___________________________________________________

* In the funhouse mirror of the Big Bang belief system, this 13.4 GLY distance isn’t really a distance; it’s the time/distance the light traveled to reach us. According to the BB model, at the time the light was emitted the galaxy was only 2.6 GLY distant, but it is “now” 32 GLY away. This “now”, of course, implies a “universal simultaneity” which Relativity Theory prohibits. In the non-expanding Cosmos we actually inhabit, 13.4 GLY is where GN-z11 was when the light was emitted (if our understanding of the redshift-distance relation holds at that scale). Where it is “now” is not a scientifically meaningful question, because it is not an observable and there is, in the context of GR, no scientific meaning to a “now” that encompasses both ourselves and such a widely separated object.
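For what it’s worth, the BB model’s own bookkeeping quoted in this footnote can be reproduced by numerically integrating the standard flat-ΛCDM distance formulas. The sketch below assumes Planck-like parameter values (H0 ≈ 67.7 km/s/Mpc, Ωm ≈ 0.31) and the commonly cited GN-z11 redshift of z ≈ 10.96; it is a demonstration of the model’s internal arithmetic, not an endorsement of it:

```python
# Reproducing the BB model's distance bookkeeping for GN-z11 (z ~ 10.96).
# Flat LCDM with assumed Planck-like parameters; stdlib only.
import math

H0 = 67.7                                  # km/s/Mpc (assumed)
OM, OL = 0.31, 0.69                        # matter / dark-energy fractions (assumed)
Z = 10.96                                  # GN-z11 spectroscopic redshift
D_H = (299792.458 / H0) * 3.2616e-3        # Hubble distance c/H0, in Gly

def E(z):                                  # dimensionless Hubble rate
    return math.sqrt(OM * (1 + z) ** 3 + OL)

def integrate(f, a, b, n=20000):           # trapezoidal rule
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

comoving = D_H * integrate(lambda z: 1 / E(z), 0, Z)              # "now" distance
lookback = D_H * integrate(lambda z: 1 / ((1 + z) * E(z)), 0, Z)  # light-travel distance
emitted = comoving / (1 + Z)                                      # distance at emission

print(f"light-travel ~ {lookback:.1f} Gly, at emission ~ {emitted:.1f} Gly, 'now' ~ {comoving:.1f} Gly")
```

Run as written, this lands close to the 13.4, 2.6 and 32 GLY figures quoted above.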

Adventures in Theoretical Physics II – Fun with General Relativity

Well here’s a cute little video that manages to do a good job of conveying just how daft and detached from reality theoretical physics has gotten over the last century:

The first 11 minutes or so are effectively a sales pitch for one of the structural elements of the Big Bang Model – Spacetime. The deal is, you’re supposed to believe that the force of gravity is not really there – nothing is holding you to the surface of the earth, rather the earth is accelerating upward and pushing against you.

And the reason this is happening is that you are not following a curved path in Spacetime; according to the video you are being knocked off of that curved path by the earth, which is accelerating upwards into you, and that’s gravity, tada! How do we know this? Well that’s obvious: it’s in the math, and the math tells reality what’s going on, and if reality doesn’t like it, that’s too bad. So don’t go trusting your lying eyes, alright.

In addition to Spacetime, this fairy tale is predicated on a ridiculous over-extension of the Principle of Equivalence that Einstein used in developing General Relativity. Einstein was very clear that the POE applied only under the severely constrained circumstances of a thought experiment. His main purpose seems to have been to provide a physical interpretation for the observed equivalence of gravitational and inertial mass. Einstein presented the POE as informing his ideas about gravity.

The video ignores Einstein’s constraints and pretends the POE is fundamental to General Relativity, so it winds up insisting that things that are obviously not true in physical reality, are, nonetheless, true simply because the math can be framed that way – your lying eyes be damned.

We are told that a man falling off a roof is in the exact same situation as an observer in a non-accelerating rocket ship far from any gravitating body. This claim is made even though it is obviously not true; the falling man will be injured, if not killed, when he hits the ground, whereas no such fate will befall the observer in the rocket ship.

So the idea is, until the falling man meets his unfortunate fate, the situation is the same, and therefore both situations are the same, the different outcomes notwithstanding – because the math is the same. Observers free-falling in orbit won’t be able to tell they’re not in an inertial frame – unless they look out the window – so that’s just like being in an inertial frame too. Right, of course.

In a similar vein, the video insists that an observer in a rocket accelerating at 9.8 m/s^2 will not be able to tell the difference between that situation and standing on the surface of the earth. The presenter fails to mention, however, that this only holds as long as the observer doesn’t look out the window, which would reveal that the rocket, and therefore the observer, is not at rest on the surface of a large gravitating body, so the situation is not comparable to standing at rest on the surface of the earth. Also, if any observer steps off the rocket, they will be left behind as it accelerates away. But nevertheless, it’s all the same – as long as no one looks out the window, and maybe you remember that the earth is actually accelerating upwards under your feet, like the floor of the rocket. Sure, of course.

For the sake of introducing some sanity in this matter, here is Einstein on the POE. Note that the second paragraph completely contradicts the claims made in the video implying the equivalence of all inertial and non-inertial frames.

We must note carefully that the possibility of this mode of interpretation rests on the fundamental property of the gravitational field of giving all bodies the same acceleration, or, what comes to the same thing, on the law of the equality of inertial and gravitational mass…

Now we might easily suppose that the existence of a gravitational field is always only an apparent one. We might also think that, regardless of the kind of gravitational field which may be present, we could always choose another reference-body such that no gravitational field exists with reference to it. This is by no means true for all gravitational fields, but only for those of quite special form. It is, for instance, impossible to choose a body of reference such that, as judged from it, the gravitational field of the earth (in its entirety) vanishes.

Relativity: The Special and General Theory, Albert Einstein, authorized translation by Robert W. Lawson; original version 1916, translated 1920, appendices 3 and 4 added 1920, appendix 5 added to the English translation in 1954.

It is clear from this statement that the POE of Einstein’s thought experiment is the Galilean version, commonly referred to nowadays as the “Weak” POE. The so-called “Einsteinian” and “Strong” POEs of modern cosmology are post-Einstein formulations attributed initially to Robert Dicke, though there were doubtless others who perpetrated and embellished this nonsense. Neither extension of the POE has anything to do with the foundations of Einstein’s Relativity Theory. It is those mid-20th century extensions that are misleadingly presented in the video as fundamental features of General Relativity.

The POE, in its current, extended usage, is mostly just a conjecture of mathematical convenience, allowing theorists to use Special Relativity math instead of the more difficult General Relativity formulations. It also results in a theoretical claim that the speed of light in a vacuum is a universal constant. That claim contradicts both GR which predicts that the speed of light varies with position in a gravitational field and observations which confirm that prediction.
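The variation referred to here is, in mainstream terms, the weak-field result that the coordinate speed of light in a gravitational potential Φ is approximately c(1 + 2Φ/c²), the effect behind the measured Shapiro time delay. A back-of-envelope sketch of its size at the surface of the Sun (the constants are standard published values, not figures from this post):

```python
# Weak-field GR: coordinate speed of light ~ c * (1 + 2*phi/c^2), with phi = -GM/r.
# A rough sketch using standard solar constants.
C = 299_792_458          # speed of light, m/s
GM_SUN = 1.32712e20      # solar gravitational parameter, m^3/s^2
R_SUN = 6.957e8          # solar radius, m

phi = -GM_SUN / R_SUN                    # Newtonian potential at the solar surface
fractional_slowing = -2 * phi / C**2     # dimensionless, roughly a few parts per million
c_coord = C * (1 + 2 * phi / C**2)       # coordinate speed at the solar surface

print(f"fractional slowing ~ {fractional_slowing:.2e}")
print(f"coordinate speed ~ {c_coord:,.0f} m/s ({C - c_coord:,.0f} m/s below c)")
```

The slowing works out to roughly four parts per million, on the order of a kilometer per second, at the solar surface.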

This unwarranted belief that the speed of light is a universal constant has also produced a cottage industry of theorists expounding a theory of undetected structures called Black Holes with the physically absurd properties of an event horizon and a singularity. No such structures exist. The relativistic slowing of light in a gravitational field precludes their existence. It does not preclude the existence of massive high-density objects.

Ok, let’s grant that this video presentation is of dubious scientific quality and does not, perhaps, represent the consensus view of the scientific community, particularly with regard to the so-called Principle of Equivalence – although, if not the consensus, the Strong POE certainly commands significant support among theoretical cosmologists. The usual suspects will whine, of course, that pop-science presentations like this video cannot be trusted.

That complaint is also lodged against anything written for a general audience, even when the author is a fully accredited scientist with a relevant FAS (full alphabet soup) after their name. If it’s written so non-experts can understand it, then it is, on some level, wrong.

The reason for this situation is straightforward: much of what theoretical physicists believe cannot be translated into clear, logical statements of scientific fact. What you get instead is confident handwaving consisting of metaphysical assertions that have no factual basis in empirical reality, plus a lot of math. According to theorists this is because theoretical physics can only be properly understood by those steeped in years of study of the underlying mathematical esoterica that informs only the truly knowledgeable. To which the only proper retort is: math is not physics, and if your math cannot be translated into empirically verifiable physical terms, then your math is inadequate to the task of being a proper scientific model of physical reality.

The modern POE is just a conjecture of mathematical convenience, nothing more. Nonetheless, it permeates and perverts the scientific literature. Here is the Encyclopædia Britannica entry for the POE:

In the Newtonian form it asserts, in effect, that, within a windowless laboratory freely falling in a uniform gravitational field, experimenters would be unaware that the laboratory is in a state of nonuniform motion. All dynamical experiments yield the same results as obtained in an inertial state of uniform motion unaffected by gravity. This was confirmed to a high degree of precision by an experiment conducted by the Hungarian physicist Roland Eötvös. In Einstein’s version, the principle asserts that in free-fall the effect of gravity is totally abolished in all possible experiments and general relativity reduces to special relativity, as in the inertial state.

Britannica, The Editors of Encyclopaedia. “Equivalence principle”. Encyclopedia Britannica, 31 Mar. 2019, https://www.britannica.com/science/equivalence-principle. Accessed 6 June 2021.

It should be noted that, according to the encyclopedia’s referenced article on Roland Eötvös, his experiment “… resulted in proof that inertial mass and gravitational mass are equivalent…“, which is to say, that it demonstrated the Weak POE only. It is also clear, that the authors of this entry are confused about the distinctions between the three POEs. But what of that; it’s only an encyclopedia trying to make sense of the nonsensical world of the modern theoretical physicist and modern theoretical physics is an unscientific mess.

Adventures in Theoretical Physics I – Entropy

Updated 3Jun21

Entropy is one of those slippery concepts floating around in theoretical physics that defies easy explication. The following quote is from a small subsection of an extensive Wikipedia entry on entropy:

The following is a list of additional definitions of entropy from a collection of textbooks:

- a measure of energy dispersal at a specific temperature.

- a measure of disorder in the universe or of the availability of the energy in a system to do work.

- a measure of a system’s thermal energy per unit temperature that is unavailable for doing useful work.

In Boltzmann’s definition, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium. Consistent with the Boltzmann definition, the second law of thermodynamics needs to be re-worded as such that entropy increases over time, though the underlying principle remains the same.

Note that this is a “list of additional definitions”; there are seemingly innumerable others peppered throughout the article. It would be hard to caricature the overall impression of incoherence, oozing like quicksand and sucking under any rational attempt to say concisely what, exactly, entropy is, beyond the usual handwaving about a vaguely specified measure of disorder.
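For what it is worth, the Boltzmann definition quoted above can at least be made concrete: S = k·ln W, where W counts the microstates compatible with a macrostate. A toy sketch using the standard textbook example of N two-state “coins” (this example is mine, not the Wikipedia article’s):

```python
# Boltzmann entropy S = k * ln(W) for a toy system of N two-state "coins".
# W = number of microstates with exactly k heads = C(N, k); maximal at k = N/2.
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K

def boltzmann_entropy(n, k):
    w = math.comb(n, k)              # microstate count for the macrostate "k heads"
    return K_B * math.log(w)

N = 100
entropies = [boltzmann_entropy(N, k) for k in range(N + 1)]
print(f"S is maximal at k = {entropies.index(max(entropies))} heads of {N}")
```

The all-heads and all-tails macrostates each have a single microstate (W = 1, S = 0), while the 50/50 macrostate has the most, which is the sense in which “disordered” macrostates carry the most entropy.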

The first sentence of the Wiki article states unequivocally that “Entropy is… a measurable physical property…”. That statement is false. In section 7.8 it is made clear that entropy cannot be measured directly as a property. Rather, discrete energy transfers are dropped into an equation that ostensibly “describes how entropy changes dS when a small amount of energy dQ is introduced into the system at a certain temperature T.” The only measurements made are of energy and temperature. Entropy is a made-up mathematical concoction; it is not a real property of physical reality.
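The point about what is actually measured can be illustrated with the textbook case of slowly heating water, where dS = dQ/T integrates to ΔS = m·c·ln(T2/T1). Everything on the right-hand side is a measured mass, heat capacity, or temperature; the “entropy” is purely the output of the formula. A sketch (the specific heat is the standard figure for water, assumed constant over the range):

```python
# Entropy "change" computed entirely from measured quantities: dS = dQ / T.
# Heating 1 kg of water reversibly from 293 K to 373 K, with dQ = m * c_w * dT.
import math

m = 1.0          # mass of water, kg (measured)
c_w = 4186.0     # specific heat of water, J/(kg*K), assumed constant (measured)
T1, T2 = 293.0, 373.0   # start and end temperatures, K (measured)

# Closed form of the integral of (m * c_w * dT) / T from T1 to T2:
delta_S = m * c_w * math.log(T2 / T1)
print(f"delta S ~ {delta_S:.0f} J/K")
```

Note that nothing called “entropy” is ever read off an instrument; only energy and temperature are.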

The reader will also be pleased to note that the same article informs us that “In 1865, Clausius named the concept of “the differential of a quantity which depends on the configuration of the system,” entropy… “.

The real problem with the concept of entropy is not that it may be useful in doing some thermodynamic calculations; clearly, it is of some mathematical value. What it does not have is any clear explanatory power or physical meaning – and, well, math just isn’t physics. Nonetheless, entropy has been elevated to a universal, physical law – the Second Law of Thermodynamics – where the situation doesn’t get any less opaque. In fact, the opacity doesn’t get any more opaque than this gem:

Nevertheless, this principle of Planck is not actually Planck’s preferred statement of the second law, which is quoted above, in a previous sub-section of the present section of this present article, and relies on the concept of entropy.

Section 2.9 of the 2nd Law wiki entry above

Here is Planck’s quote from “…a previous sub-section of the present section of this present article…”:

Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged.

Of course, the only question that remains then is, what exactly is entropy?

The final word on the nature of entropy should be this comment by John von Neumann which appears in a sidebar of this section, wherein he advises Claude Shannon to call his information uncertainty concept entropy because “… In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.” Can’t argue with that logic. It appears that the definition of entropy is entropic. It is getting ever more disordered with the passage of time.

So what’s the problem? This is, after all, a relatively trivial matter compared to the Grand Delusions of Theoretical Physics, like Dark Matter, Dark Energy, Quarks, etc. Or is it? Actually, the Entropy Paradigm represents a major blind spot in fundamental physics. Theoretical physics has this misbegotten, laboratory-based idea that all ordered systems are in some sense “running down”, or losing the ability to do work, or becoming more disordered, or… something, called Entropy.

Entropy is all downhill, all the time. In the context of a designed laboratory experiment, the original, closed-system condition is some prearranged, ordered state that is allowed to “run down” or “become disordered” or whatever, over time, and that, more or less, is Entropy! Fine so far, but when you make Entropy a universal law – the Second Law of Thermodynamics – all is lost; or more exactly, half of reality is lost.

Why? Because there are no closed systems outside the laboratory. What you get when you ignore that fact is this kind of bloated nonsense:

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations – then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation – well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation. 

Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)

Entropy is correct if all it boils down to is this: ordered, closed systems will become disordered. But entropy doesn’t tell you how the systems came to be ordered in the first place. If you extrapolate from closed-system experiments a Universal Law that says, or implies, that all systems run down, but you have no theory of how those systems came to be ordered in the first place, you have exactly half a picture of reality – because over here in the real world, ordered systems do arise from disorder.

It’s hard to do good physics if you have this half-blind view of reality. You start babbling about the Arrow of Time, the heat death of the Universe, not to mention whatever this is. You may have to invoke an unobservable Creation Miracle like the Big Bang to get the ball rolling.

Babble Update (3Jun21)

“If you want your clock to be more accurate, you’ve got to pay for it,” study co-author Natalia Ares, a physicist at the University of Oxford, told Live Science. “Every time we measure time, we are increasing the universe’s entropy.”

Livescience.com

Meanwhile, physical reality doesn’t work like that. Physical reality is locally cyclic – like life itself. There is, for instance, a considerable body of observational evidence suggesting that galaxies reproduce themselves. That body of evidence has been dismissed with spurious statistical arguments, and the well-established astronomer who made the observations, Halton Arp, was denied telescope time to continue making them. No one has since been permitted to pursue that line of observational research.

Meanwhile, there is no comparable body of observational evidence for the Big Bang model’s account of galaxy formation, via accretion, from a primordial, post-BB cloud. Like the rest of the Big Bang’s defining features, galaxy formation by accretion is not an observed phenomenon in the Cosmos.

So an over-generalization, to the point of incoherence, of some simplistic closed-system laboratory experiments done in the 18th and 19th centuries has come to aid and abet the absurd Big Bang model of the Cosmos. Modern theoretical physics is an incoherent, unscientific mess.