
Space Cadet

I have to give Ethan Siegel credit for having fully absorbed the catechism of orthodox cosmology in all its illogical, not to mention unscientific, glory. Here, for example, is a recent Forbes post entitled How Does The Fabric Of Spacetime Expand Faster Than The Speed Of Light? It provides a good illustration of Siegel’s inability to distinguish between the factual and the unfounded theoretical aspects of the standard model of cosmology. But then, that’s the nature of the orthodox belief system known, somewhat sarcastically, as the Big Bang.

For instance, the article mentions the maximum light speed in a vacuum but fails to note that this maximum can only be achieved in an inertial frame, and inertial frames are only ever approximated in physical reality. That little qualification completely unravels this bit of received wisdom:

But light didn’t work that way; it always moves at the same speed through the vacuum of empty space, from every perspective imaginable.

That, in fact, represents a rather monumental failure of the modern cosmological imagination, forsaking, as it does, both Einstein’s understanding of Relativity Theory and the observational evidence. Not to mention the fact that there is no such thing as “empty space“. But then, facts are optional in the post-empirical realm of theoretical cosmology.

Energy Loss And The Cosmological Redshift

Modern cosmology has an interesting approach to the question of where the energy lost by light goes as that light becomes redshifted over large cosmological distances. There seem to be several, not entirely coherent, ways of approaching the question. Probably the most conventional is this one, where Ethan Siegel seems to say that a “universal” expansion is causing the energy loss while, at the same time, the energy lost is driving the expansion of the “Universe”:

The photons have an energy, given by a wavelength, and as the Universe expands, that photon wavelength gets stretched. Sure, the photons are losing energy, but there is work being done on the Universe itself by everything with an outward, positive pressure inside of it!

It is all well and good to claim that, mathematically, either view is correct; physically speaking, however, both claims are nonsense. One view posits a bounded, constrained “Universe” against which its contents are doing work by pushing the boundary outward. There is no evidence for this peculiar viewpoint, other than that you can arrive at it by assuming, without question, certain things the standard model assumes – without question. It is a mathematicist argument with no basis in the physics of observed phenomena.

The other possibility, from the mathematicist’s perspective, is that an otherwise imperceptible, but somehow substantive, spacetime is expanding and so stretching the wavelength of the photon. Again, this has no basis in observed phenomena; it is just an appeal to a purely theoretical construct, one which only serves to obscure whatever physics is actually taking place.

So, how can you account for the energy loss of a cosmologically redshifted photon without veering off into the metaphysical nerd-land presented by mathematicism? Simply put, you have to model light emitted from distant sources (galaxies, on the cosmological scale) as consisting of expanding spherical wavefronts. Those wavefronts can be thought of as hyperspheres, which are related to the mathematical-geometrical concept of a light cone.

The expanding spherical wavefront view of cosmological radiation rests only on the realistic assumption that galaxies radiate omnidirectionally. This is, in terms of modern cosmology, a simplifying assumption, one which eliminates the need for an expanding “Universe”. It is this expanding spherical wavefront phenomenon that has been misinterpreted in the standard model to imply a “universal” expansion. The only things undergoing expansion in the Cosmos are the expanding spherical wavefronts of light emitted by galaxies.
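As a minimal numerical sketch of this geometry (my own illustration, not anything from the original post), the energy flux carried by an omnidirectional wavefront dilutes with the square of its radius even though the total energy of the wavefront is conserved. The luminosity value is a placeholder:

```python
# Sketch: energy flux dilution of an expanding spherical wavefront.
# Assumes a source of luminosity L radiating omnidirectionally; the
# wavefront at radius r spreads the same total power over area 4*pi*r^2.
import math

L_SUN = 3.828e26          # solar luminosity, W
L_GALAXY = 1e10 * L_SUN   # rough Milky-Way-scale luminosity (assumed)
MPC = 3.086e22            # one megaparsec in metres

def flux(luminosity_w: float, radius_m: float) -> float:
    """Energy flux (W/m^2) through the wavefront at the given radius."""
    return luminosity_w / (4.0 * math.pi * radius_m ** 2)

for d_mpc in (1, 10, 100, 1000):
    print(f"{d_mpc:>5} Mpc: {flux(L_GALAXY, d_mpc * MPC):.3e} W/m^2")
```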

The theoretical light cone concept closely parallels the physical structure of an expanding spherical wavefront considered 4-dimensionally. In the theory of light cones, the surface of the light cone is the aggregate of all the hyperspheres, which are themselves sections of the cone. All of the points on the light cone lie at zero spacetime interval from the emission event – no spatial or temporal separation. The 4-dimensional surface of the light cone constitutes a simultaneity – in 4 dimensions.
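In conventional notation (a standard textbook statement, added here for reference rather than taken from the post), the future light cone of an emission event at the origin is the set of points at null interval from it, and its fixed-time sections are the spherical wavefronts:

```latex
% Null interval: every point on the future light cone satisfies
s^2 \;=\; c^2 t^2 \;-\; \left(x^2 + y^2 + z^2\right) \;=\; 0, \qquad t > 0
% The section at fixed t is the spherical wavefront of radius ct:
x^2 + y^2 + z^2 \;=\; (ct)^2
```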

Deploying this mathematical model in the context of the observed expanding wavefronts suggests that when a 3-dimensional observer interacts with the light cone at a specific (3 spatial + 1 temporal)-dimensional location, the observer is observing the simultaneous state of the entire expanding wavefront – which, in the model, is the state of the hypersphere of the light cone that intersects the observer’s 3+1-dimensional “now”.

A 3D observer cannot observe the entirety of the 4D light cone, only the portion of the hypersphere (spherical wavefront) with which it directly interacts. However, since all the points on the hypersphere are identical, information about the state of the hypersphere is available at the point of observation.

At large cosmological distances that wavefront will be observed to be redshifted, reflecting a net energy loss to the spherical wavefront. This energy loss is caused by the wavefront’s encounter with, and partial absorption by, all the intervening galaxies (and other matter) encountered over the course of its expansion.

This energy loss can be crudely estimated using the standard General Relativity equation for gravitational redshifting. That, in turn, suggests the possibility that all observed gravitational effects are a consequence of the interaction between 3-dimensional matter and 4-dimensional electromagnetic radiation.
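For reference, here is a minimal sketch of the standard gravitational redshift formula mentioned above (the Schwarzschild expression; the example mass and radius are placeholder values of my own choosing, not figures from the post):

```python
# Sketch: standard GR gravitational redshift for light climbing out of
# the potential well of mass M from radial coordinate r (Schwarzschild):
#   1 + z = 1 / sqrt(1 - r_s / r),  with r_s = 2GM/c^2.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def gravitational_redshift(mass_kg: float, r_m: float) -> float:
    """Redshift z of light emitted at radius r_m and received at infinity."""
    r_s = 2.0 * G * mass_kg / C**2          # Schwarzschild radius
    return 1.0 / math.sqrt(1.0 - r_s / r_m) - 1.0

# Example (placeholder values): light leaving the Sun's photosphere.
print(gravitational_redshift(M_SUN, 6.957e8))   # ~2.1e-6
```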

This post is based on a recent Quora answer.

Great Snipe Hunt Continues

You can always count on Dennis Overbye of the New York Times to run full-tilt with any Big Science (BS) press release that lands in his inbox. The latest example concerns yet another in the ongoing stream of “possible detections” in the unending snipe hunt semi-officially known as the Search for Dark Matter.

Dark matter has been experimentally sought for roughly 40 years; no convincing evidence for its existence has been uncovered in all that time. This latest bit of effusive hype (emanating, according to Overbye, from “…a multinational team of 163 scientists from 28 institutions and 11 countries…”) continues the now-long string of dark matter non-detections. Supposedly, the experiment under consideration was seeking a certain type of dark matter called WIMPs. The experiment failed in that regard, but hope springs eternal in the Dark Matter community.

In the elaborate and, needless to say, expensive detector, an excess of “events” was observed, which could be, it is now claimed, evidence of “axions“, or maybe of another hypothetical, a “neutrino magnetic moment”, or, sadly but more prosaically, of the presence, in undetectable amounts, of tritium, a well-known (and observed) isotope of hydrogen.

It should be noted that according to Overbye:

Simulations and calculations suggested that random events should have produced about 232 such recoils over the course of a year.

But from February 2017 to February 2018, the detector recorded 285, an excess of 53 recoils.

This over-represents the excess. The original paper gives the expectation value as 232 ± 15; measured against the upper end of that range (247), the excess events number only 38. The 285 total does not constitute direct evidence of anything but the “events” themselves. They constitute the entire data set from a year of observations on a one-off, purpose-built system that has failed in its intended purpose.
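A minimal sketch of the arithmetic in dispute here (the naive significance estimate at the end is my own back-of-the-envelope addition, combining Poisson and systematic uncertainties in quadrature; it is not a calculation from the paper):

```python
# Sketch: the disputed excess arithmetic.
import math

expected = 232.0    # expected background recoils per year
syst = 15.0         # quoted uncertainty on the expectation (+/- 15)
observed = 285.0    # recoils actually recorded

nominal_excess = observed - expected                 # 53, the figure Overbye quotes
conservative_excess = observed - (expected + syst)   # 38, measured from the upper bound

# Naive significance: Poisson noise on the count, systematic in quadrature.
sigma = math.sqrt(observed + syst**2)
print(nominal_excess, conservative_excess, nominal_excess / sigma)
# -> 53.0 38.0 ~2.3
```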

So we have a paucity of irreplicable evidence being repurposed to justify some spare theoretical fantasies that just happen to be lying around unused in the back corner of some dusty theoretical broom closet. On the other hand, “…tritium contamination will just be one more detail that has to be considered or calibrated in future detectors“. Another day, another retrenchment.

The overall effort here is of a piece with the Higgs boson “detection” at the Large Hadron Collider and the gravitational wave “detections” at LIGO. It is nothing but an exercise in data manufacturing and manipulation as a substitute for empirical science. It is the essence of BS, and this science-by-committee approach continues to make a mess of modern physics.

As to the perpetual snipe hunt, there is one clear message being delivered to the BS community by physical reality. It was as true 40 years ago as it is today: That Dark Matter shit ain’t here! When this simple fact is going to penetrate the thick wall of mathematicism shielding theoretical physicists from reality, however, is a question that only future historians will be able to answer.

The Big Bang Hustle

(Correction appended 2020-06-21)

Comes before us, then, the cosmologist duo of Luke A. Barnes and Geraint F. Lewis, a couple of Aussies with a Universe they want to sell you. The pitch, as presented in this promo for their new book, is that the Big Bang explains certain observations that cannot be explained by any competing model, and that any competing model must explain them. The argument rests on three points that aren’t really about observations at all: two of them are merely superficial claims to explain things that are essentially self-explanatory in terms of known physics, without invoking the big bang, while the third rests on the unjustified contention that the big bang interpretation is the only acceptable, existing explanation for the cosmological redshift.

The three points discussed in the video and their fundamental weaknesses are:

Olbers’ Paradox – Olbers’ paradox is inherently resolved by the cosmological redshift, and this resolution does not depend on the Big Bang assumption that the redshift is a consequence of some form of recessional velocity. The cosmological redshift exists, and if the cosmos is of sufficient size there will be a distance beyond which light from a galaxy is redshifted to a null energy state or, more likely, reduced to a minimum state where it pools with light from all other such distant sources to form a microwave “background” from the point of view of a local observer.

The Cosmological Redshift – That the redshift is caused by some form of recessional velocity, from which cosmologists reason backwards to the Big Bang event, is simply an assumption of the standard model. If the assumption is instead that what expands are the spherical wavefronts of light being emitted by galaxies, you get a picture that is consistent with observations and which can be treated with a standard General Relativity formalism to yield a cosmological redshift-distance relationship (a toy numerical sketch of such an energy-loss relationship follows this list) – no big bang, with its ludicrously inexplicable original condition, required.

Nucleosynthesis – The cosmic abundances of the elements are what they are. BB theorists have simply imported nuclear theory and fit it to their model. Model fitting is a technique as old as Ptolemy. As the linked article makes clear, there are many possible pathways to nucleosynthesis. The BB version also doesn’t even work that well.
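As promised above, here is a toy sketch of a generic energy-loss redshift–distance relation. It is my own illustration of the kind of alternative being gestured at, not the specific GR formalism the post has in mind: if a wavefront loses a fixed fraction of its energy per unit distance, the redshift grows exponentially with distance and reduces to a linear, Hubble-like law nearby.

```python
# Toy model: redshift from a constant fractional energy loss per unit distance.
# Assumption (mine): dE/dx = -alpha * E, so E(d) = E0 * exp(-alpha * d) and,
# since photon energy is proportional to 1/wavelength, 1 + z = exp(alpha * d).
import math

ALPHA = 2.3e-4   # fractional loss per Mpc (placeholder, tuned so the small-d
                 # slope mimics a Hubble-like constant of ~70 km/s/Mpc over c)

def redshift(d_mpc: float) -> float:
    """Redshift after propagating d_mpc megaparsecs under the toy loss law."""
    return math.exp(ALPHA * d_mpc) - 1.0

for d in (10, 100, 1000, 5000):
    print(f"{d:>5} Mpc  z = {redshift(d):.4f}")
# At small d, z ~ ALPHA * d (linear, Hubble-like); at large d it grows exponentially.
```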

Don’t know much about dark matter and dark energy… Barnes and Lewis kind of joke about this, which is appropriate, I guess, since, according to the standard model, 95% of the model’s Universe is composed of this unobserved and, by their account, poorly understood stuff. Which brings us to one of the central conceits of the video and, presumably, the book it is promoting: that the standard model satisfactorily “explains” all of our cosmological observations.

The problem with the standard model’s “explanations” for observed phenomena is precisely that they are not scientifically satisfactory. They are not satisfactory because “explanations” such as dark matter and dark energy are not empirically resolvable. Dark matter and dark energy are necessary to the standard model in order to fit the model to observations. Physical reality, however, has no use for either, inasmuch as no evidence for their physical existence can be found beyond this urgent need to somehow fit the standard model to observations.

By invoking, as necessary components of physical reality, entities for which there is no observational evidence, modern cosmology has descended into a pre-Enlightenment, medieval mind-set. Dark matter and dark energy are just the modern, secular version of angels and devils dancing on the head of a pin. Their scientific, “explanatory” power is nil.

The video also features a scientific error so egregious as to be almost breathtaking. At the 5:50 mark, in an attempt to justify the claim that only the expanding universe interpretation of redshift accounts for Olbers’ Paradox, Barnes makes the completely fallacious argument that as a luminous object gets more distant, its apparent size diminishes but its apparent brightness does not. That is simply wrong.

Even the most perfunctory treatment of light theory would include a discussion of the luminosity-distance relationship. That someone with a PhD (in science, I presume) would choose to ignore basic scientific knowledge in order to advance a spurious claim about the explanatory power of the standard model says all you need to know about the state of modern cosmology. It’s a mess.

Correction (2020-06-21): In a conversation with Louis Marmet over at A Cosmology Group, he pointed out that technically Barnes was correct; he was speaking of surface brightness, which does not change with distance for an extended object. Stars, however, are not resolvable as extended objects by the human eye, only by the largest telescopes. The apparent magnitude of point-like sources such as stars varies according to the luminosity-distance relationship. Barnes’ argument remains false – on its own terms.
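To make the distinction in the correction concrete, here is a minimal sketch (my own, using only the standard Euclidean inverse-square law, with no cosmological effects included): the flux from a point source falls as 1/d², while the surface brightness of a resolved extended object – flux divided by apparent solid angle – is independent of distance.

```python
# Sketch: point-source flux vs. surface brightness of a resolved object,
# in plain Euclidean space.
import math

def point_source_flux(luminosity: float, d: float) -> float:
    """Apparent flux of a point source: falls as 1/d^2."""
    return luminosity / (4.0 * math.pi * d**2)

def surface_brightness(luminosity: float, radius: float, d: float) -> float:
    """Flux per unit solid angle of a resolved object of physical radius
    `radius`: the 1/d^2 in flux cancels the 1/d^2 in solid angle."""
    flux = point_source_flux(luminosity, d)
    solid_angle = math.pi * (radius / d) ** 2   # small-angle approximation
    return flux / solid_angle

L, R = 1.0, 1.0
for d in (10.0, 100.0, 1000.0):
    print(point_source_flux(L, d), surface_brightness(L, R, d))
# Flux drops 100x per decade of distance; surface brightness stays constant.
```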

Forbes’ Physics Follies & The Broken Clock Syndrome

Sure enough, it had to happen. Ethan Siegel finally wrote a column for Forbes that isn’t just a thoughtless regurgitation of, and apologia for, the inanities of modern theoretical physics. In fact, it’s a column I can wholeheartedly endorse. It should be required reading for all physics students. A copy should be handed out with every science diploma. Here is the key takeaway:

Mathematics wasn’t at the root of the physical laws governing nature; it was a tool that described how the physical laws of nature manifested themselves. The key advance that happened is that science needed to be based in observables and measurables, and that any theory needed to confront itself with those notions. Without it, progress would be impossible.

Those are fine scientific sentiments, indeed! Unfortunately, there is no evidence of these noble precepts being applied in any of Siegel’s now numerous scientific writings for Forbes, or at least not in any I have read. There we find only the unthinking mathematicism that has turned modern theoretical physics into a caricature of bad science.

There is no more reliable purveyor of the modern scientific orthodoxy than Ethan Siegel. His Forbes column, Starts With A Bang, with the exception noted above, relentlessly flogs theoretical assertions that flagrantly violate the principles quoted above. Even in cases where he gets the orthodoxy completely wrong, he and his editors simply plow ahead, barely acknowledging an error.

So, you can still find this piece online despite the fact that the grandiose claims of the title (Scientists Discover Space’s Largest Intergalactic Bridge, Solving A Huge Dark Matter Puzzle), and of the concluding paragraphs are completely wrong. Here is the triumphal conclusion:

…If this same type of structure that exists between Abell 0399 and Abell 0401 also exists between other colliding clusters, it could solve this minor anomaly of the Bullet cluster, leaving dark matter as the sole unchallenged explanation for the displacement of gravitational effects from the presence of normal matter.

It’s always an enormous step forward when we can identify a new phenomenon. But by combining theory, simulations, and the observations of other colliding galaxy clusters, we can push the needle forward when it comes to understanding our Universe as a whole. It’s another spectacular victory for dark matter, and another mystery of the Universe that might finally be solved by modern astrophysics. What a time to be alive.

So what’s wrong? Well, immediately following the last paragraph is this subsequently added correction:

Correction: after a Twitter exchange with one of the study’s scientists, the author regrets to inform the reader that the acceleration imparted by the magnetic fields to the electrons along this intergalactic bridge is likely unrelated to the velocity anomaly of the Bullet cluster. Although both may be explained by hydrodynamic effects, the effects that cause this radio emission and the acceleration of electrons are unrelated to the measured high velocity of the Bullet cluster’s collisional elements and X-ray gas. Ethan Siegel regrets the error.

Oh. So this correction completely negates the claim that the observations, as described, of the colliding galaxy clusters Abell 399 and Abell 401 somehow clear up a known problem that the Dark Matter model has with another pair of colliding clusters known as the Bullet Cluster.

The proponents of Dark Matter like to cite the Bullet Cluster as strong evidence for DM, but their model cannot account for the high collisional velocity of the Bullet Cluster’s component clusters. It was this problem that Siegel incorrectly interpreted the Abell cluster observations to have solved. So this was just another strained attempt to justify the failed Dark Matter hypothesis (by inference only, of course), and even by the low bar of the modern scientific academy it was an abject failure.

Which brings us to a more recent piece featuring another central conceit of Big Bang Cosmology that, like Dark Matter, has no empirical evidence to support it. This is the belief that there exists a thing called “space” that physically interacts with the matter and energy in the cosmos. There is absolutely no empirical evidence to support this belief. Despite that absence of evidence, belief in a substantival “space” is widely held among modern cosmologists.

It is a characteristic of modern theoretical physics that a lack of empirical evidence does not, and cannot, diminish the claim to existence of any entity necessary to reconcile either of the standard models with observed reality. In the modern scientific paradigm, a theoretical model is the determinant of physical reality.

The entire premise of this article, then, is that something which cannot be demonstrated to exist must, nonetheless, have a specific physical characteristic in order for the modern conception of theoretical physics to be consistent with itself. Non-existent space must be continuous, not discrete, because otherwise modern theoretical physics doesn’t make sense.

But this is nothing more than an old whine encoded in a new frequency. It is not news that the modern conception of Relativity Theory is inconsistent with Quantum Theory. Modern theoretical physics, generally speaking, is inconsistent both with empirical reality and with itself. Both standard models make assertions about physical reality that cannot be empirically demonstrated to be physically meaningful.

The only evidence on offer for all the empirically baseless assertions of the two standard models is that they are necessary to make the models work. Once the models have been corrected, ad hoc, for any discrepancies with observed reality, the “success” of the models is touted as proof of their description of physical reality.

There is much assertive hand-waving going on in this article, as is typical of Siegel’s work. There is one particular assertion I’d like to single out for close examination:

In Einstein’s relativity, an observer who’s moving with respect to another observer will appear to have their lengths compressed and will appear to have their clocks run slow.

At first glance this seems a reasonably accurate statement, but it is, in fact, a false assertion. “Einstein’s relativity” encompasses both the Special and General theories. The statement as presented, in which a moving observer appears “to have their lengths compressed and… their clocks run slow”, applies only to the Special Relativity case involving two inertial frames. It absolutely does not apply to those conditions where General Relativity applies, i.e. non-inertial frames (see this previous post).

In the SR case, two inertial frames are moving with a constant velocity with respect to each other, and each observer sees the other’s lengths compressed and the other’s clocks slowed. Neither observer actually has lengths compressed or clocks slowed; they only appear that way to the remote observer.
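A minimal sketch of the symmetry of the SR case (standard textbook formulas; the velocity is an arbitrary example value of mine): each inertial observer applies the same Lorentz factor to the other, so the apparent dilation is perfectly reciprocal.

```python
# Sketch: reciprocal time dilation between two inertial frames (SR).
# Each observer computes the same Lorentz factor for the other.
import math

C = 2.998e8   # speed of light, m/s

def lorentz_gamma(v: float) -> float:
    """Lorentz factor for relative speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.6 * C                 # arbitrary example: relative speed 0.6c
gamma = lorentz_gamma(v)    # = 1.25 at 0.6c

# A sees B's clock tick slow by 1/gamma; B sees exactly the same of A.
print(f"gamma = {gamma:.3f}; each sees the other's clock rate x {1/gamma:.3f}")
```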

Under GR conditions, where a change of reference frame has taken place (via acceleration), there are physical consequences for the observer in the frame that has accelerated: in the accelerated frame, lengths are contracted in the direction of the acceleration, and clocks do run slower. Changing frames, in other words, has physical consequences.

Modern theorists like Siegel are, however, inclined to believe that the SR and GR cases can be conflated whenever it is convenient to do so, based only on an erroneous and logically unjustified over-extension of the Principle of Equivalence. On the basis of empirical observations and sound theoretical considerations, however, it is possible to say, unequivocally, that they are wrong, both theoretically and empirically. And you can’t be any more wrong than that.

There is only one way out of this theoretical mess and that is to apply the principles set forth in the No, The Universe Is Not Purely Mathematical In Nature column to the two standard models. On the evidence, unfortunately, Professor Siegel does not practice what he preaches, nor do most of his cohorts in the theoretical physics community. Thus, the “crisis in physics”.

More on LIGO

The latest from the New York Times:

https://www.nytimes.com/2017/10/16/science/ligo-neutron-stars-collision.html

I have to admit that I find this type of pseudo-scientific puff piece conducive to depression. Nothing discussed here involves anything having remotely to do with an actual observation. What is described instead is a mathematical fiction, with a connection to a dubious, claimed detection so slender and tenuous that it boggles the scientific imagination.

What we have here is a display, not of science, but of mathematicism, a disease of modern culture more reductive than reductionism. A large piece of 19th century machinery, polished and constructed to spit out reams of data, destined to be massaged by elegant algorithms of deep complexity, in silent computers running ceaselessly, laboring mightily to bring forth a shred of a signal so far beneath any possible sensitivity of the employed machinery as to be laughable.

From this shredded evidentiary bean of dubious provenance is spun a mighty mathematical beanstalk, a tale of fantastical proportions, with beasts of impossible construction swirling about in a dance of destruction.

This piece is nothing but a rewrite of a vapid press release, as substance-free as a DJT tweet touting his magnificent intelligence. Science weeps, bound in a dungeon of mathematical formalisms.

[Comment submitted to NYT on 10/16/17. It may or may not get published.]

The LIGO Fiasco Rolls On

It has become clear now, after three so-called gravitational wave detections, that the LIGO enterprise is a wheezing exercise in mathematical fantasizing. Its claims of a GW detection are without any real physical basis. LIGO is all math, all the time, with some updated 19th century technology appended to provide source data. That source data is then mathematically sculpted until a quantum-level signal detection can be claimed, one which just happens to more or less agree with one out of a set of 250,000 precalculated ‘signal templates’.
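For readers unfamiliar with the template-matching technique being criticized here, a toy sketch of matched filtering follows: a generic illustration, written by me, of cross-correlating noisy data against a small template bank and keeping the best match. It is not LIGO’s actual pipeline, which is far more elaborate.

```python
# Toy matched filter: slide each template across noisy data and keep the
# best-correlating template and offset. A generic illustration only.
import numpy as np

rng = np.random.default_rng(0)

def make_chirp(f0: float, n: int = 256) -> np.ndarray:
    """A crude 'chirp' template: sinusoid with linearly rising frequency."""
    t = np.linspace(0.0, 1.0, n)
    return np.sin(2 * np.pi * (f0 + 4.0 * t) * t)

templates = [make_chirp(f0) for f0 in (2.0, 4.0, 8.0)]   # tiny "template bank"

# Noisy data stream containing a buried copy of the f0=4.0 template.
data = rng.normal(0.0, 1.0, 2048)
data[700:700 + 256] += 0.5 * templates[1]

best = max(
    ((i, np.correlate(data, tpl, mode="valid")) for i, tpl in enumerate(templates)),
    key=lambda pair: pair[1].max(),
)
print(f"best template: {best[0]}, best offset: {best[1].argmax()}")
```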

Leave aside the question of whether it is physically possible for a large, classical-scale mechanical device to detect a quantum-scale (10⁻²⁰ m) displacement on an interferometer with two 4 km long arms. For now it is enough to point out that the more-or-less agreement between ‘signal’ and template is hardly a sufficient scientific basis for the subsequent elaborate and exquisitely precise description of a binary black hole merger that supposedly took place in a distant, unobservable past.
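For scale, here is a back-of-the-envelope conversion of the figures quoted above into a dimensionless strain (my arithmetic, using only the numbers in this post, not LIGO’s published sensitivity figures):

```python
# Back-of-the-envelope: displacement-to-strain ratio using the figures
# quoted in this post (10^-20 m displacement over a 4 km arm).
displacement_m = 1e-20
arm_length_m = 4_000.0

strain = displacement_m / arm_length_m
print(f"strain = {strain:.1e}")   # 2.5e-24, dimensionless
```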

Even allowing for the possibility of such a dubious physical detection, there is no scientific justification for then running an inferential chain of logic back up a baroque mathematical model, filigreed as it is with numerous unsubstantiated assumptions about the nature of physical reality, as if there were no other possible source for such a quantum-level ‘signal’. The claim is, of course, that all other possibilities have been excluded. On top of everything else we are thereby treated to a ludicrous mathematicist claim of quantum omniscience.

It is interesting to note that the problematic claim in the first GW detection paper (discussed here), of having received an intact signal from the entire black hole merger event, has been downplayed somewhat in the third paper. It is alluded to in a fashion that suggests a desire for the claim to be plausibly deniable as a mere misunderstanding at some future time when the issue is pressed.