
Adventures in Theoretical Physics II – Fun with General Relativity

Well here’s a cute little video that manages to do a good job of conveying just how daft and detached from reality theoretical physics has gotten over the last century:

The first 11 minutes or so are effectively a sales pitch for one of the structural elements of the Big Bang Model – Spacetime. The deal is, you’re supposed to believe that the force of gravity is not really there – nothing is holding you to the surface of the earth, rather the earth is accelerating upward and pushing against you.

And the reason this is happening, we are told, is that you are not following a curved path in Spacetime; according to the video, you are being knocked off that curved path by the earth, which is accelerating upwards into you, and that’s gravity, tada! How do we know this? Well that’s obvious: it’s in the math, and the math tells reality what’s going on, and if reality doesn’t like it, that’s too bad. So don’t go trusting your lying eyes, alright.

In addition to Spacetime, this fairy tale is predicated on a ridiculous over-extension of the Principle of Equivalence that Einstein used in developing General Relativity. Einstein was very clear that the POE applied only under the severely constrained circumstances of a thought experiment. His main purpose seems to have been to provide a physical interpretation for the observed equivalence of gravitational and inertial mass. Einstein presented the POE as informing his ideas about gravity.

The video ignores Einstein’s constraints and pretends the POE is fundamental to General Relativity, so it winds up insisting that things that are obviously not true in physical reality are, nonetheless, true simply because the math can be framed that way – your lying eyes be damned.

We are told that a man falling off a roof is in the exact same situation as an observer in a non-accelerating rocket ship far from any gravitating body. This claim is made even though it is obviously not true; the falling man will be injured, if not killed, when he hits the ground, whereas no such fate will befall the observer in the rocket ship.

So the idea is, until the falling man meets his unfortunate fate, the situation is the same and therefore both situations are the same, the different outcomes notwithstanding – because the math is the same. Observers free-falling in orbit won’t be able to tell they’re not in an inertial frame – unless they look out the window – so that’s just like being in an inertial frame too. Right, of course.

In a similar vein, the video insists that an observer in a rocket accelerating at 9.8 m/s^2 will not be able to tell the difference between that situation and standing on the surface of the earth. The presenter fails to mention, however, that this only holds true as long as the observer doesn’t look out the window, which would reveal that the rocket, and therefore the observer, are not at rest on the surface of a large gravitating body, and that the situation is therefore not comparable to standing at rest on the surface of the earth. Also, if any observer steps off the rocket, they will be left behind as the rocket accelerates away. But nevertheless, it’s all the same – as long as no one looks out the window, and maybe you remember that the earth is actually accelerating upwards under your feet, like the floor of the rocket. Sure, of course.

For the sake of introducing some sanity in this matter, here is Einstein on the POE. Note that the second paragraph completely contradicts the claims made in the video implying the equivalence of all inertial and non-inertial frames.

We must note carefully that the possibility of this mode of interpretation rests on the fundamental property of the gravitational field of giving all bodies the same acceleration, or, what comes to the same thing, on the law of the equality of inertial and gravitational mass…

Now we might easily suppose that the existence of a gravitational field is always only an apparent one. We might also think that, regardless of the kind of gravitational field which may be present, we could always choose another reference-body such that no gravitational field exists with reference to it. This is by no means true for all gravitational fields, but only for those of quite special form. It is, for instance, impossible to choose a body of reference such that, as judged from it, the gravitational field of the earth (in its entirety) vanishes.

Relativity: The Special and the General Theory, Albert Einstein, authorized translation by Robert W. Lawson, original version 1916, translated 1920, appendices 3 and 4 added 1920, appendix 5 added to English translation 1954

It is clear from this statement that the POE of Einstein’s thought experiment is the Galilean version, commonly referred to nowadays as the “Weak” POE. The so-called “Einsteinian” and “Strong” POEs of modern cosmology are post-Einstein formulations attributed initially to Robert Dicke, though there were doubtless others who perpetrated and embellished this nonsense. Neither extension of the POE has anything to do with the foundations of Einstein’s Relativity Theory. It is those mid-20th century extensions that are misleadingly presented in the video as fundamental features of General Relativity.
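
For concreteness, the Weak (Galilean) POE that Einstein appeals to above can be written in one line of ordinary Newtonian mechanics; this is just a standard illustration of the idea, not a quotation from Einstein:

\[ m_i\,a \;=\; \frac{G\,m_g\,M}{r^{2}} \quad\Longrightarrow\quad a \;=\; \frac{GM}{r^{2}}\cdot\frac{m_g}{m_i} \;=\; \frac{GM}{r^{2}} \quad (\text{when } m_g = m_i), \]

so all bodies fall with the same acceleration regardless of their composition – which is what Eötvös-type experiments actually test.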

The POE, in its current, extended usage, is mostly just a conjecture of mathematical convenience, allowing theorists to use Special Relativity math instead of the more difficult General Relativity formulations. It also results in the theoretical claim that the speed of light in a vacuum is a universal constant. That claim contradicts both GR, which predicts that the speed of light varies with position in a gravitational field, and observations, which confirm that prediction.

This unwarranted belief that the speed of light is a universal constant has also produced a cottage industry of theorists expounding a theory of undetected structures called Black Holes with the physically absurd properties of an event horizon and a singularity. No such structures exist. The relativistic slowing of light in a gravitational field precludes their existence. It does not preclude the existence of massive high-density objects.

Ok, let’s grant that this video presentation is of dubious scientific quality and does not, perhaps, represent the consensus view of the scientific community, particularly with regard to the so-called Principle of Equivalence – although, if not the consensus, the Strong POE certainly commands significant support among theoretical cosmologists. The usual suspects will whine, of course, that pop-science presentations like this video cannot be trusted.

That complaint is also lodged against anything written for a general audience, even when the author is a fully accredited scientist with a relevant FAS (full alphabet soup) after their name. If it’s written so non-experts can understand it, then it is, on some level, wrong.

The reason for this situation is straightforward: much of what theoretical physicists believe cannot be translated into clear, logical statements of scientific fact. What you get instead is confident handwaving consisting of metaphysical assertions that have no factual basis in empirical reality, and a lot of math. According to theorists this is because theoretical physics can only be properly understood by those steeped in years of study of the underlying mathematical esoterica that informs only the truly knowledgeable. To which the only proper retort is: math is not physics, and if your math cannot be translated into empirically verifiable physical terms, then your math is inadequate to the task of being a proper scientific model of physical reality.

The modern POE is just a conjecture of mathematical convenience, nothing more. Nonetheless, this modern POE permeates and perverts the scientific literature. Here is the Encyclopaedia Britannica entry for the POE:

In the Newtonian form it asserts, in effect, that, within a windowless laboratory freely falling in a uniform gravitational field, experimenters would be unaware that the laboratory is in a state of nonuniform motion. All dynamical experiments yield the same results as obtained in an inertial state of uniform motion unaffected by gravity. This was confirmed to a high degree of precision by an experiment conducted by the Hungarian physicist Roland Eötvös. In Einstein’s version, the principle asserts that in free-fall the effect of gravity is totally abolished in all possible experiments and general relativity reduces to special relativity, as in the inertial state.

Britannica, The Editors of Encyclopaedia. “Equivalence principle”. Encyclopedia Britannica, 31 Mar. 2019, https://www.britannica.com/science/equivalence-principle. Accessed 6 June 2021.

It should be noted that, according to the encyclopedia’s referenced article on Roland Eötvös, his experiment “… resulted in proof that inertial mass and gravitational mass are equivalent…“, which is to say, that it demonstrated the Weak POE only. It is also clear that the authors of this entry are confused about the distinctions between the three POEs. But what of that; it’s only an encyclopedia trying to make sense of the nonsensical world of the modern theoretical physicist, and modern theoretical physics is an unscientific mess.

Adventures in Theoretical Physics I – Entropy

Updated 3Jun21

Entropy is one of those slippery concepts floating around in theoretical physics that defies easy explication. The following quote is from a small subsection of an extensive Wikipedia entry on entropy:

The following is a list of additional definitions of entropy from a collection of textbooks:

-a measure of energy dispersal at a specific temperature.

-a measure of disorder in the universe or of the availability of the energy in a system to do work.

-a measure of a system’s thermal energy per unit temperature that is unavailable for doing useful work.

In Boltzmann’s definition, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium. Consistent with the Boltzmann definition, the second law of thermodynamics needs to be re-worded as such that entropy increases over time, though the underlying principle remains the same.

Note that this is a “list of additional definitions”; there are seemingly innumerable others peppered throughout the article. It would be hard to caricature the overall impression of incoherence, oozing like quicksand and sucking under any rational attempt to say concisely what, exactly, entropy is, beyond the usual handwaving about a vaguely specified measure of disorder.

The first sentence of the Wiki article states unequivocally that “Entropy is… a measurable physical property…”. That statement is false. In section 7.8 it is made clear that entropy cannot be measured directly as a property. Rather, discrete energy transfers are dropped into an equation that ostensibly “describes how entropy changes dS when a small amount of energy dQ is introduced into the system at a certain temperature T.” The only measurements made are of energy and temperature. Entropy is a made-up mathematical concoction; it is not a real property of physical reality.
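
To make the point concrete, here is a minimal sketch, in Python with made-up placeholder numbers, of what such an entropy “measurement” amounts to in practice: the quantities actually measured are heat increments and temperatures, and the entropy change is then computed from them via dS = dQ/T.

```python
# A minimal sketch (placeholder numbers) of a laboratory entropy "measurement":
# the measured quantities are heat increments dQ and temperatures T; the
# entropy change is computed as dS = dQ / T and summed over the increments.

heat_increments_J = [5.0, 5.0, 5.0, 5.0]           # measured energy added (joules)
temperatures_K = [300.0, 301.0, 302.0, 303.0]      # measured temperature at each step (kelvin)

delta_S = sum(dQ / T for dQ, T in zip(heat_increments_J, temperatures_K))
print(f"computed entropy change: {delta_S:.6f} J/K")
```

Nothing in that calculation is detected as “entropy”; the number is derived entirely from the energy and temperature readings.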

The reader will also be pleased to note that the same article informs us that “In 1865, Clausius named the concept of “the differential of a quantity which depends on the configuration of the system,” entropy… “.

The real problem with the concept of entropy is not that it may be useful in doing some thermodynamic calculations; clearly, it is of some mathematical value. What it does not have is any clear explanatory power or physical meaning, and, well, math just isn’t physics. Nonetheless, entropy has been elevated to a universal, physical law – the Second Law of Thermodynamics, where the situation doesn’t get any less opaque. In fact, it doesn’t get any more opaque than this gem:

Nevertheless, this principle of Planck is not actually Planck’s preferred statement of the second law, which is quoted above, in a previous sub-section of the present section of this present article, and relies on the concept of entropy.

Section 2.9 of the 2nd Law wiki entry above

Here is Planck’s quote from “…a previous sub-section of the present section of this present article…”:

Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged.

Of course, the only question that remains then is, what exactly is entropy?

The final word on the nature of entropy should be this comment by John von Neumann which appears in a sidebar of this section, wherein he advises Claude Shannon to call his information uncertainty concept entropy because “… In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.” Can’t argue with that logic. It appears that the definition of entropy is entropic. It is getting ever more disordered with the passage of time.

So what’s the problem? This is, after all, a relatively trivial matter compared to the Grand Delusions of Theoretical Physics, like Dark Matter, Dark Energy, Quarks & etc. Or is it? Well, actually the Entropy Paradigm represents a major blind spot in fundamental physics. Theoretical physics has this misbegotten, laboratory-based idea that all ordered systems are in some sense “running down”, or losing the ability to do work, or becoming more disordered or, or… something, called Entropy.

Entropy is all downhill, all the time. In the context of a designed laboratory experiment, the original, closed-system condition is some prearranged, ordered state that is allowed to “run down” or “become disordered” or whatever, over time, and that, more or less, is Entropy! Fine so far, but when you make Entropy a universal law – The Second Law of Thermodynamics – all is lost, or more exactly, half of reality is lost.

Why? Because there are no closed systems outside the laboratory. What you get when you ignore that fact is this kind of bloated nonsense:

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations – then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation – well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation. 

Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)

Entropy is correct, if all it boils down to is: ordered, closed systems will become disordered. But entropy doesn’t tell you how the systems came to be ordered in the first place. If you extrapolate from a closed-system experiment a Universal Law that says, or implies, that all systems run down, but you have no theory of how those systems came to be ordered in the first place, you have exactly half a picture of reality – because over here in the real world, ordered systems do arise from disorder.

It’s hard to do good physics if you have this half-blind view of reality. You start babbling about the Arrow of Time, the heat death of the Universe, not to mention whatever this is. You may have to invoke an unobservable Creation Miracle like the Big Bang to get the ball rolling.

Babble Update (3Jun21)

“If you want your clock to be more accurate, you’ve got to pay for it,” study co-author Natalia Ares, a physicist at the University of Oxford, told Live Science. “Every time we measure time, we are increasing the universe’s entropy.”

Livescience.com

Meanwhile, physical reality doesn’t work like that. Physical reality is locally cyclic – like life itself. There is, for instance, a considerable body of observational evidence suggesting that galaxies reproduce themselves. That body of evidence has been dismissed with spurious statistical arguments, and the well-established astronomer who made those observations, Halton Arp, was denied telescope time to continue making them. No one has since been permitted to pursue that line of observational research.

Meanwhile, there is no comparable body of observational evidence for the Big Bang model of galaxy formation, via accretion, from a primordial, post-BB cloud. Like the rest of the Big Bang’s defining features, galaxy formation by accretion is not an observed phenomenon in the Cosmos.

So, an over-generalization (to the point of incoherence) of some simplistic closed-system laboratory experiments done in the 18th and 19th centuries has come to aid and abet the absurd Big Bang model of the Cosmos. Modern theoretical physics is an incoherent, unscientific mess.

Starts With a Bang, Ends With Nonsense

Ethan Siegel’s columns for Forbes magazine are tours de force of the illogical circular reasoning that passes for scientific discourse in the modern theoretical physics community. They are best avoided out of respect for the good name of science. It is, however, sometimes instructive, not to mention eye-opening, to slog through one, in order to appreciate just how self-deluded the pseudo-science known as modern cosmology has become.

Case in point: this article from Dec 22, 2020. Therein, Siegel is at great pains to demonstrate that dark matter and dark energy are the only scientific way of accounting for observed cosmological phenomena in the context of the Standard Model of Cosmology, commonly known as the Big Bang model. This is true of course; the BB model can only account for cosmological observations if it is allowed to invoke the existence of dark matter and dark energy – so therefore dark matter and dark energy must exist, despite the absence of any empirical evidence for their existence. Reasoning can’t get any more circular than that.

In some of Siegel’s arguments there is almost no logic at all, just a recitation of certain facts followed by an assertion, unsupported by the facts presented, that those factual observations were predictions rather than post-dictions of the model (emphasis added):

What’s remarkable is that, because the laws of physics that govern particles (and nuclear fusion) are so well-understood, we can compute exactly — assuming the Universe was once hotter, denser, and expanded and cooled from that state — what the different ratios of these different light elements ought to be. We’ve even studied the reactions in the lab directly, and things behave precisely as our theory predicts. The only factor we vary is the photon-to-baryon ratio, which tells us how many cosmic photons (particles of light) there are for every proton or neutron (the baryons) in our Universe.

We’ve now measured it all. Satellites like COBE, WMAP, and Planck have measured how many photons there are in the Universe: 411 per cubic centimeter of space. Intervening clouds of gas that appear between us and a distant light source, like a luminous galaxy or quasar, will absorb a fraction of the light as it travels through the Universe, teaching us the abundances of these elements and isotopes directly. When we add it all up, only ~5% of the total energy in the Universe can be normal matter: no more and no less.

What a mess! The central assumption (emphasized above) is simply the BB model, which is itself based on two foundational assumptions:

  1. The cosmos is a unitary, coherent, simultaneous entity – a “Universe”.
  2. The cosmological redshift is caused by some form of recessional velocity.

Neither of those assumptions has any empirical basis, and both date to the early 20th century, when the scale of the cosmos was barely grasped. They are, however, sacrosanct among cosmologists. Those foundational assumptions are, by some common but unstated agreement, true by definition in the cosmological community. They are more than just assumptions, they are beliefs, and they are almost certainly wrong. But, what of that… Assuming our assumptions are correct, we can use math to massage our model into agreement with observations – just like good old Ptolemy did way back around the dawn of the previous Dark Ages.

More recently Siegel has posted a similarly obtuse argument concerning the supposed successes of the Big Bang model and its unchallengeable status. It features the same kind of circular reasoning as above: Our model is correct as long as our assumptions and postulates are correct, therefore the things we assume and postulate are correct, because the model is correct. The result of this transparently illogical syllogism, which underlies all of the author’s triumphalist claims for the Big Bang model, is the nonsensical narrative that winds from inexplicable Big Bang to the now 95% invisible “Universe”.

The reason there are no significant challenges to the Big Bang orthodoxy from within the scientific community is that the community members have been trained to accept the 100-year-old premises of the model. So, again, the author is correct to this extent: if you accept the model’s premises you are going to wind up with some ludicrous depiction of the Cosmos like LCDM. However, the simplistic assumptions of 100 years ago have proven a disastrous foundation for modern cosmology. The model depicts an inane, unscientific “Universe” that does not at all resemble, in its defining features, the Cosmos we actually observe.

In the Cosmos we observe there is no Big Bang event, no inflation, no expanding spacetime, no dark matter, no dark energy. All of those things are essential elements of LCDM. None of those things are present in the Cosmos we observe. Physical reality does not look like the LCDM “Universe”. People like the author, who, nonetheless, believe the absurd LCDM story, do so because they are, functionally, mathematicists.

Mathematicism is an ancient, half-baked philosophy whose proponents believe that mathematics underlies and determines the nature of physical reality. So, dark matter (& etc.) cannot be empirically demonstrated to exist? That is of no concern to a mathematicist; it all has to be there because a peculiar model (LCDM) requires it in order to reconcile itself with observations, and therefore dark matter (& etc.) have to exist. Pay no attention to the testimony of your lyin’ eyes (and telescopes)!

Mathematicism is junk philosophy. It is not science. How we got to this state of affairs is going to provide plenty of work for future historians of science. For now though, there is much to do just to pry the cold, dead hands of mathematicism from the throat of theoretical physics (which is now barely alive in any scientific sense).

Science rests on observation (detection) and measurement, not the febrile imaginings of reality-challenged, nerd-mathematicists with a cult-model to defend. What passes for science among the theoretical physics community of the academy these days is not science in any meaningful sense. The Big Bang model is Exhibit A for that proposition.

(Parts of this post were taken from a comment I made on Siegel’s Medium article.)

Junk Science in The New Yorker

So, an interesting and even-handed account of the recent acknowledgement by the US government of ongoing investigations into UFO phenomena contains this whopper:

Interstellar travel by living beings still seems like a wildly remote possibility, but physicists have known since the early nineteen-nineties that faster-than-light travel is possible in theory, and new research has brought this marginally closer to being achievable in practice.

This is not an accurate scientific assessment of the possibility of faster-than-light travel. What it appears to be referring to is the mathematicist fantasy of an Alcubierre warp-drive. You can find a succinct presentation of this “theory” here. It is a short and not technically difficult presentation which seems to reach a straightforward conclusion:

At present, such a thing just doesn’t seem to be entirely within the realm of possibility. And attempts to prove otherwise remain unsuccessful or inconclusive.

Things would have been fine if they left it at that, but the nerd community, having had their minds warped by too many repeated viewings of Star Wars-Trek fantasy-fiction movies, had to add a more hopeful note:

But as history has taught us, what is considered to be impossible changes over time. Someday, who knows what we might be able to accomplish?

Unfortunately history has also taught us that the human imagination is not a reliable guide to the nature of physical reality. The Alcubierre drive rests on the scientifically baseless presumption that spacetime is a causally interacting element of physical reality. There is no empirical evidence for such a presumption.

There is, however, a significant contingent of Academy Certified Scientists who believe in the existence of a physical spacetime. This belief is entirely based on the assumption, by the mid-20th century neo-Relativists (Misner, Thorne, Wheeler, et al), that such a spacetime exists. However, real science cannot be based on what MTW (or you or I) believe. Real science rests on empirical evidence, not on, in this case, the spurious reification of relational concepts.

Like Star Wars itself, the Alcubierre drive is a pseudo-scientific fantasy that amuses and confuses a certain reality-challenged subset of humanity, many of whom, unfortunately, are puking up this indigestible hairball of irrationality in the halls of the Scientific Academy. The New Yorker is not to blame for this mess, but thoughtlessly repeating such unscientific nonsense, as if it had a scientific basis, is not helpful.

The Unprincipled Principle & The Persistence of Mistakes

Over at Triton Station, Stacy McGaugh has a series of posts on the recent, and ongoing, successful predictions of MOND at the galactic scale, beginning with the December 31, 2020 post, 25 Years A Heretic, which provides an excellent historical overview of the situation. It is thankfully, solidly couched in empirical observations.

The last two posts in the Comments section of the linked post are by Apass and Phillip Helbig. I had intended to reply to them in a timely fashion but by the time I got around to it the Comment section was closed. Following are the replies I intended to post:

@Apass,

… through what mechanism would this new disk model give rise to the EFE felt by a satellite galaxy isolated by tens of thousands of light years where you cannot invoke anymore viscosity-like effects?

Given that the mechanism through which gravity causes its effects is generally unknown, that’s an odd question. If a theoretical model of disk dynamics limning the mass distribution replicated MOND’s success in describing the rotational curves, I would be surprised if it didn’t also account for the EFE. After all, satellite galaxies are not, by definition, gravitationally isolated from their host galaxy.

The fact that DM didn’t predict the EFE is probably a function of the fact that DM doesn’t successfully predict anything except itself. It is always deployed as a patch for the failures that arise when the standard gravitational model of the solar system is applied at significantly larger scales. If the EFE holds up, which I expect it will, there will likely be a concerted effort to post-dict it with DM. This would present an existential problem to the SMoC proponents, however, as the EFE overturns the unprincipled Strong Equivalence Principle.

The SEP is not an expression of empirical fact, the way the weak principle of equivalence is (the equality of inertial and gravitational mass); it is little more than an idle conjecture of mathematical convenience, adopted mid-twentieth century, right at the onset of the post-Einsteinian revision of General Relativity.

The SEP is not foundational to the SMoC, but it is well integrated into the mindset of its practitioners and has consequences for the theoretical “study” of theoretical entities like black holes and multibody systems like galaxy clusters. Dispensing with the SEP would, for one thing, mean that cosmologists could now stop believing, contrary to the empirical evidence and GR prediction, that the speed of light in a vacuum is a universal constant.

@Phillip Helbig

Note that in the standard cosmological model, only 1920s cosmology is used. Despite the huge amount of high-quality data now available, classical cosmology needs nothing beyond what was known about relativistic cosmology in the 1920s.

Well yes, and that’s why we need a 21st century cosmology with, shall we say, more realistic underpinnings. I really don’t understand why anyone would think clinging to the naive and dubious assumptions of 100 years ago is a scientifically sound approach, especially given the manifest failures of the current model. I understand that the SMoC is graded on a curve, with negative empirical results deeply discounted in favor of theoretical requirements – but still, the model’s defining structural elements all lie totally in the realm of the empirically undetected.

The fact that there appears to be absolutely no interest in revisiting the standard model’s foundational assumptions, even as an exercise in intellectual curiosity, says nothing good about the socio-economic incentives of the modern scientific academy. The Big Bang model is the best possible you say? I would agree, but only with the caveat that it is only so in the context of the model’s essentially pre-modern assumptions. As soon as those are jettisoned, the big bang creation myth will almost certainly become nothing more than a historical curiosity.

Nobel Prize in Physics Awarded to 3 Scientists for Work on Fairy Tale

It’s sadly, depressingly true. Some mathematicist nonsense that attends modern cosmological theory has been awarded science’s grandest prize:


This award has been given to mathematical work on a mathematical theory that has nothing to do with actual physics. Black holes, the ostensible subject matter of the work, are fictitious entities that only arise in mathematical models (such as the Schwarzschild solution to the field equations of General Relativity) that ignore one of the fundamental consequences of General Relativity – that the speed of light varies with position in a gravitational well.

Specifically, the steeper the gravitational well, the slower the speed of light. It follows directly from this, an observed fact, that gravitational collapse is a self-limiting event. In the Schwarzschild and similar solutions, the speed of light is held constant, which results in a theoretical collapsed state (a black hole) that is physically nonsensical, containing as it does a singularity (a mass of zero volume and therefore infinite density).
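
For reference, a standard weak-field expression for the position dependence referred to here gives the coordinate speed of light at gravitational potential Φ (Φ is negative inside a well) as approximately

\[ c(\Phi) \;\approx\; c_0\left(1 + \frac{2\Phi}{c_0^{2}}\right), \]

so the deeper the well, the slower the coordinate speed of light as reckoned by a distant observer – the effect measured directly in Shapiro time-delay experiments.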

Modern theoretical cosmology is a mess, in which Einstein’s General Relativity has been dumbed down in service of mathematical convenience. The result has been dumbed-down mathematical models (the Big Bang, black holes) that present physically nonsensical accounts of observed reality. These models do not remotely resemble the physical reality we can actually observe and measure. They are mathematical fairy tales.

The Nobel Prize in Physics was just awarded for work on a dumbed-down mathematical model that has nothing to do with real physics.

A Brief History Of The Universe According To ΛCDM

Once upon a time, 13.8 billion years ago, the entirety of the ΛCDM model Universe was compressed into a volume smaller than a gnat’s ass (a statement of universal simultaneity forbidden by Relativity Theory).

The original condition of this model Universe is inexplicable in terms of known physics. Regardless, the model Universe erupted from its inexplicable condition, for an unknown reason, in a somewhat explicable cosmic flatulence event.

With the aid of an inflaton field the model Universe then expanded into today’s model Universe, which is 95% composed of things that are not observed in the Cosmos (Dark Matter and Dark Energy). The remaining 5% of the model Universe consists of the things that are observed in the Cosmos. 

This model should not be classified as science. It constitutes nothing more than the feverish imaginings of the mathematicist mindset that has utterly debased the field of cosmological studies.

A General Critique of The Standard Model of Cosmology (ΛCDM)

Foundational Assumptions

  1. The Cosmos is a singular, unified, simultaneous entity (a Universe) that can be modeled as such using the field equations of General Relativity. This assumption invokes the existence of a universal frame – the FLRW metric, which is antithetical to the foundational premise of Relativity Theory, that a universal frame does not exist. Solving the equations of GR for a universal frame is an oxymoronic exercise. The existence of a universal frame is impossible to empirically verify.
  2. The cause of the observed cosmological redshift is some form of recessional velocity. A recessional velocity produces a redshift, but not all redshifts are caused by recessional velocities. There is no empirical evidence supporting this assumption.

Consequent Supporting Hypotheses

  1. The model Universe had a singular origin – the Big Bang event. The Big Bang is not an observed component of the Cosmos. It is only a model dependent inference.
  2. Subsequent to the Big Bang event, the model Universe underwent an Inflation event driven by an inflaton field. Neither the Inflation event nor the inflaton field are observed components of the Cosmos. They constitute a model dependent hypothesis necessary to reconcile the model with observations.
  3. The Universe is expanding because spacetime is expanding and driving the galaxies apart. There is no evidence for the existence of a substantival spacetime, that is, for the existence of a spacetime that can interact causally with matter and energy. A substantival spacetime, one that can drive the galaxies apart, is not an observed component of the Cosmos.
  4. Dark Matter comprises ~85% of the total matter content of the model Universe. The only salient characteristic of Dark Matter is that it reconciles the standard model with observations. Dark Matter is not an observed component of the Cosmos.
  5. Dark Energy comprises ~69% of the matter-energy content of the model Universe. The only salient characteristic of Dark Energy is that it reconciles the standard model with observations. Dark Energy is not an observed component of the Cosmos.

Space Cadet

I have to give Ethan Siegel credit for having fully absorbed the catechism of orthodox cosmology in all its illogical, not to mention unscientific, glory. Here, for example, is a recent Forbes post entitled How Does The Fabric Of Spacetime Expand Faster Than The Speed Of Light? It provides a good illustration of Siegel’s inability to distinguish between the factual and the unfounded theoretical aspects of the standard model of cosmology. But then, that’s the nature of the orthodox belief system known, somewhat sarcastically, as the Big Bang.

For instance, the article mentions the maximum light speed in a vacuum but fails to note that the maximum can only be achieved in an inertial frame. Inertial frames are only approximated in physical reality. That little qualification completely unravels this bit of received wisdom:

But light didn’t work that way; it always moves at the same speed through the vacuum of empty space, from every perspective imaginable.

That, in fact, represents a rather monumental failure of the modern cosmological imagination, forsaking as it does, both Einstein’s understanding of Relativity Theory and the observational evidence. Not to mention the fact that there is no such thing as “empty space“. But then, facts are optional in the post-empirical realm of theoretical cosmology.

Energy Loss And The Cosmological Redshift

Modern cosmology has an interesting approach to the question of where the energy lost by light goes as that light becomes redshifted over large cosmological distances. There seem to be several, not entirely coherent, ways of approaching the question. Probably the most conventional approach is this, where Ethan Siegel seems to say that a “universal” expansion is causing the energy loss, but at the same time the energy lost is driving the expansion of the “Universe”:

The photons have an energy, given by a wavelength, and as the Universe expands, that photon wavelength gets stretched. Sure, the photons are losing energy, but there is work being done on the Universe itself by everything with an outward, positive pressure inside of it!

It is all well and good to claim that, mathematically, either view is correct; however, physically speaking, either claim is nonsense. One view posits a bounded, constrained “Universe” against which the contents are doing work by pushing the boundary outward. There is no evidence for this peculiar viewpoint other than that you can arrive at it by assuming, without question, certain things the standard model assumes – without question. It is a mathematicist argument with no basis in the physics of observed phenomena.

The other possibility from the mathematicist’s perspective, is that an otherwise imperceptible, but somehow substantive, spacetime is expanding and so stretching the wavelength of the photon. Again, this has no basis in observed phenomena; it is just an appeal to a purely theoretical construct, one which only serves to obscure whatever physics is actually taking place.

So, how can you account for the energy loss of a cosmologically redshifted photon without veering off into the metaphysical nerd-land presented by mathematicism? Simply put, you have to model light emitted from distant sources (galaxies on the cosmological scale) as consisting of expanding spherical wavefronts. Those wavefronts can be thought of as hyperspheres which are related to the mathematical-geometrical concept of a light cone.

The expanding spherical wavefront view of cosmological radiation rests only on the realistic assumption that galaxies radiate omnidirectionally. This is, in terms of modern cosmology, a simplifying assumption, one which eliminates the need for an expanding “Universe”. It is this expanding spherical wavefront phenomenon that has been misinterpreted in the standard model, to imply a “universal” expansion. The only things undergoing expansion in the Cosmos are the expanding spherical wavefronts of light emitted by galaxies.

The theoretical light cone concept closely parallels the physical structure of an expanding spherical wavefront considered 4-dimensionally. In the theory of light cones, the surface of the light cone is the aggregate of all the hyperspheres, which are themselves sections of the light cone. All of the points on the light cone have no spatial or temporal separation. The 4-dimensional surface of the light cone constitutes a simultaneity – in 4 dimensions.
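
In the standard formalism, the sense in which this holds is presumably that every event on the light cone is separated from the emission event by a null (zero) spacetime interval. For a flash emitted at the origin:

\[ ds^{2} \;=\; -\,c^{2}t^{2} + x^{2} + y^{2} + z^{2} \;=\; 0 \quad\Longrightarrow\quad x^{2} + y^{2} + z^{2} \;=\; c^{2}t^{2}, \]

so each constant-t section of the cone is a sphere of radius ct – the expanding spherical wavefront.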

Deploying this mathematical model in the context of the observed expanding wavefronts suggests that when a 3-dimensional observer interacts with the light cone, at a specific 3-spatial + 1-temporal, dimensional location, the observer is observing the simultaneous state of the entire expanding wavefront, which, in the model, is the state of the hypersphere of the light cone that intersects with the observer’s 3+1-dimensional “now”.

A 3D observer cannot observe the entirety of the 4D light cone, only the portion of the hypersphere (spherical wavefront) with which it directly interacts. However, since all the points on the hypersphere are identical, information about the state of the hypersphere is available at the point of observation.

At large cosmological distances that wavefront will be observed to be redshifted, reflecting a net energy loss to the spherical wavefront. This energy loss is caused by the wavefront’s encounter with and partial absorption by, all the intervening galaxies (and other matter) encountered over the course of its expansion.

This energy loss can be crudely estimated using the standard General Relativity equation for gravitational redshifting. That, in turn, suggests the possibility that all observed gravitational effects are a consequence of the interaction between 3-dimensional matter and 4-dimensional electromagnetic radiation.
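
For reference, the standard gravitational redshift expression for light climbing out of the potential well of a mass M from radius r is

\[ 1 + z \;=\; \left(1 - \frac{2GM}{r\,c^{2}}\right)^{-1/2} \;\approx\; 1 + \frac{GM}{r\,c^{2}} \quad (\text{weak field}); \]

applying it iteratively to an expanding wavefront and the mass it encloses, as described above, is admittedly a rough, order-of-magnitude exercise rather than textbook cosmology.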

This post is based on a recent Quora answer.

Great Snipe Hunt Continues

You can always count on Dennis Overbye of the New York Times to run full-tilt with any Big Science (BS) press release that lands in his inbox. The latest example concerns yet another in the ongoing stream of “possible detections” in the unending snipe hunt semi-officially known as the Search for Dark Matter.

Dark matter has been experimentally sought for roughly 40 years; no convincing evidence for its existence has been uncovered in all that time. This latest bit of effusive hype (that emanates, according to Overbye from, …a multinational team of 163 scientists from 28 institutions and 11 countries…), continues the now long string of dark matter non-detections. Supposedly, the experiment under consideration was seeking a certain type of dark matter called WIMPs. The experiment failed in that regard, but hope springs eternal in the Dark Matter community.

In the elaborate and, needless to say, expensive detector, an excess of “events” was observed, which could be, it is now claimed, evidence of “axions“, or maybe another hypothetical, a “neutrino magnetic moment”, or, sadly but more prosaically, the presence, in undetectable amounts, of tritium, a well-known (and observed) isotope of hydrogen.

It should be noted that according to Overbye:

Simulations and calculations suggested that random events should have produced about 232 such recoils over the course of a year.

But from February 2017 to February 2018, the detector recorded 285, an excess of 53 recoils.

This over-represents the excess. The original paper gives the expectation value as 232 +/- 15, meaning the excess could be as few as 38 events. The 285 total does not constitute direct evidence of anything but the “events” themselves. They constitute the entire data set from a year of observations on a one-off, purpose-built system that has failed in its intended purpose.

So we have a paucity of irreplicable evidence being repurposed to justify some spare theoretical fantasies that just happen to be lying around unused in the back corner of some dusty theoretical broom closet. On the other hand, “…tritium contamination will just be one more detail that has to be considered or calibrated in future detectors“. Another day, another retrenchment.

The overall effort here is of a piece with the Higgs boson “detection” at the Large Hadron Collider and the gravitational wave “detections” at LIGO. It is nothing but an exercise in data manufacturing and manipulation as substitute for empirical science. It is the essence of BS, and this science by committee approach continues to make a mess of modern physics.

As to the perpetual snipe hunt, there is one clear message being delivered to the BS community by physical reality. It was as true 40 years ago as it is today: That Dark Matter shit ain’t here! When this simple fact is going to penetrate the thick wall of mathematicism shielding theoretical physicists from reality, however, is a question that only future historians will be able to answer.

The Big Bang Hustle

(Correction appended 2020-06-21)

Comes before us then, the cosmologist duo of Luke A. Barnes and Geraint F. Lewis, a couple of Aussies, with a Universe they want to sell you. The pitch, as presented in this promo for their new book, is that the Big Bang explains certain observations that cannot be explained by any competing model, and that any competing model must explain them. The argument rests on three points that aren’t so much about observations as, in two cases, superficial claims to explain things that are essentially self-explanatory in terms of known physics, without invoking the big bang. The third point rests on the unjustified contention that the big bang interpretation is the only acceptable, existing explanation for the cosmological redshift.

The three points discussed in the video and their fundamental weaknesses are:

Olbers’ Paradox – Olbers’ paradox is inherently resolved by the cosmological redshift, and this resolution does not depend on the Big Bang assumption that the redshift is a consequence of some form of recessional velocity. The cosmological redshift exists, and if the cosmos is of sufficient size there will be a distance beyond which light from a galaxy is redshifted to a null energy state or, more likely, will be reduced to a minimum state where it will pool with all other such distant sources to form a microwave “background” from the point of view of a local observer.
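
The energy side of that argument is just the statement that a photon’s observed energy falls with redshift,

\[ E_{\mathrm{obs}} \;=\; \frac{E_{\mathrm{emit}}}{1+z} \;=\; \frac{h\,\nu_{\mathrm{emit}}}{1+z}, \]

so for sufficiently large z the contribution of any individual distant galaxy to the night sky is driven toward zero, whatever the physical cause of the redshift is taken to be.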

The Cosmological Redshift – That the redshift is caused by some form of recessional velocity, from which cosmologists reason backwards to the Big Bang event, is simply an assumption of the standard model. If the assumption is rather that what expands are the spherical wavefronts of light being emitted by galaxies, you get a picture that is consistent with observations and which can be treated with a standard General Relativity formalism to yield a cosmological redshift-distance relationship – no big bang, with its ludicrously inexplicable original condition, required.

Nucleosynthesis – The cosmic abundances of the elements are what they are. BB theorists have simply imported nuclear theory and fit it to their model. Model fitting is a technique as old as Ptolemy. As the linked article makes clear there are many possible pathways to nucleosynthesis. The BB version also doesn’t even work that well.

Don’t know much about dark matter and dark energy… Barnes and Lewis kind of joke about this, which is appropriate I guess, since according to the standard model, 95% of the model’s Universe is composed of this unobserved and, by their account, poorly understood stuff. Which brings us to one of the central conceits of the video and, presumably, the book it is promoting, which is that the standard model “explains” satisfactorily, all of our cosmological observations.

The problem with the standard model’s “explanations” for observed phenomena is precisely that they are not scientifically satisfactory. They are not satisfactory because “explanations” such as dark matter and dark energy are not empirically resolvable. Dark matter and dark energy are necessary to the standard model in order to fit the model to observations. Physical reality, however, has no use for either, inasmuch as no evidence for their physical existence can be found beyond this urgent need to somehow fit the standard model to observations.

By invoking, as necessary components of physical reality, entities for which there is no observational evidence, modern cosmology has descended into a pre-Enlightenment, medieval mind-set. Dark matter and dark energy are just the modern, secular version of angels and devils dancing on the head of a pin. Their scientific, “explanatory” power is nil.

The video also features a scientific error so egregious as to be almost breathtaking. At the 5:50 minute mark, in an attempt to justify their claim that only the expanding universe interpretation of redshift accounts for Olbers’ Paradox, Barnes makes the completely fallacious argument that as a luminous object becomes more distant, its apparent size diminishes but its apparent brightness does not. That is simply wrong.

Even the most perfunctory treatment of light theory would include a discussion of the luminosity-distance relationship. That someone with a PhD (I presume in science) would choose to ignore basic scientific knowledge in order to advance a spurious claim about the explanatory power of the standard model says all you need to know about the state of modern cosmology. It’s a mess.

Correction (2020-06-21): In a conversation with Louis Marmet over at A Cosmology Group, he pointed out that, technically, Barnes was correct; he was speaking of surface brightness, which does not change with distance for an extended object. Stars, however, are not resolvable as extended objects by the human eye, only by the largest telescopes. The apparent magnitude of point-like sources such as stars varies according to the luminosity-distance relationship. Barnes’ argument remains false – on its own terms.
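
For the record, the two quantities being conflated here behave quite differently with distance (in plain, static Euclidean space):

\[ F \;=\; \frac{L}{4\pi d^{2}} \quad (\text{point-source flux falls as } 1/d^{2}), \qquad \Sigma \;=\; \frac{F}{\Omega} \;\approx\; \frac{L}{4\pi A} \quad (\Omega \approx A/d^{2},\ \text{surface brightness independent of } d), \]

which is why the distinction between point-like stars and resolved, extended objects matters for the Olbers argument.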

Forbes’ Physics Follies & The Broken Clock Syndrome

Sure enough, it had to happen. Ethan Siegel finally wrote a column for Forbes that isn’t just a thoughtless regurgitation of, and apologia for, the inanities of modern theoretical physics. In fact, it’s a column I can wholeheartedly endorse. It should be required reading for all physics students. A copy should be handed out with every science diploma. Here is the key takeaway:

Mathematics wasn’t at the root of the physical laws governing nature; it was a tool that described how the physical laws of nature manifested themselves. The key advance that happened is that science needed to be based in observables and measurables, and that any theory needed to confront itself with those notions. Without it, progress would be impossible.

Those are fine scientific sentiments, indeed! Unfortunately, there is no evidence of these noble precepts being applied in any of Siegel’s now numerous scientific writings for Forbes, or at least not in any I have read. There we find only the unthinking mathematicism that has turned modern theoretical physics into a caricature of bad science.

There is no more reliable purveyor of the modern scientific orthodoxy than Ethan Siegel. His Forbes column, Starts With A Bang, with the exception noted above, relentlessly flogs theoretical assertions that flagrantly violate the principles quoted above. Even in cases where he gets the orthodoxy completely wrong, he and his editors simply plow ahead, barely acknowledging an error.

So, you can still find this piece online despite the fact that the grandiose claims of the title (Scientists Discover Space’s Largest Intergalactic Bridge, Solving A Huge Dark Matter Puzzle), and of the concluding paragraphs are completely wrong. Here is the triumphal conclusion:

…If this same type of structure that exists between Abell 0399 and Abell 0401 also exists between other colliding clusters, it could solve this minor anomaly of the Bullet cluster, leaving dark matter as the sole unchallenged explanation for the displacement of gravitational effects from the presence of normal matter.

It’s always an enormous step forward when we can identify a new phenomenon. But by combining theory, simulations, and the observations of other colliding galaxy clusters, we can push the needle forward when it comes to understanding our Universe as a whole. It’s another spectacular victory for dark matter, and another mystery of the Universe that might finally be solved by modern astrophysics. What a time to be alive.

So what’s wrong? Well immediately following the last paragraph is this subsequently added correction:

Correction: after a Twitter exchange with one of the study’s scientists, the author regrets to inform the reader that the acceleration imparted by the magnetic fields to the electrons along this intergalactic bridge is likely unrelated to the velocity anomaly of the Bullet cluster. Although both may be explained by hydrodynamic effects, the effects that cause this radio emission and the acceleration of electrons are unrelated to the measured high velocity of the Bullet cluster’s collisional elements and X-ray gas. Ethan Siegel regrets the error.

Oh. So this correction completely negates the claim that the observations as described, of the colliding galaxy clusters Abell 399 and Abell 401, somehow clear up a known problem that the Dark Matter model has with another pair of colliding galaxies known as the Bullet Cluster.

The proponents of Dark Matter like to cite the Bullet Cluster as strong evidence for DM, but their model cannot account for the high collisional velocity of the Bullet Cluster’s component galaxies. It was this problem that Siegel incorrectly interpreted the Abell cluster observations to have solved. So this was just another strained attempt to justify the failed Dark Matter hypothesis (by inference only, of course) and even by the low-bar of the modern scientific academy, it was an abject failure.

Which brings us to a more recent piece featuring another central conceit of Big Bang Cosmology that, like Dark Matter, has no empirical evidence to support it. This is the belief that there exists a thing called “space” that physically interacts with the matter and energy in the cosmos. There is absolutely no empirical evidence to support this belief. Despite this absence of evidence, the belief in a substantival “space” is widely held among modern cosmologists.

It is a characteristic of modern theoretical physics, that a lack of empirical evidence for its existence, does not and cannot, diminish the claim to existence for an entity necessary to reconcile either one of the standard models with observed reality. In the modern scientific paradigm, a theoretical model is the determinant of physical reality.

The entire premise of this article then is, that something which cannot be demonstrated to exist must, nonetheless, have a specific physical characteristic in order for the modern conception of theoretical physics to be consistent with itself. Non-existent space must be continuous not discrete because otherwise modern theoretical physics doesn’t make sense.

But this is nothing more than an old whine encoded in a new frequency. It is not news that the modern conception of Relativity Theory is inconsistent with Quantum Theory. Modern theoretical physics, generally speaking, is inconsistent, with both empirical reality, and with itself. Both standard models make assertions about physical reality that cannot be empirically demonstrated to be physically meaningful.

The only evidence on offer for all the empirically baseless assertions of the two standard models is that they are necessary to make the models work. Once the models have been ad hoc corrected for any discrepancies with observed reality, the “success” of the models is touted as proof of their description of physical reality.

There is much assertive hand-waving going on in this article, as is typical of Siegel’s work. There is one particular assertion I’d like to single out for close examination:

In Einstein’s relativity, an observer who’s moving with respect to another observer will appear to have their lengths compressed and will appear to have their clocks run slow.

At first glance this seems a reasonably accurate statement, but it is, in fact, a false assertion. “Einstein’s relativity” encompasses both the Special and General theories. The statement as presented, in which a moving observer “appear[s] to have their lengths compressed and… their clocks run slow”, applies only to the Special Relativity case involving two inertial frames. It absolutely does not apply to those conditions where General Relativity applies, i.e. non-inertial frames (see this previous post).

In the SR case, two inertial frames are moving with a constant velocity with respect to each other and each observer sees the other’s lengths compressed, and their clocks slowed. Neither observer actually has lengths compressed or clocks slowed; they only appear that way to the remote observer.

Under GR conditions, where a change of reference frame has taken place (via acceleration), there are physical consequences for the observer in the frame that has accelerated. In the accelerated frame, lengths are contracted in the direction of the acceleration, and clocks do run slower. There are physical consequences to changing frames.
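
The textbook expressions make the asymmetry explicit. Between two inertial frames (SR) the effect is reciprocal and observer-dependent, whereas for a clock sitting deep in a gravitational well (GR) the slowing is one-sided:

\[ \Delta t_{\mathrm{observed}} \;=\; \frac{\Delta\tau_{\mathrm{moving\ clock}}}{\sqrt{1 - v^{2}/c^{2}}} \quad (\text{each inertial observer sees the other's clock run slow}), \]

\[ d\tau \;=\; \sqrt{1 - \frac{2GM}{r\,c^{2}}}\; dt \quad (\text{a clock deeper in the well accumulates less proper time than a distant one}). \]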

Modern theorists like Siegel are, however, inclined to believe that the SR and GR cases can be conflated, whenever it is convenient to do so, based only on an erroneous and logically unjustified over-extension of the Principle of Equivalence. On the basis of empirical observations and sound theoretical considerations, however, it is possible to say, unequivocally, that they are wrong, both theoretically and empirically. And you can’t be any more wrong than that.

There is only one way out of this theoretical mess and that is to apply the principles set forth in the No, The Universe Is Not Purely Mathematical In Nature column to the two standard models. On the evidence, unfortunately, Professor Siegel does not practice what he preaches, nor do most of his cohorts in the theoretical physics community. Thus, the “crisis in physics”.

On the Redshift-Distance Relationship

The quote below is from a comment by @Apass on Stacy McGaugh’s blog, Triton Station. Stacy suggested we continue the conversation elsewhere. The quote comes from the comment section on this blogpost. The complete thread to this point can be found there. My response follows the quote.

@budrap …
Friedmann is irrelevant for the discussion we have. It is true that Friedmann was the first to derive the solution for the expanding Universe, but it was Lemaitre who proposed what he later called the “primeval atom” – i.e. the first idea / model for BB.
And let’s not confuse the model’s assumptions – he assumed that the Universe is expanding (after all, all galaxies seem to recede), but this single assumption does not constrain how the galaxies at distance should move away from us. They might be receding at a slower pace, they might be receding at a constant speed, or they might even not be receding at all at very great distances. At the time when the observations that showed the nebulae redshifts were made, there was no identified correlation between redshift and distance, so all those possibilities were open. Only when GR is also added to the model can the correlation (as later observed by Hubble) be derived – and he did so, before Hubble.
To me, that counts as a prediction confirmed later by empirical observations.
As for Hubble – again, it’s irrelevant whether or not he accepted the interpretation of the redshifts. That was his opinion, to which he was fully entitled.
As for the bias – yes, any scientific model should be based solidly on verifiable statements, but I’m not that easily going to throw the baby out with the bathwater.
In case you must discount some verifiable observations that appear to you or to anyone else to not conform with the model, you should give a thorough justification of why those observations are not relevant. And if you cannot provide this solid justification you’ll need to start questioning, at first, your assumptions / understanding of the model (and here is a very big bias filter) – maybe those observations really fit in there but you don’t see how.
And if this still doesn’t allow room for the observations, then you’ll need to question the model. Don’t throw it out right away, but look at what is salvageable, if anything, in the light of what can be empirically observed or tested (that is, don’t introduce exotic, non-baryonic dark matter and call it a day).
And when / where the model doesn’t provide sufficient explanatory power, use “as if” instead of definitive statements.

Apass,

So, you admit the assumption, which is all I’ve claimed – that the recessional velocity interpretation is an assumption. I guess your argument is that because that assumption can produce an “expanding universe” model which predicts a redshift-distance relationship in agreement with observations, the matter is settled.

It is not, because you can get the same result with the counter-assumption – that the redshift-distance relation is not caused by a recessional velocity but is simply a consequence of the loss of energy by an expanding spherical wavefront of light as it traverses the cosmos. To calculate it, all you need do is apply the GR equation for gravitational redshift to the expanding sphere over significant cosmological intervals, incorporating all the enclosed mass at each iteration. You can do it on a spreadsheet. You get a redshift-distance relation.
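
For anyone curious what that spreadsheet exercise looks like, here is a rough sketch of the procedure. The mean density figure, the weak-field form of the redshift increment, and the step size are all my own assumptions for illustration; this is the shape of the calculation, not a definitive derivation.

```python
# Rough sketch of the spreadsheet-style calculation described above.
# Assumptions (mine): a uniform mean density RHO, a weak-field redshift
# increment dz = G * M_enclosed / (r^2 c^2) * dr, and a fixed step size.

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
RHO = 9.9e-27     # assumed mean cosmic density, kg/m^3
MPC = 3.086e22    # metres per megaparsec
PI = 3.141592653589793

def redshift_vs_distance(max_mpc=4000, step_mpc=10):
    """Accumulate redshift over an expanding spherical wavefront of light."""
    z = 0.0
    r = step_mpc * MPC
    results = []
    while r <= max_mpc * MPC:
        m_enclosed = (4.0 / 3.0) * PI * r**3 * RHO   # mass enclosed by the wavefront
        z += G * m_enclosed / (r**2 * C**2) * (step_mpc * MPC)
        results.append((r / MPC, z))
        r += step_mpc * MPC
    return results

# Print every 100th row of the resulting redshift-distance relation.
for distance_mpc, z in redshift_vs_distance()[::100]:
    print(f"{distance_mpc:8.0f} Mpc   z = {z:.5f}")
```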

To this comment: “… yes, any scientific model should be based solidly on verifiable statements, but I’m not that easily going to throw the baby out with the bathwater”, I can only ask: what exactly constitutes the “baby”, in your opinion? To me, the BB model is all bathwater; there is nothing salvageable. The situation is exactly analogous to geocentric cosmology: the foundational assumptions are simply wrong. The model presents a completely inaccurate, not to mention unrealistic, account of observed reality.

The Speed Of Light Is Not A Universal Constant

One of the great unforced errors of late 20th century theoretical physics was to declare the speed of light in a vacuum a universal constant. This was done despite the fact that, according to General Relativity, the speed of light varies with position in a gravitational field. This variation in light speed has been observed.

The idea that the speed of light varies with position in a gravitational field is not new. It comes on good authority:

…according to the general theory of relativity, the law of the constancy of the velocity of light in vacuo, which constitutes one of the two fundamental assumptions in the special theory of relativity and to which we have already frequently referred, cannot claim any unlimited validity. A curvature of rays of light can only take place when the velocity of propagation of light varies with position. Now we might think that as a consequence of this, the special theory of relativity and with it the whole theory of relativity would be laid in the dust. But in reality this is not the case. We can only conclude that the special theory of relativity cannot claim an unlimited domain of validity: its results hold only so long as we are able to disregard the influences of gravitational fields on the phenomena (e.g. of light).

Albert Einstein, Relativity The Special And The General Theory, 15th edition
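
To put a number on the point Einstein is making, here is a small sketch using the standard Schwarzschild coordinate expression for the radial speed of light. Treating that expression as the position-dependent light speed, and the choice of solar values, are assumptions made only for illustration.

```python
# Small numerical illustration of the quoted point. The expression used,
# c(r) = c * (1 - 2GM / (r c^2)), is the Schwarzschild coordinate speed of a
# radially moving light ray; the solar values are chosen only for example.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light far from any mass, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
AU = 1.496e11        # astronomical unit, m

def coordinate_light_speed(r, mass):
    """Radial coordinate speed of light at distance r from a mass."""
    return C * (1.0 - 2.0 * G * mass / (r * C**2))

for label, r in [("at the solar surface", R_SUN), ("at 1 AU from the Sun", AU)]:
    deficit = C - coordinate_light_speed(r, M_SUN)
    print(f"{label}: slower than c by about {deficit:.2f} m/s")
```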

The claim of those who wish to maintain the universal constancy of light speed (c) is that it is justified by the Equivalence Principle. The rather lengthy linked discussion goes into some detail but no mention is made of the constancy of c being consequent on the EP. None of the tests of the principle cited in the article involve measuring the value of c directly.

Einstein invoked the EP in the derivation of General Relativity to provide an interpretation of the observed fact that inertial mass and gravitational mass are equivalent. Given the above quote, he obviously did not find justification therein for asserting the universal constancy of c.

The only reasonable conclusion is that the EP does not justify the claim of a universally constant light speed. The claim appears to be only a lazy mathematicist assumption of mathematical convenience. Theoretical physics is a remarkably irrational, unscientific mess because of this sort of nonsense.

More on LIGO

The latest from the New York Times:

https://www.nytimes.com/2017/10/16/science/ligo-neutron-stars-collision.html?hp&action=click&pgtype=Homepage&clickSource=story-heading&module=second-column-region&region=top-news&WT.nav=top-news

I have to admit that I find this type of pseudo-scientific puff piece conducive to depression. Nothing discussed here involves anything remotely to do with an actual observation. What is described instead is a mathematical fiction with such a slender and tenuous connection to a dubious, claimed detection that it boggles the scientific imagination.

What we have here is a display, not of science, but of mathematicism, a disease of modern culture more reductive than reductionism. A large piece of 19th century machinery, polished and constructed to spit out reams of data, destined to be massaged by elegant algorithms of deep complexity in silent computers running ceaselessly, laboring mightily to bring forth a shred of a signal so flimsily beneath any possible sensitivity of the employed machinery as to be laughable.

From this shredded evidentiary bean of dubious provenance is spun a mighty mathematical beanstalk, a tale of fantastical proportions, with beasts of impossible construction swirling about in a dance of destruction.

This piece is nothing but a rewrite of a vapid press release, as substance-free as a DJT tweet touting his magnificent intelligence. Science weeps, bound in a dungeon of mathematical formalisms.

[Comment submitted to NYT on 10/16/17. It may or may not get published.]

The LIGO Fiasco Rolls On

It has become clear now, after three so-called gravitational wave detections, that the LIGO enterprise is a wheezing exercise in mathematical fantasizing. Its claims of a GW detection are without any real physical basis. LIGO is all math all the time, with some updated 19th century technology appended to provide source data. That source data is then mathematically sculpted such that a quantum level signal detection is claimed, one which just happens to more or less agree with one out of a set of 250,000 precalculated ‘signal templates’.
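
To be clear about what ‘agreement with one of 250,000 precalculated templates’ amounts to in practice, here is a toy sketch of template matching against a bank of waveforms. It is emphatically not the LIGO pipeline: the chirp shape, the noise level, and the plain cross-correlation scoring are simplifying assumptions of mine.

```python
# Toy sketch of template matching, not the LIGO pipeline. The chirp shape,
# the noise level, and the plain cross-correlation scoring are simplifying
# assumptions; the real analysis uses noise-weighted matched filtering over
# a far larger template bank.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4096)

def chirp(f0, f1):
    """A crude chirp-like template sweeping from f0 to f1 Hz over 1 second."""
    freq = f0 + (f1 - f0) * t
    return np.sin(2.0 * np.pi * freq * t) * np.hanning(t.size)

# A tiny bank standing in for the precalculated signal templates.
bank = {(f0, f1): chirp(f0, f1) for f0 in (30, 40, 50) for f1 in (150, 200, 250)}

# Fake "strain" data: one template buried in noise of larger amplitude.
data = 0.5 * chirp(40, 200) + rng.normal(0.0, 1.0, t.size)

def score(template):
    """Peak cross-correlation between the data and a template, normalised."""
    corr = np.correlate(data, template, mode="same")
    return np.max(np.abs(corr)) / np.linalg.norm(template)

best = max(bank, key=lambda key: score(bank[key]))
print("best-matching template (f0, f1):", best)
```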

Leave aside the question of whether it is physically possible for a large, classical-scale mechanical device to detect a quantum-scale (10^{-20} m) displacement on an interferometer with two 4 km long arms. For now it is enough to point out that the more-or-less agreement between ‘signal’ and template is hardly a sufficient scientific basis for the subsequent elaborate and exquisitely precise description of a binary black hole merger that supposedly took place in a distant, unobservable past.

Even allowing for the possibility of such a dubious physical detection, there is no scientific justification for then running an inferential chain of logic back up a baroque mathematical model, filigreed as it is with numerous unsubstantiated assumptions about the nature of physical reality, as if there were no other possible source for such a quantum level ‘signal’. The claim is, of course, that all other possibilities have been excluded. On top of everything else we are thereby treated to a ludicrous mathematicist claim of quantum omniscience.

It is interesting to note that the problematic claim in the first GW detection paper (discussed here), of having received an intact signal from the entire black hole merger event, has been downplayed somewhat in the third paper. It is alluded to in a fashion that suggests a desire for the claim to be plausibly deniable as a mere misunderstanding at some future time when the issue is pressed.