The Unprincipled Principle & The Persistence of Mistakes

Over at Triton Station, Stacy McGaugh has a series of posts on the recent, and ongoing, successful predictions of MOND at the galactic scale, beginning with the December 31, 2020 post, 25 Years A Heretic, which provides an excellent historical overview of the situation. It is, thankfully, solidly couched in empirical observations.

The last two comments on the linked post are by Apass and Phillip Helbig. I had intended to reply to them in a timely fashion, but by the time I got around to it the comment section was closed. Following are the replies I intended to post:

@Apass,

… through what mechanism would this new disk model give rise to the EFE felt by a satellite galaxy isolated by tens of thousands of light years where you cannot invoke anymore viscosity-like effects?

Given that the mechanism through which gravity causes its effects is generally unknown, that’s an odd question. If a theoretical model of disk dynamics limning the mass distribution replicated MOND’s success in describing the rotation curves, I would be surprised if it didn’t also account for the EFE (External Field Effect). After all, satellite galaxies are not, by definition, gravitationally isolated from their host galaxy.

That DM didn’t predict the EFE is probably a function of the fact that DM doesn’t successfully predict anything except itself. It is always deployed as a patch to a failure of an implementation, on significantly larger scales, of the standard gravitational model of the solar system. If the EFE holds up, which I expect it will, there will likely be a concerted effort to post-dict it with DM. This would present an existential problem for proponents of the standard model of cosmology (SMoC), however, as the EFE overturns the unprincipled Strong Equivalence Principle (SEP).

The SEP is not an expression of empirical fact the way the weak equivalence principle (the equality of inertial and gravitational mass) is; it is little more than an idle conjecture of mathematical convenience, adopted in the mid-twentieth century, right at the onset of the post-Einsteinian revision of General Relativity.
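
To make the distinction concrete, here is a minimal sketch in MOND’s standard notation (my illustration, not anything from the linked post; order-unity factors that depend on the chosen interpolation function are omitted):

    % WEP (empirical): inertial and gravitational mass are equal
    m_i \, \ddot{\vec{x}} = m_g \, \vec{g}, \qquad m_i = m_g

    % Deep-MOND regime for an isolated system (g_N \ll a_0)
    g \simeq \sqrt{g_N \, a_0}

    % With a dominant external field g_e (g_N \ll g_e \ll a_0), the
    % subsystem becomes quasi-Newtonian with a rescaled coupling
    g \simeq g_N \, \frac{a_0}{g_e}

On this sketch, the internal dynamics of an embedded subsystem depend on the external field it sits in; that is the EFE, and it is precisely what the SEP denies, while the WEP is left untouched.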

The SEP is not foundational to the SMoC, but it is well integrated into the mindset of its practitioners and has consequences for the theoretical “study” of theoretical entities like black holes and of multibody systems like galaxy clusters. Dispensing with the SEP would, for one thing, mean that cosmologists could stop believing, contrary to the empirical evidence and the GR prediction, that the speed of light in a vacuum is a universal constant.

@Phillip Helbig

Note that in the standard cosmological model, only 1920s cosmology is used. Despite the huge amount of high-quality data now available, classical cosmology needs nothing beyond what was known about relativistic cosmology in the 1920s.

Well yes, and that’s why we need a 21st-century cosmology with, shall we say, more realistic underpinnings. I really don’t understand why anyone would think clinging to the naive and dubious assumptions of 100 years ago is a scientifically sound approach, especially given the manifest failures of the current model. I understand that the SMoC is graded on a curve, with negative empirical results deeply discounted in favor of theoretical requirements; but still, the model’s defining structural elements all lie in the realm of the empirically undetected.

The fact that there appears to be absolutely no interest in revisiting the standard model’s foundational assumptions, even as an exercise in intellectual curiosity, says nothing good about the socio-economic incentives of the modern scientific academy. The Big Bang model is the best possible, you say? I would agree, but only with the caveat that it is the best possible in the context of the model’s essentially pre-modern assumptions. As soon as those are jettisoned, the Big Bang creation myth will almost certainly become nothing more than a historical curiosity.

6 thoughts on “The Unprincipled Principle & The Persistence of Mistakes”

  1. Jeremy Thomas

    Any theory always has a “limited range” of applicability, and limited range here means a limited “complexity” range.

    It has been shown mathematically that “complexity” is a source of irreducibility/incompleteness; the complexity associated with the number of discrete components always introduces new irreducible properties that are independent of the “fundamental” laws followed by the discrete/elementary components.

    And this limited complexity range of applicability is obvious in all known actionable physical theories:

    – The classical world is emergent from a large assembly of quantum objects. Identity, the ability to differentiate “objects”, is a classical property at the root of any measurement; without Identity, measurements are not possible; not even mathematics is possible.

    – The complexity of living beings can’t be reduced to Quantum Mechanics, not even in principle; a “Universal Wave Function” is utter nonsense, as the Universe is not a quantum object; by the same token, Michio Kaku’s “God Equation” is pure gibberish.

    – The intrinsic limitations of weather and climate models are another example of the emergence of dynamic, irreducible properties in complex systems.

    – The “anomalous”, “rigid” rotational speed of galaxies as complex systems of stars also shows the emergence of properties of complex systems at cosmic distances; General Relativity’s failure to “explain” this “forced” a prevalent reductionist mindset to postulate a fictional, non-observable “dark matter” that has only made matters worse.

    In the words of Philip W. Anderson, “more is different”; or, perhaps, Hegel’s dialectical transition from quantity to quality.

    1. EmpiricalWarrior Post author

      Jeremy,

      I generally agree with you as far as the issue of complexity (or scale, as I call it) goes. However, “emergence” is just a popular buzzword used by theorists to cover up their inability to engage realistically with physical reality. It can be readily deployed to “explain” things without actually explaining anything.

      For instance, when you say that the rotational speed of galaxies “also shows the emergence of properties of complex systems at cosmic distances”, you aren’t saying anything meaningful about the physics of actual galaxies. General Relativity is not the problem with the rotation curves of galaxies; the problem is the inability of theorists to devise an implementation of GR’s fundamental principles appropriate to the scale and complexity of the distributed mass systems that typify galaxies. Instead, they have invoked “Keplerian” or “Newtonian” methods that reduce the calculations (to a good approximation in the solar system) to a series of two-body problems. Those methods simply do not work on galaxies, because galactic dynamics cannot be approximated as a two-body problem with a large central mass.
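
      To illustrate with round numbers, here is a back-of-the-envelope sketch of my own (the baryonic mass M and the radii are assumed, illustrative values, not a model of any particular galaxy): the point-mass treatment that works for the solar system predicts a declining, Keplerian curve, while MOND’s v^4 = G M a_0 relation predicts, from the same mass, a flat speed of the kind actually observed.

      import math

      G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
      A0  = 1.2e-10      # Milgrom's acceleration scale a_0, m s^-2
      M   = 1.0e41       # assumed baryonic mass, kg (~5e10 solar masses)
      KPC = 3.086e19     # metres per kiloparsec

      def v_keplerian(r):
          # Circular speed around a point mass: v = sqrt(G M / r); falls as 1/sqrt(r).
          return math.sqrt(G * M / r)

      def v_mond_flat():
          # Deep-MOND asymptotic speed: v^4 = G M a_0; independent of radius.
          return (G * M * A0) ** 0.25

      for r_kpc in (5, 10, 20, 40):
          r = r_kpc * KPC
          print(f"r = {r_kpc:2d} kpc: Keplerian {v_keplerian(r)/1e3:6.1f} km/s, "
                f"MOND flat {v_mond_flat()/1e3:6.1f} km/s")

      The Keplerian speed halves every time the radius quadruples; the flat speed computed from the same baryonic mass does not, and that divergence is the rotation-curve problem in miniature.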

      As a scientific concept, “emergence” is a vapid truism: ultimately, everything emerges in some sense! Where you say “The classical world is emergent from a large assembly of quantum objects”, you could just as readily have said that the quantum world emerges at the margins of the classical world. The concept of emergence, as it is commonly used, also suggests a temporal progression from less to more complex, which isn’t necessarily how physical reality works. In physical reality we do see the complex emerge from the simple (life from inert matter), but we also see the reverse: simple things emerging from more complex systems (photons from the sun). But classical physics doesn’t emerge from quantum physics, or the other way around; they are coexistent, interrelated aspects of physical reality, with different scales exhibiting different behaviors.

      1. Jeremy Thomas

        You are confusing “scale” with “complexity range”.
        It has already been shown in formal mathematics that complexity is a source of incompleteness/irreducibility; even more, that incompleteness/irreducibility is pervasive, in the same sense that irrational numbers are pervasive in the set of real numbers.

        When a property is “irreducible”, it means that it can’t be “explained” or “reduced” using a set of independent assumptions/axioms.
        The “rigid” rotational speed of galaxies could be one of these properties that are irreducible, or “strongly emergent”; you can’t “explain” it by using the gravitational laws followed by simple systems of stars or planetary systems; that rotational speed is a property of the system of stars as a whole, in the same way that self-awareness is a property of a living being, not explainable by the properties of its quantum elementary components.
        Naive reductionism is intrinsically flawed.

        1. EmpiricalWarrior Post author

          It seems to me that you are confusing math with physics here. Whatever formal mathematics has to say about the mathematical concepts of complexity and incompleteness/irreducibility does not necessarily have meaning for, or application to, any given physical system under consideration. In terms of galactic rotation curves, you introduce the complexity argument as a hypothetical:

          The “rigid” rotational speed of galaxies could be one of these properties that are irreducible, or “strongly emergent”

          You then pivot to an unqualified assertion:

          you can’t “explain” it by using the gravitational laws followed by simple systems of stars or planetary systems

          Well no, you can’t correctly calculate the rotation curves using the calculational techniques employed for the solar system; that is simply the fact of the matter. But it does not follow from an abstract mathematical hypothetical that a full implementation of GR principles, one that addresses the scale and complexity of a galactic system, cannot be found.

          It may well be that, in addition to gravitational considerations, other factors, such as the role of galactic plasma, need to be incorporated into models of galactic dynamics; but to suggest that gravity itself plays no role because of “complexity”, without proposing any alternative mechanism to replace the gravitational effect, renders your argument implausible. Physical behaviors have physical causes; mathematics, on the other hand, does not have a causal relationship with physics.
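
          To give a feel for how much the mass distribution alone matters, here is a sketch (my illustration; the thin-disk formula is Freeman’s classic 1970 result, and the mass and scale length are assumed round numbers): even in pure Newtonian gravity, an extended exponential disk produces a measurably different rotation curve than a point mass of the same total mass.

          import math
          from scipy.special import i0, i1, k0, k1  # modified Bessel functions

          G      = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
          M      = 1.0e41      # assumed disk mass, kg (~5e10 solar masses)
          KPC    = 3.086e19    # metres per kiloparsec
          R_D    = 3.0 * KPC   # assumed exponential scale length, 3 kpc
          SIGMA0 = M / (2.0 * math.pi * R_D**2)  # central surface density

          def v_point(r):
              # Keplerian speed from the same total mass concentrated at a point.
              return math.sqrt(G * M / r)

          def v_disk(r):
              # Freeman (1970) thin exponential disk:
              # v^2 = 4 pi G Sigma0 R_d y^2 [I0(y)K0(y) - I1(y)K1(y)], y = r / (2 R_d)
              y = r / (2.0 * R_D)
              v2 = 4.0 * math.pi * G * SIGMA0 * R_D * y**2 * (i0(y) * k0(y) - i1(y) * k1(y))
              return math.sqrt(v2)

          for r_kpc in (2, 5, 10, 20):
              r = r_kpc * KPC
              print(f"r = {r_kpc:2d} kpc: disk {v_disk(r)/1e3:6.1f} km/s, "
                    f"point mass {v_point(r)/1e3:6.1f} km/s")

          The narrow point: how the mass is distributed changes the curve even before anything exotic is added, which is why the two-body shortcut is the wrong baseline for a galaxy; it does not, by itself, produce the observed flatness.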

          1. Jeremy Thomas

            At the root of any physical argument is the axiomatic method, as first explicitly used by Newton, with Euclidean Geometry as his guide.
            So if the axiomatic method is intrinsically limited, then physics’ blind use of the axiomatic method, disregarding its limitations, is inconsistent at best.
            Complex systems very frequently exhibit properties/behaviors that belong to the system as a whole.
            Reductionist mindsets assume that knowing the “fundamental” laws of a complex system’s elementary components is enough to “reduce” or explain any property or behavior of any assembly of those elementary components; but that implicitly assumes that the underlying axiomatic system is “complete”, and that is a very strong assumption considering the relatively recent results on axiomatic systems.
            Again, a previous example:
            – You can’t distinguish or uniquely identify quantum objects of the same “type”: an electron is indistinguishable from any other electron; in other words, Identity is meaningless for quantum objects.
            But a sufficiently big assembly of quantum objects gives rise to a “classic” object, and classic objects can be identified; the identity of a classic object is a property of the whole assembly of quantum objects, and this property can’t be “explained” by the properties of the classic object’s quantum elementary components.
            The classic world with its space, measurements, and us is an emergent limit of the quantum world.

          2. EmpiricalWarrior Post author

            At the root of any scientific argument lie observations and measurements, to which logical analysis is applied in an attempt to form an accurate understanding of the nature and cause(s) of the observed and measured phenomena. Mathematics is a branch of logic that provides science with a fundamental and invaluable modeling tool. But mathematics is not science. The axiomatic method is appropriate to mathematics, but it has no place in science; nothing can be definitionally true in science.

            If your complaint is that the axiomatic method is being inappropriately deployed in the scientific academy, especially in the context of the two standard models, I would agree. But the misuse of the axiomatic method is only a symptom of the larger problem of mathematicism (the unscientific belief that math underlies and determines the nature of physical reality).

            Although mathematicist beliefs are openly professed by only a minority, mathematicism is the default operating paradigm of the theoretical physics community. The observations of astronomers and astrophysicists are fit into their archaic creation myth (based on dubious, century-old assumptions that have been axiomatized) through the miracle of free parameterization. The resulting standard model of cosmology is absurd, depicting a Universe that bears no resemblance to the Cosmos we observe.

            Regarding:

            The classic world with its space, measurements, and us is an emergent limit of the quantum world.

            I can just as readily assert that the quantum world, with its wavefunctions, superpositions, etc., is an emergent limit of the classical world. It doesn’t matter; scientifically speaking, neither statement says anything particularly meaningful.
