On the Redshift-Distance Relationship

The quote below is from a comment by @Apass on Stacy McGaugh’s blog, Triton Station; Stacy suggested we continue the conversation elsewhere. The quote comes from the comment section of this blog post, where the complete thread to this point can be found. My response follows the quote.

@budrap …
Friedmann is irrelevant for the discussion we are having. It is true that Friedmann was the first to derive the solution for the expanding Universe, but it was Lemaitre who proposed what he later called the “primeval atom” – i.e. the first idea / model for the BB.
And let’s not confuse the model’s assumptions – he assumed that the Universe is expanding (after all, all galaxies seem to recede), but this single assumption does not constrain how galaxies at a distance should move away from us. They might be receding at a lower pace, they might be receding at a constant speed, or they might not be receding at all at very large distances. At the time when the observations that showed the nebulae redshifts were made, there was no identified correlation between redshift and distance, so all those possibilities were open. Only when you also add GR to the model can the correlation (as later observed by Hubble) be derived – and he did so, before Hubble.
To me, that counts as a prediction confirmed later by empirical observations.
As for Hubble – again, it’s irrelevant whether or not he accepted the interpretation of the redshifts. That was his opinion, to which he was fully entitled.
As for the bias – yes, any scientific model should be based solidly on verifiable statements, but I’m not that easily going to throw the baby out with the bathwater.
If you must discount some verifiable observations that appear, to you or to anyone else, not to conform with the model, you should give a thorough justification of why those observations are not relevant. And if you cannot provide this solid justification you’ll need to start questioning, at first, your assumptions / understanding of the model (and here is a very big bias filter) – maybe those observations really fit in there but you don’t see how.
And if this still doesn’t allow room for the observations, then you’ll need to question the model. Don’t throw it out right away, but look at what is salvageable, if anything, in the light of what can be empirically observed or tested (that is, don’t introduce exotic, non-baryonic dark matter and call it a day).
And when / where the model doesn’t provide sufficient explanatory power, use “as if” instead of definitive statements.

Apass,

So, you admit the assumption, which is all I’ve claimed – that the recessional velocity interpretation is an assumption. I guess your argument is that because that assumption can produce an “expanding universe” model which predicts a redshift-distance relationship in agreement with observations, the matter is settled.

It is not, because you can get the same result with the counter assumption – that the redshift-distance relation is not caused by a recessional velocity but is simply a consequence of the loss of energy by an expanding spherical wavefront of light as it traverses the cosmos. To calculate it, all you need do is apply the GR equation for gravitational redshift to the expanding sphere over significant cosmological intervals, incorporating all the enclosed mass at each iteration. You can do it on a spreadsheet. You get a redshift-distance relation.
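For concreteness, here is a minimal numerical sketch of one way that iteration could be set up. The mean density (the ~10^-29 g/cm^3 figure mentioned later in this thread), the step size, and the weak-field discretisation are illustrative assumptions, not part of the argument:

import math

# Minimal sketch: step an expanding spherical wavefront outward and, at each
# step, apply the weak-field gravitational redshift of the mass enclosed by
# the sphere, assuming a constant mean density.
G   = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C   = 2.998e8       # speed of light, m/s
MPC = 3.086e22      # metres per megaparsec
RHO = 1e-26         # assumed mean density, kg/m^3 (about 1e-29 g/cm^3)

def redshift_at(distance_mpc, steps=20000):
    """Accumulated gravitational redshift for a wavefront grown to distance_mpc."""
    dr = distance_mpc * MPC / steps
    one_plus_z = 1.0
    for i in range(1, steps + 1):
        r_in, r_out = (i - 1) * dr, i * dr
        if r_in == 0.0:
            continue                      # no enclosed mass on the first step
        m_enc = (4.0 / 3.0) * math.pi * RHO * r_in ** 3
        # redshift picked up climbing from r_in to r_out against the enclosed mass
        dz = (G * m_enc / C ** 2) * (1.0 / r_in - 1.0 / r_out)
        one_plus_z *= 1.0 + dz
    return one_plus_z - 1.0

for d in (100, 1000, 3000):               # distances in megaparsecs
    print(f"{d:5d} Mpc -> z ~ {redshift_at(d):.5f}")

Run with only the quoted baryonic density, the resulting redshifts fall well short of the observed relation, which is the mass shortfall acknowledged further down the thread.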

To this comment: “… yes, any scientific model should be based solidly on verifiable statements, but I’m not that easily going to throw the baby out with the bathwater”, I can only ask, what exactly constitutes the “baby” in your opinion? To me, the BB model is all bathwater; there is nothing salvageable. The situation is exactly analogous to geocentric cosmology; the foundational assumptions are simply wrong. The model presents a completely inaccurate, not to mention unrealistic, account of observed reality.

10 thoughts on “On the Redshift-Distance Relationship”

  1. Jeremy Thomas

    In a more general context, relatively recent results in the foundations of mathematics lead to the idea that the information content (complexity) of a theory’s results cannot be higher than the information content (complexity) of the theory itself (its assumptions).
    This immediately implies that any theory has a limited range of applicability, since anything with more information content than the theory will be formally irreducible from it unless the theory is modified to incorporate the new irreducible fact (or something logically equivalent) into its assumptions.
    This applies directly to Quantum Mechanics and to the meaningless use, by some people, of a “universal wavefunction”, which implicitly assumes that QM can be applied to the universe as a quantum object; QM fails miserably on complex classical objects.
    The same applies to General Relativity, limiting its range of applicability in the presence of very complex assemblies of gravitating objects; the rigidity of galaxies’ rotational speeds is just a small example of this limitation. As is well known, this was the genesis of the ad hoc introduction of “dark matter” to explain away the galaxies’ supposedly anomalous behavior, but that has only multiplied the avalanche of contradictions, since dark matter is nowhere to be found.
    Reality’s complexity is a source of new irreducible properties, and “simple” models are intrinsically bound to fail when facing real complexity.

  2. Apass

    First of all – thanks for the opportunity to continue the discussion!
    I’ll make a two part answer. I’ll start with an answer to your last comment.
    “So, you admit the assumption, which is all I’ve claimed – that the recessional velocity interpretation is an assumption. I guess your argument is that because that assumption can produce an “expanding universe” model ”
    The actual assumption of the model is that the universe is expanding. One of the consequences of this assumption is that galaxies seem to recede from us. To me, these are separate points, but you seem to conflate them. In the end, I can live with that because this was not part of my initial argument and you can look at it as a chicken vs egg situation.
    My initial argument was that you are discarding empirical observations that were predicted by the BB model – that is, the observed correlation between redshift and distance. Like I said, the simple assumption of an expanding universe does not constrain how the galaxies are receding.
    However you may want to interpret the assumptions of the BB model, this was a prediction of Lemaitre before Hubble identified the correlation.
    Regarding your model – this would work in a universe devoid of structure. In a structured universe, it would work only at very large distances, on the order of billions of light-years. This is because the structure would locally affect the way the light propagates. For instance, accounting for our speed relative to the Andromeda galaxy, background objects behind Andromeda would appear to us with a higher redshift compared to objects in other directions. Additionally, objects directly opposite Andromeda would appear with a smaller redshift. This happens because Andromeda + the Milky Way form a local gravitational well that, according to your model, will alter the propagation of the wave.
    Another point: are you sure that there is enough mass in the universe to account for the observed redshift as you propose?

    For the second part – I skimmed your blog a bit and this confirmed to me that you have a very strong bias.
    From what I saw, you are inclined to filter out many results because you see them as being far from what is empirically observable / testable.
    I’ll illustrate this with your post about LIGO. Indeed, one has to perform extensive signal processing in order to extract some meaningful results from the observations. So much processing is necessary that I can see how one would be inclined to take the end result with a grain of salt. But LIGO actually consists of two sites and there is also the Virgo experiment. I can understand skepticism about gravitational wave reports if they are based on the results from a single site. But when you have two or three sites that agree on the detection (that is, they see a similar pattern at the same time), for me it becomes difficult to accept such a view.
    From my point of view, the correlation between the sites clearly shows that something happened at that moment. Because of this, like I said in the post on the other blog, before discarding the observation, you will need to give a thorough justification.
    At present, gravitational waves are within the GR framework and the LIGO / Virgo results conform with what is expected from GR. If you discard them, then you’ll need to provide another explanation.

    1. EmpiricalWarrior Post author

      …the simple assumption of an expanding universe does not constrain how the galaxies are receding.

      This begs the question, what assumption, then, did constrain the model to that result? I don’t think I’ve read the corrected English translation of Lemaitre’s paper, but something has to so-constrain it. I agree with you though, that we are having a chicken-egg argument here, so let’s set it aside for now.

      There is a still more fundamental assumption underlying the standard model. Both Friedmann and Lemaitre (and subsequently Robertson and Walker) assume the observed cosmos is a unified, coherent, and simultaneous entity that can be modeled with a single set of equations. That assumption is almost certainly wrong. It cannot be verified empirically of course, one way or the other, not even in principle. The universal expansion, the recessional velocity interpretation, the big bang event and its inexplicable original condition, are all dependent on that assumption. And that assumption produces a statement of universal simultaneity (the universe is 13.8 billion years old) that is invalid in the context of Relativity Theory.

      As far as the expanding spherical wavefront model goes, it describes a cosmological scale phenomenon, so I’m not sure what the point of your criticism is, that local effects might mask or alter the cosmological effect for closer-in objects? Well yes, so? I also don’t understand what you mean by this:

      …accounting for our speed relative to the Andromeda galaxy, background objects behind Andromeda would appear to us with a higher redshift compared to objects in other directions. Additionally, objects directly opposite Andromeda would appear with a smaller redshift.

      Redshifts are what they are. They are not subject to the imaginary manipulation you are employing here. Saying that an object might be differently redshifted if it were in a different location is both true and trivial. In the context of the expanding spherical wavefront, the redshifting effect would depend on Andromeda’s incremental addition to the average density of the mass enclosed by the sphere; for any realistic, even nearby, cosmological distance beyond Andromeda, the increment would not have a significant effect on the redshift.

      As to the LIGO situation, it’s a mess and it’s not getting better. For a good, recent overview of the problems, see this post by Sabine Hossenfelder.

      I’ll limit my comments here to an issue Sabine doesn’t discuss with regard to the original, claimed observation. The problem involves an engineering consideration. Essentially the LIGO interferometer system is a piece of classical scale machinery. It is, in its entirety, including the mirrors, composed of atoms on a scale of 10^-12 m; those atoms all vibrate chaotically over an order of magnitude. The signal detection claimed is of a variation in the fixed-length separation between the mirrors on the scale of 10^-20 m.

      But on that scale, 100 million times smaller than the atoms composing the system, there is no meaning to a fixed-length separation between the mirrors – because neither mirror has a fixed surface at 10^-12 m. If you know of an explanation for how such a detection is physically possible, I’d be interested to see it. Whenever I’ve raised the question, the only explanation offered has consisted of some lazy math, with a fixed-length separation between the mirrors at 10^-20 m assumed.

      1. Apass

        “This begs the question, what assumption, then, did constrain the model to that result?”
        GR

        “It cannot be verified empirically of course, one way or the other, not even in principle”
        In principle, not. However, its predictions can.
        But an assumption that the universe is anisotropic cannot be verified empirically either. And it would require several other assumptions without necessarily having more explanatory power.

        “the redshifting effect would depend on Andromeda’s incremental addition to the average density of the mass enclosed by the sphere”
        That is correct only in a universe devoid of structure. What you’re basically saying is that Andromeda should have an equal and instantaneous effect on the wavefront on the opposite side of the universe. We both know this cannot be true.
        “for any realistic, even nearby, cosmological distance beyond Andromeda, the increment would not have a significant effect on the redshift.”
        Can you prove it?
        You entirely avoided the question about the necessary mass. Does the universe contain enough mass (baryonic, of course, otherwise you would be invoking dark matter yourself) to explain the magnitude of the observed redshift?

        LIGO, like you said, is an engineering problem. You need to recover a signal that has a magnitude on the order of 1e-20 from noise that has a magnitude around 1e-12 (or 1e-11). The exact measurement units are not important; they can be meters or volts or whatever. Only the ratios are critical here.
        How can you do that? This doesn’t require a very complex model or obscure computations. All you need to do is measure it many, many, many… times, as the standard deviation of your noise-drowned result basically improves with the square root of the number of measurements you perform. If you throw a coin 10 times to see if it’s fair, you’ll get a value close to 0.5 but with an error. You want to improve the result 10 times? No problem – just throw the coin 100 times as many times (so 1,000 in total).
        For LIGO I’ll use your numbers of 1e-20 and 1e-11 (I haven’t checked whether the mirrors are cryogenically cooled and what the atoms’ vibration would be in that situation). That means you need to improve the result a billion times (1e9) to get a raw signal (that is, signal + noise) to noise ratio of 2. Given that the raw signal is further processed, this raw signal to noise ratio should be OK.
        How do you measure the deformation multiple times?
        In the first step you have the photon bouncing between the mirrors several times – let’s say 100 (I believe initially there were 40 trips, now there are about 250). These 100 trips basically mean 100 measurements => one order of magnitude resolved => you only need to cover 1e8.
        This can be covered by counting many photons. Like 100 million photons, at least.
        The funny thing is that at the wavelength used by LIGO (~1um), a photon has an energy of around 2e-19J so it’s easy to calculate what power they need for the laser.
        So – I said we need to measure 1e8 photons – let’s say that the quantum efficiency of the detector is 50% (i.e. only 50% of the photons are actually measured). That’s a bad figure, but let’s assume it for the sake of it => you need 2e8 photons.
        Additionally, let’s assume that the interference pattern has 9 significant peaks of equal amplitude, out of which only 3 are measured by the device => we’re losing 2/3 of the photons (this is a big waste…) => you need 3x more photons, so 6e8 photons in total.
        Let’s say that the extinction ratio in the optical cavity is 99.9% => only 0.1% of the incident photons are exiting the device => you need 1000x more photons at the input => 6e11 photons needed.
        Given the photon energy, you need 1.2e-7J. That’s 0.12uW for 1s or 0.12W for 1us.
        This 1us would allow 1MHz sampling rate, so a maximum frequency of the deformation of 500kHz. As far as I’ve seen in the graphs they published they go up to about 500Hz, so 1000 times lower. But let’s just assume that they are sampling at 100us (to have 20 samples per period at 500Hz). That means a required laser power of 12mW.
        Their laser has 2x200W beams (at least this is what I understood from this page – https://www.ligo.caltech.edu/page/laser), so around 4 orders of magnitude more than what I calculated above. That allows a further decrease of the error by around 180x.
        So no, I don’t see a problem with measuring a deformation on the order of 1e-20 under the conditions you laid out.
        Their problem is that the noise is not only at a 1e-11 level because there are many other noise sources.
        If I’m going to use 1ms sampling time (i.e. to have a max frequency of 500Hz as they reported), with their laser and my assumptions, they can have a noise level of ~2e-8 and still recover a good raw signal.
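        A quick numerical sketch of the square-root-of-N scaling invoked above, using the coin-toss illustration (the numbers are arbitrary demo values, not instrument parameters):

import random
import statistics

# Quick check of the square-root-of-N averaging argument: the spread of the
# estimated heads-fraction shrinks as 1/sqrt(number of tosses).
def standard_error(n_tosses, trials=2000, p=0.5):
    """Spread of the estimated heads-fraction across many repeated experiments."""
    estimates = [sum(random.random() < p for _ in range(n_tosses)) / n_tosses
                 for _ in range(trials)]
    return statistics.pstdev(estimates)

for n in (100, 10_000):
    print(f"{n:6d} tosses -> standard error ~ {standard_error(n):.4f}")
# 100x more tosses shrinks the error by about 10x, i.e. it scales as 1/sqrt(N).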

        1. EmpiricalWarrior Post author

          OK, let’s get something straight here: I have no tolerance for disingenuous argumentation. You quoted me out of context, then propped up a straw man argument that was unrelated to the original context. Here’s what I said, with the quote you extracted highlighted:

          There is a still more fundamental assumption underlying the standard model. Both Friedmann and Lemaitre (and subsequently Robertson and Walker) assume the observed cosmos is a unified, coherent, and simultaneous entity that can be modeled with a single set of equations. That assumption is almost certainly wrong. It cannot be verified empirically of course, one way or the other, not even in principle.

          Here is your response to the extracted quote:

          In principle, not. However, its predictions can.
          But an assumption that the universe is anisotropic cannot be verified empirically either. And it would require several other assumptions without necessarily having more explanatory power.

          That is a non sequitur, posing as a response. I said nothing about anisotropy – nothing. You simply inserted it in place of the argument I did make and proceeded to pummel your straw man. Either argue in good faith or don’t argue at all.

          You entirely avoided the question about the necessary mass. Does the universe contain enough mass (baryonic, of course, otherwise you would be invoking dark matter yourself) to explain the magnitude of the observed redshift?

          Well, I see you must have done the calculation. Then you know the answer as well as I do – the dark matter component is required. That’s not surprising, because the simplistic calculation method I suggested is just Newton’s shell theorem, which is as inappropriate on the cosmic scale as it is on the galactic scale. The purpose of the exercise was to demonstrate that a redshift correlated with distance could be generated by applying GR to an expanding spherical wavefront, obviating the need for an “expanding universe” model. To clear up the dark matter mess you’ll need a good mathematician with a solid grasp of physics to come up with an appropriate calculational technique for approximating gravitational viscosity (as Zwicky referred to it). It ain’t me.

          What you’re basically saying is that Andromeda should have an equal and instantaneous effect on the wavefront on the opposite side of the universe. We both know this cannot be true.

          That is indeed what I’m saying, and I know that is an accurate statement. It is accurate because in the frame of electromagnetic radiation there is no time dimension. The expanding sphere constitutes a null hypersurface. This is, in essence, the same mechanism that underlies entanglement. There is no interval between the photons on the sphere.

          As to the LIGO problem you simply skipped over the objection raised. Let me restate it a little differently. An interferometer works by detecting a variation in the fixed-length separation between two mirrors. At the scale of the atoms of the mirror there is no meaning to the concept of a fixed-length separation between them because the surfaces of both mirrors are vibrating chaotically. Therefore the premise of the detection methodology – two surfaces with a fixed separation – is invalid at the atomic scale. At a scale 100 million times smaller than the atoms there is absolutely no possibility of a signal detection using the interferometer methodology.

          What you do in your answer is simply assume that such a detection has been made: “You need to recover a signal that has a magnitude on the order of 1e-20 from noise that has a magnitude around 1e-12 (or 1e-11)…”. You cannot recover a quantum-scale signal that does not exist in the context of the classical machinery being employed. This is a physics problem and you cannot paper it over with some math that assumes the problem doesn’t exist.

  3. Apass

    I don’t believe that I have quoted you out of context.
    With “In principle, not.” I was agreeing that the assumption, per se, cannot be verified empirically; only its consequences can.
    The next part was not intended as a straw man, and I really don’t understand where the straw man is – it was basically my answer to my understanding of your argument: “assume the observed cosmos is a unified, coherent, and simultaneous entity that can be modeled with a single set of equations. That assumption is almost certainly wrong”. To me, from “That assumption is almost certainly wrong” it follows logically that you don’t consider that assumption correct, so, in your view, a valid assumption would be that the universe may not be unified, coherent or simultaneous and hence you need multiple sets of equations to characterize it. That basically makes the universe anisotropic.
    Since this is what I assumed about your position, coupled with your expressed view that modern science is basically decoupled from its empirical foundations, I pointed out that your assumption also suffers from the same defect. You cannot reject an assumption on the grounds that it is not empirically testable and replace it with another one that is still not empirically testable.
    If one fights for empirical grounds in modern physics, I strongly believe that he or she must be consistent in the choices made.
    And again, if you reject BB because it cannot make empirically observable predictions, I was assuming that you proposed the propagating wavefront model as requiring fewer assumptions while still providing the same, if not better, predictive power. I guess I was wrong about that. So why did you invoke it in the first place? Just to show that there could be other interpretations for an observation?
    OK – so? Does it have more predictive power than the BB model? Does it require fewer assumptions? Is it based only on empirically verifiable evidence?
    Like I said, if you are to be consistent – please be consistent and argue for empirically testable models that might provide a better understanding than the current BB model.
    As for the wavefront, “There is no interval between the photons on the sphere” would be true only if one forgets the quantum nature of light and treats it using only Maxwell’s equations in a homogeneous medium. That is not the case since, again, that ignores the structure in the universe.
    Now, going to LIGO:
    “At the scale of the atoms of the mirror there is no meaning to the concept of a fixed-length separation between them” – this is a misconception.
    Let me first make an analogy – tell me what the sea level is. Given the waves with all their crests and troughs, “there is no meaning to the concept of a fixed” sea level. But we can still say that during a hurricane it rose by several meters, overflowing the dams. We can say this because we’re dealing with the mean surface of the sea, from which the crests and troughs are random deviations.
    The situation is similar for the LIGO mirrors – I don’t care what the actual shape of the surfaces is. All I care about is whether I can measure a change in the distance between the two mean positions of the mirrors. And like I said previously, this can be measured to a very good precision by measuring the distance (as badly defined as it is) many times. Maybe for one measurement I hit atoms in their extreme positions, maybe for other measurements I hit atoms in their central positions – but on average, if I’m summing the interference patterns for vast amounts of photons, I can discern the average change in the separation between the two mirrors even if the change is on the order of 1e-20 m.
    But let me give you another analogy, more tangible at our physical scale. How can you measure the width of a thin wire using a simple ruler? If you look closely, you’ll notice that the tick marks on the ruler are badly defined – they do not have a constant width, they do not have a constant pitch (at the scale of the wire width), and you don’t have reference points to be sure that you’re using the ruler perpendicular to the wire. It’s almost like the tick marks have no meaning at the scale of the wire’s width.
    But still, if the wire is long enough you can measure its diameter with a precision that is arbitrarily finer than the ruler’s tick-mark precision by simply winding it around a core. You then measure the winding length and, knowing the number of turns, you can calculate the diameter. With this, you can measure diameters on the order of tens of micrometers with a simple cheap ruler.
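    To make the averaging idea concrete, here is a scaled-down numerical sketch, assuming well-behaved Gaussian noise: it recovers a change in the mean separation two hundred times smaller than the per-measurement scatter purely by averaging (the real ratio of ~1e-9 would need vastly more samples, so the numbers are illustrative only):

import random

# Scaled-down illustration: recover a mean shift much smaller than the
# per-measurement scatter by averaging many measurements (Gaussian noise assumed).
NOISE = 1.0        # per-measurement scatter (arbitrary units)
SHIFT = 5e-3       # true change in the mean separation, 200x below the scatter
N     = 1_000_000  # measurements averaged on each side of the change

mean_before = sum(random.gauss(0.0, NOISE) for _ in range(N)) / N
mean_after  = sum(random.gauss(SHIFT, NOISE) for _ in range(N)) / N

recovered = mean_after - mean_before
expected_error = NOISE * (2.0 / N) ** 0.5   # ~1.4e-3 for these numbers
print(f"recovered shift ~ {recovered:.2e}  (true {SHIFT:.0e}, "
      f"expected statistical error ~ {expected_error:.0e})")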

    1. EmpiricalWarrior Post author

      Apass,

      Because of time constraints, I’m going to break my reply into two parts. Taking the LIGO issue first, the misunderstanding is clear in your analogies, which have no scale relationships comparable to those of the LIGO system. To see what’s wrong with your analogy regarding sea level variations, all that’s needed is to scale it properly. So, employing a standard, everyday wave of 1 meter as a baseline, in your hurricane example the waves would be 100 million meters high and you would have to claim that, under those conditions, you could somehow detect a 1 m variation of the mean sea level.

      The analogies also lie comfortably within the realm of ordinary classical mechanics. Your analogies, in other words, are not analogous to the situation under consideration. The LIGO equipment is constructed on a classical scale, and LIGO claims to be able to make precise measurements on a quantum scale, of a “mean” or “average” that essentially has no meaning in terms of the classical-scale equipment being employed. According to you (emphasis added):

      All I care about is whether I can measure a change in the distance between the two mean positions of the mirrors. And like I said previously, this can be measured to a very good precision by measuring the distance (as badly defined as it is) many times. Maybe for one measurement I hit atoms in their extreme positions, maybe for other measurements I hit atoms in their central positions – but on average, if I’m summing the interference patterns for vast amounts of photons, I can discern the average change in the separation between the two mirrors even if the change is on the order of 1e-20 m.

      What you are picturing here is a well-behaved, classical system in which “means” and “averages” have meaning. At the claimed detection scale of 10^-20 m, where the separation between the mirrors is chaotic on a scale 100 million times larger, the “means” and “averages” of your imagining do not exist in a classical sense. To the extent they can be said to exist at all, the “means” and “averages” are themselves chaotic variables. LIGO’s claims of a detection at that scale are a mathematicist fantasy.

      I hope to address your criticisms of the proposed non-unitary model of the cosmos later today. Regards.

    2. EmpiricalWarrior Post author

      Apass,

      Your invocation of anisotropy was a straw man argument. Anisotropy is neither a necessary assumption of a non-unitary model nor is it a necessary consequence thereof. I am not interested in your ruminations on the subject, as they are irrelevant to a well-reasoned discussion of the non-unitary assumption.

      You cannot reject an assumption on the grounds that it is not empirically testable and replace it with another one that is still not empirically testable.

      I have never said I reject the unitary assumption on the grounds you cite – never. Once again you are making something up – assuming you know something you don’t know. It makes your arguments extremely weak.

      I reject the unitary assumption because the resulting BB model presents a childishly crude, nonsensical picture of a “universe” that does not bear any resemblance to the cosmos we actually observe. There are five fundamental, structural elements that define the BB model:
      1. the big bang event and its inexplicable original condition
      2. the inflation event and its inflaton field
      3. substantival (expanding, curving) space, time, and/or spacetime
      4. dark matter
      5. dark energy

      None of those structural elements are empirically observable elements of physical reality. None of them are part of the cosmos that lies before us. That is why I reject the unitary assumption. I prefer a non-unitary assumption because it eliminates the need for 4 of those 5 imaginary elements. The big bang itself, inflation, substantival spacetime, and dark energy all evaporate like the morning mist before a hot summer sun. In other words, non-unitarity appears to be a simplifying assumption, a very desirable scientific trait. Perhaps you would like to explain why you seem to prefer the retention of those superfluous elements?

      …“There is no interval between the photons on the sphere” would be true only if one forgets the quantum nature of light and treats it using only Maxwell’s equations in a homogeneous medium. That is not the case since, again, that ignores the structure in the universe.

      According to the standard model, the cosmos is homogeneous on large scales and, to the best of our knowledge, it is also quite dilute – approximately 10^-29 g/cm^3. Maxwellian considerations are reasonably appropriate under those conditions, I would think.

  4. Apass

    This is just a placeholder for my comments – I fell behind with a work-related task so I won’t be able to respond in the next few days.
    Anyway – I’ll need to make a correction to the calculations I made with respect to LIGO – but this is what happens when posting at 2 a.m….

