Tired light and supernovae

20 years 6 months ago #9564 by EBTX
> More importantly, there is no correlation between half-width and redshift beyond that expected from Malmquist bias ...
So, are you saying that the data is deliberately fudged? Or, does the data accidentally or coincidentally fall on a statistical line that just happens to be correlated with redshift?

I'm trying to understand why objective data (redshift measurements and durations) coincidentally supports BB as opposed to tired light. Conversely, is there any way someone could re-appoint the data to show support for tired light without the appearance of fudging?

As I understand it, no selectivity is involved here. The apparent brightness of the supernovae was irrelevant. They simply took whatever supernovae were available for study in the time period available to them. If they had 60,000 to choose from and picked 60, I could expect a bias. However, there were just the 60 (or thereabouts).

They did not plot "brightness" but rather the duration from maximum brightness (of various wavelengths) to some specified percentage of that brightness ... whatever that brightness might have been to start with was irrelevant. Where is the Malmquist bias here? Am I missing something obvious?

This duration is plotted on the "y" axis and its redshift on the "x" axis. Again, nothing here is of anyone's choosing. The results clearly favor BB over "tired light".
_____________

For those who do not understand the effect ...

If a signal is sent out and another is sent 1 minute after it ... and ... it travels zillions of miles ... tired light theories predict that the signal itself will be red-shifted but NOT the 1 minute interval between signals.

BB, on the other hand, requires that the 1 minute between signals also be lengthened. So, if the signal is red-shifted to twice its original wavelength, the duration between signals will now take up 2 minutes for those on the receiving end. This is what the data seems to support.
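In numbers, the two predictions for the one-minute example look like this (a sketch; the 500 nm emitted wavelength is an arbitrary assumption for illustration):

```python
# Two pulses leave a source one minute apart, at redshift z = 1
# (wavelength doubled on arrival).
z = 1.0
interval_emitted_min = 1.0
wavelength_emitted_nm = 500.0  # assumed emitted wavelength, for illustration

# Both models redshift the wavelength by the factor (1 + z).
wavelength_observed_nm = wavelength_emitted_nm * (1 + z)

# Simple "tired light": energy is lost in transit, but the spacing
# between the two pulses is unchanged.
interval_tired_light = interval_emitted_min

# Expanding universe (BB): the interval is stretched by the same factor.
interval_bb = interval_emitted_min * (1 + z)

print(wavelength_observed_nm)  # 1000.0
print(interval_tired_light)    # 1.0
print(interval_bb)             # 2.0
```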

> There is no such thing as a single standard SN lightcurve.

So ... OK ... then why do the random lightcurves correspond to BB red-shift expectations? If there were absolutely no standard lightcurve, if the situation were completely random, wouldn't the variations cancel and form a flat line on the graph ... favoring tired light models?


20 years 6 months ago #9677 by EBTX
> BB has always used a physically impossible version of "tired light" as a strawman.
What does this mean?

How does one construct a "straw man" by citing the most basic tenet of all tired light theories, i.e. that light is red-shifted as a function of travel time? Is this a feature of an "impossible version" ;o) ?


20 years 6 months ago #9982 by Jim
Tired light is a comical image, but the underlying idea is not as silly as the BB model. The redshift is not caused by recession speed except in models. In other words, the real universe is not a model, and the mind of man is not better than nature at figuring things out. You need to be humble and see that no one has the universe figured out, and maybe never will.


20 years 6 months ago #9566 by rousejohnny (Johnny Rouse)
I think the question of what causes the delay is at hand here. At a higher redshift, the supernova would be traveling closer to the speed of light than one at a lower redshift. Is the process of the supernova used to measure the delay affected by the speed of the supernova, resulting in a "higher metabolism" via E = mc²? If so, we could learn much about our Universe from this data.

Is this a possible explanation? It could justify the correlation, could it not?


20 years 6 months ago #9791 by tvanflandern
> Originally posted by EBTX: are you saying that the data is deliberately fudged? Or, does the data accidentally or coincidentally fall on a statistical line that just happens to be correlated with redshift?

Neither. The comparison models were contrived to give the expected result.

> I'm trying to understand why objective data (redshift measurements and durations) coincidentally supports BB as opposed to tired light.

The "tired light" model used had an essential component removed, and BB had two adjustable parameters added, to secure this favorable comparison. Without any of those elements, BB loses the comparison.

> Conversely, is there any way someone could re-appoint the data to show support for tired light without the appearance of fudging?

Sure. Just use the physically correct tired light model instead of the strawman model.

Here's the difference. Light is a transverse wave, so its wave oscillations are perpendicular to its propagation direction. Just as with any wave, light intensity is equal to the wave amplitude squared. But if light loses energy while propagating, that requires a resisting medium and friction. So light must also lose amplitude while propagating because the resisting medium resists motion in all directions equally, including the transverse oscillations. It would not be physically possible for any isotropic action to selectively take energy from oscillations in the direction of propagation without taking energy from oscillations in all directions.

The upshot of this is that, in physically viable tired light models, light loses both frequency and amplitude at the same proportional rates. But losing amplitude means losing intensity. So light from sources distant enough to be redshifted must also appear intrinsically fainter than the source really is.
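That proportional loss can be written out directly (a sketch of the argument above, taking intensity as amplitude squared; the function name is mine, not from any standard library):

```python
# Sketch: in a tired light model where frequency and amplitude fall
# at the same proportional rate, intensity (amplitude squared) falls
# as the square of the redshift factor.
def tired_light_dimming(z):
    """Extra dimming factor for light redshifted by (1 + z)
    through an energy-loss medium."""
    amplitude_factor = 1.0 / (1.0 + z)   # same factor as the frequency loss
    return amplitude_factor ** 2         # intensity goes as amplitude squared

print(tired_light_dimming(0.0))  # 1.0  -> no redshift, no extra dimming
print(tired_light_dimming(1.0))  # 0.25 -> wavelength doubled, 4x fainter
```

So in this picture a redshift-one source is an extra factor of four fainter than the inverse-square law alone would predict, which is why apparent brightness cannot be ignored in the comparison.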

> As I understand it, no selectivity is involved here. The apparent brightness of the supernovae was irrelevant. They simply took whatever supernovae were available for study in the time period available to them. If they had 60,000 to choose from and picked 60, I could expect a bias. However, there were just the 60 (or thereabouts).

Apparent brightness is critical to the interpretation. One cannot usually tell whether a given lightcurve was stretched by time dilation or came from an intrinsically brighter SN Ia. If the lightcurve amplitude is altered at the same time the light is redshifted by an energy-loss mechanism, that changes the interpretation of which supernova curves are stretched and by how much.

If someone really plotted half-width vs. redshift, that would show a gentle upward trend because of Malmquist bias. However, very few supernovas are seen early enough to measure the half-width directly. So the width is inferred from the measured brightness, which allows inferring the intrinsic brightness if one knows the distance, which allows one to select the correct template lightcurve to be used for fitting the observations. Then and only then can one derive an inferred half-width. But this rests on a pack of assumptions, one of which is the (incorrect) redshift-distance law required by BB.

> They did not plot "brightness" but rather the duration from maximum brightness (of various wavelengths) to some specified percentage of that brightness ... whatever that brightness might have been to start with was irrelevant. Where is the Malmquist bias here? Am I missing something obvious?

Again, there is no standard lightcurve for the time from peak brightness to half-peak-brightness or any other level. The time interval has a wide range that depends on intrinsic brightness. Malmquist bias enters because we tend to selectively see intrinsically brighter supernovas, on average, with increasing distance simply because the distant faint ones are much less likely to be discovered.
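A small Monte Carlo makes this selection effect concrete (all numbers here are illustrative assumptions, not values from any survey):

```python
import math
import random

# Monte Carlo sketch of Malmquist bias: supernovae with scatter in
# absolute magnitude M, observed through a fixed apparent-magnitude
# limit. At larger distances, only the intrinsically brighter ones
# clear the cut, so the detected sample's mean M shifts brighter.
random.seed(1)

M_MEAN, M_SIGMA = -19.3, 0.5   # assumed SN Ia peak absolute magnitude, scatter
M_LIMIT = 24.0                 # assumed apparent-magnitude detection limit

def mean_detected_abs_mag(distance_mpc, n=20000):
    """Mean absolute magnitude of the supernovae that clear the flux limit."""
    mu = 5 * math.log10(distance_mpc * 1e6 / 10)   # distance modulus
    detected = [M for M in (random.gauss(M_MEAN, M_SIGMA) for _ in range(n))
                if M + mu < M_LIMIT]                # apparent mag within limit
    return sum(detected) / len(detected)

near = mean_detected_abs_mag(100)    # limit irrelevant: mean stays ~ -19.3
far = mean_detected_abs_mag(3000)    # faint tail cut: mean shifts brighter
print(near, far)                     # far < near (brighter = more negative)
```

At the near distance essentially everything is detected, so the sample mean matches the true mean; at the far distance the faint tail is missed and the detected sample is biased bright.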

> This duration is plotted on the "y" axis and its redshift on the "x" axis. Again, nothing here is of anyone's choosing. The results clearly favor BB over "tired light".

If you have a specific plot in a specific paper you are looking at, I'll be glad to look it over and make more specific comments. But these generalizations have worked well up to now.

> if the signal is red-shifted to twice its original wavelength, the duration between signals will now take up 2 minutes for those on the receiving end. This is what the data seems to support.

But the intrinsically brightest supernovae naturally have more than twice the signal spacing of the average supernova. So we must sample the same brightness range of supernovae at each distance for this test to prove that part of the signal spacing is caused by time dilation. A comparison using only the brightest supernovae at each distance looks very different from one using all supernovae. -|Tom|-


20 years 6 months ago #9570 by EBTX
> So the width is inferred from the measured brightness, which allows inferring the intrinsic brightness if one knows the distance, which allows one to select the correct template lightcurve to be used for fitting the observations.

What you seem to be saying here is that they did not actually have the whole enchilada in hand for all 60 cases, but rather "inferred" the missing parts by referring to "The Model"? And that the "completed" data therefore fit that model retroactively? This would constitute a scientific breach if true.
> If you have a specific plot in a specific paper you are looking at, I'll be glad to look it over and make more specific comments.

The URL for the graph is in my first entry to this thread (from Ned Wright's cosmology site): "In 2001 Goldhaber and the Supernova Cosmology Project published results ..."
> Again, there is no standard lightcurve for the time from peak brightness to half-peak-brightness or any other level.

My impression is that one would use as the standard (by fiat) the nearest Type Ia supernova that could be found (one with no significant redshift) ... then check any and all Type Ias that could be found, observing the time it takes them to decay to, say, 1/2 brightness ... then compare those times to the behavior of that arbitrary standard ... and then plot those results against the redshift of all those supernovae.

This would then trace out a graph consistent with BB but not compatible with any possible tired light model. At least, this is the impression I get initially.

If there were (in principle) no intrinsic, standard lightcurve for Type Ias, then the resulting graph should be simply meaningless, i.e. random.
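The comparison described above can be mocked up with toy numbers (a sketch only, not the Goldhaber analysis; the rest-frame width W0 and its scatter are assumptions):

```python
import random

# Toy width-vs-redshift test: generate decay-to-half-brightness widths
# with intrinsic scatter, with and without (1 + z) time dilation, and
# fit a least-squares slope to each scatter plot.
random.seed(2)

W0, SCATTER = 20.0, 3.0   # assumed rest-frame days from peak to half brightness

def simulate(time_dilation, n=500):
    pts = []
    for _ in range(n):
        z = random.uniform(0.0, 1.0)
        w = random.gauss(W0, SCATTER)
        if time_dilation:
            w *= (1 + z)              # BB: observed duration stretched
        pts.append((z, w))
    return pts

def slope(pts):
    """Least-squares slope of width against redshift."""
    n = len(pts)
    mx = sum(z for z, _ in pts) / n
    my = sum(w for _, w in pts) / n
    num = sum((z - mx) * (w - my) for z, w in pts)
    den = sum((z - mx) ** 2 for z, _ in pts)
    return num / den

s_bb = slope(simulate(time_dilation=True))    # near W0: widths grow with z
s_tl = slope(simulate(time_dilation=False))   # near 0: a flat scatter plot
print(s_bb, s_tl)
```

With time dilation the fitted slope comes out near W0; without it the scatter averages out to a flat trend, which is the "random, hence flat" expectation in the paragraph above.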

> Malmquist bias enters because we tend to selectively see intrinsically brighter supernovas, on average, with increasing distance ...

At what point does the Malmquist bias "kick in" for supernovas? Is there not a lower limit to the brightness of a supernova? And isn't that lower limit eminently visible to anyone looking in that direction with a proper instrument? What I am getting at is: can there be unobservable supernovas at, say, less than z = 0.8?

If not, then there can be no Malmquist bias within that radius, regardless of a supernova's actual or supposed distance ... and therefore the bias is not applicable to this class of objects within that radius ... and a fair, statistically significant sampling can yield a reliable graph.

