Regarding my question about why maximum brightness does not occur at minimum radius, and how this relates to the expansion velocity and the underlying physics, here is a snippet from the introduction to the paper The Cepheid Phase Lag Revisited by Szabó et al. (2007):
The relative phase between the luminosity and the velocity curves in classical pulsating variables was a puzzle in the early days of variable-star modeling. Because, overall, the pulsations are only weakly nonadiabatic, it was expected that the maximum brightness should occur at maximum compression, i.e., minimum radius, whereas in reality it is observed to occur close to maximum velocity. (To avoid confusion, we note that in this paper we use astrocentric velocities, u = dR/dt.) The luminosity thus has a ∼90° phase lag compared to the one expected adiabatically.
As it became possible to make accurate linear and nonlinear calculations of the whole envelope, including in particular the outer neutral hydrogen region, agreement between modeling and observation was achieved. However, it was Castor (1968) who first provided a qualitative physical understanding of this phenomenon. He pointed out that during the compression phase, for example, the hydrogen partial ionization front moves outward with respect to the fluid, and energy is removed from the heat flow to ionize matter. Because of this temporary storage of energy—as in the charging of a capacitor—the H acts like a low-pass filter, causing a phase lag of close to 90°. However, the exact value of the phase lag depends sensitively on details of the stellar model and can only be obtained by detailed simulations. An understanding of the phase lag involves physics both in the linear and the nonlinear regimes. This characteristic makes it an ideal benchmark to test existing hydrocodes against observational constraints.
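To make Castor's capacitor analogy concrete, here is a minimal Python sketch (my own illustration, not from the paper) of a first-order low-pass filter, H(ω) = 1/(1 + iωτ). Its phase lag, arctan(ωτ), approaches 90° once the energy-storage timescale τ is long compared to the pulsation period. The period and the τ values are hypothetical numbers chosen purely for illustration.

```python
import numpy as np

# Illustrative sketch (not from the paper): the H ionization zone
# temporarily stores energy like a charging capacitor, so the
# luminosity responds like a low-pass-filtered version of the
# compression signal, H(omega) = 1 / (1 + i * omega * tau).
# tau is a hypothetical thermal storage timescale.

P = 10.0               # pulsation period in days (illustrative value)
omega = 2 * np.pi / P  # angular frequency of the pulsation

for tau in [0.1, 1.0, 10.0, 100.0]:  # storage timescales in days
    # Phase of H(omega): the lag of the output behind the input
    phase_lag = np.degrees(np.arctan(omega * tau))
    print(f"tau = {tau:6.1f} d  ->  phase lag = {phase_lag:5.1f} deg")
```

Running this, the lag climbs from a few degrees toward 90° as ωτ grows, which is the sense in which the ionization zone acts as a low-pass filter. As the paper stresses, though, the exact value of the lag in a real Cepheid depends on the details of the stellar model and requires full simulations.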