Type Ia supernova light curves are characterized by a rapid rise
from zero luminosity to a peak value, followed by a slower quasi-exponential
decline. The rise and peak last for a few days, while the decline persists
for many months. It is widely believed that the decline is powered by the
radioactive decay chain ⁵⁶Ni → ⁵⁶Co → ⁵⁶Fe, but the rates of decline in
luminosity do not exactly match the decay rates of ⁵⁶Ni and ⁵⁶Co. In 1976, Rust,
Leventhal, and McCall [19] presented evidence that the declining part of the
light curve is well modelled by a linear combination of two exponentials whose
decay rates were proportional to, but not exactly equal to, the decay rates for
⁵⁶Ni and ⁵⁶Co. The proposed reason for the lack of agreement between the rates
was that the radioactive decays take place in the interior of a white dwarf star,
at densities much higher than any encountered in a terrestrial environment,
and that these higher densities accelerate the two decays by the same factor.
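In symbols, the two-exponential model can be sketched as follows; the amplitudes
c_1, c_2 and the common acceleration factor a are notation introduced here for
illustration, not taken from [19]:
\[
  L(t) \;=\; c_1\, e^{-a \lambda_{\mathrm{Ni}} t}
       \;+\; c_2\, e^{-a \lambda_{\mathrm{Co}} t},
  \qquad a > 1,
\]
where \lambda_{\mathrm{Ni}} and \lambda_{\mathrm{Co}} are the terrestrial decay
constants of ⁵⁶Ni and ⁵⁶Co; fitting a single factor a to both terms expresses
the hypothesis that the high interior densities accelerate the two decays by
the same amount.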
This paper revisits this model, demonstrating that a variant of it provides
excellent fits to observed luminosity data from six supernovae.
Keywords: Supernova light curves; exponential modelling; radioactive decays;
luminosity data