Wednesday 24 February 2010

Half-life and Health Risks

Imagine radioactive material is accidentally released into the environment. Which is the most hazardous half-life for it to have? A few hours? A few years? Millions of years?

To answer this question we need to think about why radioactivity decreases with time. The simple answer is that every time a nucleus decays and releases a particle (an alpha or beta, say), there's one fewer undecayed nucleus left. For a given isotope, every nucleus has the same chance of decaying each second, no matter how long it's already been around or what its neighbours are doing.

If the chance of decay is high then lots of nuclei decay each second (so lots of radiation is given off), but you quickly run out of undecayed nuclei. This means the half-life is short.

If the chance of decay is low then very few nuclei decay each second (so very little radiation is given off) and there are still lots of undecayed nuclei left a long time later. This means the half-life is long.

The key point is that isotopes with a very long half-life are only very weakly radioactive and isotopes that are very radioactive don't stay radioactive for long.
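
To put rough numbers on that trade-off, here's a toy simulation (in Python, with made-up decay chances rather than real isotope data) in which every undecayed nucleus has the same fixed chance of decaying each second:

```python
import numpy as np

rng = np.random.default_rng(0)

def half_life_steps(decay_chance_per_second, n_nuclei=100_000):
    """Toy model: every undecayed nucleus has the same fixed chance of
    decaying in each one-second step. Count the seconds until half of
    the original sample has decayed."""
    undecayed = n_nuclei
    seconds = 0
    while undecayed > n_nuclei // 2:
        # Decays this second: one yes/no trial per undecayed nucleus.
        undecayed -= rng.binomial(undecayed, decay_chance_per_second)
        seconds += 1
    return seconds

for chance in (0.1, 0.001):
    initial_activity = 100_000 * chance  # expected decays per second at the start
    print(f"chance {chance}: ~{initial_activity:.0f} decays/s at first, "
          f"half-life ~{half_life_steps(chance)} s")

# High chance  -> lots of radiation, but a short half-life (about 7 s here).
# Low chance   -> little radiation, but a long half-life (about 690 s here).
```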

It turns out that the most problematic half-life for the environment is a few decades or so. Isotopes such as strontium-90 (with a half-life of about 30 years) are pretty radioactive and stick around for a time comparable to a human lifetime, which gives them plenty of opportunity to cause genetic damage.
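
For a feel for the numbers, taking the half-life as 30 years, the fraction of a strontium-90 sample left after a given time is just a half raised to the number of half-lives that have passed:

```python
# Fraction of a strontium-90 sample remaining, taking its half-life as ~30 years.
HALF_LIFE_YEARS = 30

for years in (30, 60, 90, 120):
    fraction_left = 0.5 ** (years / HALF_LIFE_YEARS)
    print(f"after {years} years: {fraction_left:.4f} of the sample remains")

# 30 years -> 1/2, 60 years -> 1/4, 90 years -> 1/8, 120 years -> 1/16:
# a significant amount is still around across a whole human lifetime.
```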

In this activity you go forward in time to see how the radioactivity of different samples changes.

Wednesday 17 February 2010

General comments about Radioactivity and Atomic Physics Explained


Please use this post to add any comments or thoughts about Radioactivity and Atomic Physics Explained.

Wednesday 10 February 2010

The Constant Current Misconception


If you have a simple battery and bulb circuit and you add another bulb in parallel, would you say the current now 'splits' at the junction?

Or imagine starting with the same simple circuit and adding a bulb in series. Could you explain the fact that the bulbs are both dimmer by saying that the battery’s energy is now shared between two bulbs?


If you think either of these explanations seems pretty reasonable then you may hold the constant current misconception.

The constant current misconception is the implicit belief that batteries are constant current providers.

In the parallel example there isn't a fixed amount of current to split. When you add the extra bulb in parallel, the current drawn from the battery doubles; it doesn't just split differently.

In the series example the assumption is extended to imply that batteries provide energy at a constant rate. They don't. When the extra bulb is added in series, the battery provides energy at half the rate; it doesn't provide energy at the same rate and then share it out differently.

Batteries are constant voltage providers (as long as you don't make them work too hard) and the current they provide depends on the circuit they are connected in. Any change to the circuit will always change the current.
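
A quick back-of-the-envelope check makes both points, treating each bulb as a fixed resistor (real bulbs aren't quite ohmic, and the 6 V and 12 Ω values below are arbitrary examples, so take the exact numbers loosely):

```python
V = 6.0   # battery voltage in volts (example value)
R = 12.0  # resistance of one bulb in ohms (example value)

# One bulb on its own.
i_single = V / R

# Two bulbs in parallel: each branch sees the full battery voltage,
# so the battery has to supply twice the current.
i_parallel = V / R + V / R

# Two bulbs in series: the total resistance doubles, so the current halves
# and the battery delivers energy at half the rate.
i_series = V / (2 * R)

print(f"single bulb:     {i_single:.2f} A, battery power {V * i_single:.1f} W")
print(f"two in parallel: {i_parallel:.2f} A, battery power {V * i_parallel:.1f} W")
print(f"two in series:   {i_series:.2f} A, battery power {V * i_series:.1f} W")
```

The current drawn from the battery doubles in the parallel case and halves in the series case, and the rate at which the battery supplies energy follows suit.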