About Me

My photo
Science communication is important in today's technologically advanced society. A good part of the adult community is not science savvy and lacks the background to make sense of rapidly changing technology. My blog attempts to help by publishing articles of general interest in an easy-to-read format that avoids mathematics. I also give free lectures at community events - you can arrange these by writing to me.

Saturday, 27 February 2016

Optogenetics - Controlling Neurons with Light - a Way to Understand Brain Circuit Organisation?

What is Optogenetics:  Optogenetics is the combination of genetics and optics to control well-defined events within specific cells of living tissue. It also includes the discovery and insertion into cells of genes that confer light responsiveness.

How does Optogenetics work?: Certain algae respond directly to light: they detect it and move towards it. The question is - can the gene that gives algae this ability be used to control cells in more complex organisms? We are able to ask this question because of advances in genetic technology that allow particular genes to be inserted into the DNA of a virus which, on infecting humans and other animals, transfers the algal gene to the DNA of the host.

Figure from:  http://neurobyn.blogspot.co.uk/2011/01/controlling-brain-with-lasers.html
Optogenetics gives us control over exciting or switching off neurons and provides an effective tool for studying the effects of neuronal activity on the functioning of the brain.

About forty years ago Francis Crick, Nobel Laureate of the Double-Helix fame, had commented that the then existing methods used in neuroscience were either too crude, lacking precision (inserting electrodes in the brain) or too slow (drugs targeting particular cells).  Crick had mused that light might have the properties to serve as a control tool but at the time neuroscientists did not know how to make specific cells responsive to light.
In 1999, Crick stated that one of the major problems of biology is discovering how the brain works: "when we finally understand, at the cellular and molecular level, our perception, our thought, our emotions and our actions, it is more than likely that our views about ourselves and our place in the Universe will be totally transformed."

At present, the best tool to study the brain is fMRI  - it can reveal what various regions of the brain are doing when people respond to a stimulus.  fMRI signals measure the increased levels of oxygenated blood (BOLD) in particular parts of the brain.  Neuroscientists believe that the signals are caused by an increase in the excitation of specific kinds of brain cells. (The two slides at the end of this post describe fMRI method in more detail.  For MRI click here)
Karl Deisseroth pioneered the work in optogenetics at Stanford and showed that neural excitation indeed produces positive fMRI BOLD signals.

Optogenetics holds great promise for the study of brain circuitry and has been used in vivo to record neural activity patterns with millisecond precision and to create a wireless route for the brain. 
The 2014 Nobel Prize in Medicine was awarded to researchers who discovered new types of brain cells making up the brain's positioning system; optogenetics is among the tools now being used to map the function of such cells. http://www.nobelprize.org/nobel_prizes/medicine/laureates/2014/presentation-speech.html

Optogenetics can also be used to address cell behaviour in other parts of the body and for diagnostics and treatment of diseases.  I describe a proposed application of optogenetics to help sufferers of retinitis pigmentosa (RP): an incurable genetic disease that leads to blindness as it destroys rods and cones in the eye. 

A non-pathogenic virus is altered to contain a light-sensing protein (ChR2) from algae.  The virus is then injected into the ganglion cells in the eyes of RP patients.  The idea is that the genetically altered ganglion cells will be sensitive to light, giving back some vision to those afflicted by the progressive disease RP.

Because the eye is naturally exposed to light, it’s the perfect venue for a trial like this one, which seeks to switch the photoreceptive burden from the compromised rods and cones to ganglion cells in the retina.
The ganglion cells receive fewer photons and it is not clear exactly what visual granularity can be achieved here. If the experiment succeeds, the researchers expect that the experimental cohort will get monochromatic vision at very low resolution.   RetroSense CEO Sean Ainsworth said that he hopes the treatment will allow patients to “see tables and chairs” or even read large letters. Grainy, low-resolution monochromatic vision might not sound like much compared with what humans normally perceive, but these efforts are important steps on the road to long-term vision restoration. Rough shapes and gray-scale projections are a far better alternative to total blindness.  The four minute video is also of interest in this context:  http://shows.howstuffworks.com/fwthinking-show/3-ways-we-could-restore-sight-blind-video.htm

In concluding this post, it should be noted that light-based control of ion channels has been transformative for the neurosciences, but the optogenetic toolkit does not stop there. An expanding number of proteins and cellular functions have been shown to be controllable by light. The field is moving beyond proof of concept to answering real biological questions - such as how cell signalling is regulated in space and time - that were difficult or impossible to address with previous tools.
For a comprehensive discussion of optogenetics, please click here.  I quote their abstract:
Fundamental questions that neuroscientists have previously approached with classical biochemical and electrophysiological techniques can now be addressed using optogenetics. The term optogenetics reflects the key program of this emerging field, namely, combining optical and genetic techniques. With the already impressively successful application of light-driven actuator proteins such as microbial opsins to interact with intact neural circuits, optogenetics rose to a key technology over the past few years. While spearheaded by tools to control membrane voltage, the more general concept of optogenetics includes the use of a variety of genetically encoded probes for physiological parameters ranging from membrane voltage and calcium concentration to metabolism. Here, we provide a comprehensive overview of the state of the art in this rapidly growing discipline and attempt to sketch some of its future prospects and challenges.

Saturday, 20 February 2016

A Promising Development - Eternal 5D Data Storage in Fused Quartz

This post involves a number of unit prefixes that not everybody will be familiar with; these are explained in the slide at the end of the post.
Blog Contents - Who am I?

Scientists at the Optoelectronics Research Centre (ORC) at Southampton University have taken a significant step towards solving the problem of archiving large amounts of data.

What Have They Done? -   Developed processes for recording and retrieving digital data by femtosecond laser writing on fused silica. Their portable memory storage has a data capacity of up to 360 terabytes per disc, is thermally stable to 1,000°C and has a virtually unlimited lifetime at room temperature (13.8 billion years at 160°C).
The technology could be used by organisations with big data storage requirements, such as national archives, museums and libraries, to preserve their information and records.
How is big data stored at present and why do we need bigger storage capacity? -  At the moment, the longest-lasting storage technology in the world is the M-Disc, which uses Blu-ray technology to store data for up to 1,000 years.  For personal data storage, flash storage lasts only a few years.
However, most data centres handling large amounts of data use hard-disc drives (HDD), which are expensive for loading data.  HDDs are unsuitable for long-term storage and require data to be transferred about every two years.

The total capacity of data stored has been increasing by about 60% each year and we shall need to manage 40,000 exabytes of data by 2020.  HDD power consumption is of the order of 0.04 watt per gigabyte of data stored.  The power consumption of American data centres alone is expected to reach 140 billion kWh (one kWh is equal to one unit of electricity) per year, costing $14 billion at 10 cents per unit.
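The arithmetic behind these figures is easy to check. A minimal sketch (the hours-per-year conversion and the storage cross-check are my own, not from the ORC work):

```python
# Rough cross-check of the data-centre figures quoted above.
KWH_PER_YEAR = 140e9    # projected US data-centre consumption, kWh per year
PRICE_PER_KWH = 0.10    # dollars ("10 cents per unit")
WATT_PER_GB = 0.04      # quoted HDD power consumption per gigabyte stored

annual_cost = KWH_PER_YEAR * PRICE_PER_KWH
print(f"Annual cost: ${annual_cost / 1e9:.0f} billion")   # $14 billion, as stated

# Cross-check: how much HDD storage would draw that much power continuously?
HOURS_PER_YEAR = 8760
average_power_w = KWH_PER_YEAR * 1000 / HOURS_PER_YEAR    # kWh/year -> average watts
storage_gb = average_power_w / WATT_PER_GB
print(f"Equivalent HDD storage: {storage_gb / 1e9:.0f} exabytes")
```

The cross-check comes out at a few hundred exabytes of spinning storage - self-consistent with the 40,000-exabyte projection once you remember that only a fraction of all data sits in powered American data centres.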
What is needed is a less expensive way to store data which can stay secure for a very long time.  
This is exactly what scientists at ORC have achieved in their new storage device, combining the best of laser and nano-technologies (NT). For an introduction to NT for non-specialists, you can look at my course notes, available here.  Talk-5 deals with the digital revolution.
What is the Technology? -   (I am much obliged to ORC for providing me a clear description of their technology).  Ultrafast laser induced nanogratings in fused quartz, the key of the eternal 5D memory storage system, were first discovered by Professor Peter Kazansky at ORC. These nanogratings exhibit some extraordinary properties such as extremely high thermal and chemical stability, as well as ability to manipulate transmitted light.
In conventional optical media, such as DVDs, data is stored by burning tiny pits into one or more layers of the plastic disc, using three spatial dimensions. When the data-recording ultrafast laser marks the glass, it does not just make a pit. It makes a pit containing the self-assembled nanogratings that are the smallest embedded structures ever produced by light. The orientation (4th dimension) and strength (5th dimension) of these nanogratings serve as two additional parameters, increasing the amount of digital data held per pit. During retrieval, these two extra dimensions also interact with the incoming light, modulating the transmitted light from which the information stored in all five dimensions can be recovered. The estimated ultimate capacity achievable with the 5D data storage technology is 360 TB per disc.
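To see why two extra parameters multiply the capacity, suppose - hypothetically, since the actual level counts are not given here - that each dot can take one of several distinguishable orientations and strengths:

```python
import math

# Hypothetical encoding levels per dot (illustrative only - not ORC's actual values)
orientations = 8    # distinguishable nanograting orientations (4th dimension)
strengths = 4       # distinguishable nanograting strengths (5th dimension)

bits_plain = 1                                   # ordinary pit / no-pit: 1 bit per site
bits_5d = math.log2(orientations * strengths)    # bits per dot with these levels

print(f"Plain optical pit: {bits_plain} bit per site")
print(f"5D dot: {bits_5d:.0f} bits per dot, a {bits_5d / bits_plain:.0f}x gain")
```

With these made-up numbers each dot carries 5 bits instead of 1; combined with writing in many layers, this is how the per-disc capacity climbs into the hundreds of terabytes.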

The recording system uses an ultrafast laser to produce extremely short (femtosecond - a million-billionth of a second), intense pulses of light. The file is written in up to 18 layers of nanostructured dots separated by 5 micrometres in fused quartz.  The self-assembled nanostructures change the way light travels through the glass, modifying its polarization, which can then be read with an optical microscope and a polarizer similar to that found in Polaroid sunglasses.

What is the future? -  As with any new technology, it takes time to reach maturity, and practical demonstration is important.  ORC have already demonstrated the efficacy of 5D data storage by writing some important documents on their glass discs.
The next step is obviously to commercialize the technology for wider uptake.  
The place where I see 5D storage being most useful is data archiving.  I am not sure how expensive the retrieval system will be, given the state-of-the-art laser systems required.

As I have been emphasizing in my publications, the new technologies are progressing rapidly and delivering marvelous new inventions/discoveries.  But, it is in the synergy that the real promise lies - where two or more of the new technologies come together and yield benefits far in excess of what any one of them could have hoped to achieve individually.

Post Script:  When I first read the research as 5D Data Storage, I started to figure out what the five dimensions could be.  Physicists understand the three space dimensions and they are happy to accept the fourth dimension of time.  Space-time form the four dimensions as far as physics is concerned.  Obviously technologists do not follow the same nomenclature and that causes confusion.  3D printing was fine but now we have 4D printing also.  5D data storage uses three dimensions of space and two parameters of nanogratings in fused quartz.  I would have called the system 3D2P storage.  Just a thought - apologies if this does not go down well.  
Prefixes for units

Thursday, 18 February 2016

Why Does Steam Cause More Severe Burns Than Hot Water? - Physics of Thermal Burns

(Click on a slide to view its bigger image)

A question I am often asked is why steam appears to be so much more effective than hot water in causing burns. The majority of thermal burns are caused by momentary contact with a hot agent - water, steam, a hot iron. Our body reacts very quickly to move away, so the contact time is of the order of our reaction time; let us say a tenth of a second, or 0.1 s.  During this short time, heat energy is transferred to the local spot on our outer skin (epidermis) and raises its temperature enough to cause the burn.  Slides at the end of the blog provide information about the structure of human skin.  The reason steam causes severe burns is that it carries much more energy than water - we shall return to this after some preliminaries.

The epidermis starts to get damaged at temperatures above 44 C.  If the temperature of the water is higher, more heat energy is transferred to the tissue and the damage happens more quickly and more severely. One talks of damage to the skin in terms of the degree of burn - a first-degree burn is the mildest, while a fourth-degree burn is severe and life-threatening.  Slides at the end of this blog describe the classification of burns.
For a third-degree burn to happen, the required contact time is as follows:

1 second at 69 C water temperature
2 seconds at 65 C
5 seconds at 60 C
15 seconds at 56 C
Notice that the time to burn is not linear: it falls rapidly as the temperature of the water increases.
For boiling water at 100 C it takes much less than 0.01 s for a burn to happen - a third-degree burn is very serious, and first- and second-degree burns occur at even shorter contact times.
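The quoted contact times fall roughly on an exponential in temperature, so a straight-line fit to ln(time) versus temperature lets us extrapolate towards 100 C. A minimal sketch (the fit is mine, not from the burn-data source, and extrapolating well beyond the data is only indicative):

```python
import math

# Third-degree-burn contact times quoted above: (water temperature in C, time in s)
data = [(69, 1), (65, 2), (60, 5), (56, 15)]

# Least-squares fit of ln(t) = a + b*T, done by hand with the stdlib only
n = len(data)
xs = [T for T, t in data]
ys = [math.log(t) for T, t in data]
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

t100 = math.exp(a + b * 100)   # extrapolated third-degree-burn time at 100 C
print(f"Extrapolated burn time at 100 C: {t100 * 1000:.1f} ms")
```

With these four points the extrapolation gives a couple of milliseconds, consistent with the "much less than 0.01 s" figure quoted above.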
If you splash boiling water droplets on your skin or touch a hot iron, a burn will happen.  The heat energy is rapidly conducted away to the surrounding tissue and this limits the size of the burn.  If you splash a lot of boiling water, the area of contact is greater, heat conduction away from the central part is poorer, and the resulting burn is more serious in the centre.  Hot oil causes even more serious burns because oil tends to be hotter (greater than 100 C) and is sticky - it does not fly off the skin as rapidly as water does.

Before looking at scalding by steam, let us consider the threshold energy that can cause burns.  I was able to find data for arc-flash-induced second-degree burns.  Since skin burns depend mainly on the temperature increase of the skin, the data can be used to get some idea of the amount of heat required to cause a second-degree burn by water as well.  The IEEE P1584 and NFPA 70E standards state that a second-degree burn is possible from exposure of unprotected skin to an electric arc flash above an incident energy level of 5 J per square cm.

The figure shows the time to burn for different energy fluxes (the amount of energy delivered to one square cm of the skin per second).  The higher the energy flux, the shorter the time to a second-degree burn.

Now, we are ready to talk about steam and hot water burns.
The idea here is that both hot water and steam deposit energy on the skin but the rate of this energy deposition is greater for steam than it is for water because steam carries much more energy than hot water.  To explain this, we need to do some interesting physics.
Think what happens when you heat water in a container: 
Water temperature increases steadily until it reaches 100 C. The temperature then stays at 100 C while water is converted to steam - the conversion continues until all of the water has changed into steam, with the temperature staying at 100 C throughout.  Where is the heat energy going?  It is being used to break the bonds between water molecules and set them free. This energy is called the latent heat of vaporisation of water - latent because the energy is hidden and has not produced a temperature change. This is shown in the slide.

The interesting thing is that raising the temperature of 1 g of water from 0 to 100 C requires 418 J of energy, but converting 1 g of water at 100 C to steam requires 2260 J - more than five times as much.

If water or steam touches the skin, its temperature drops very quickly to about 40 to 50 C (skin temperature is about 36 C), and 1 g of water at 100 C gives up 209 J of energy to the tissue.  Steam gives up the latent heat plus the 209 J, i.e. 2469 J of energy - almost 12 times as much. But we must remember that steam is much lighter than water and we shall receive only a small amount of steam on the skin.
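The energy book-keeping above is a two-line calculation:

```python
SPECIFIC_HEAT = 4.18   # J per g per C, liquid water
LATENT_HEAT = 2260     # J per g, latent heat of vaporisation at 100 C

cool_from, cool_to = 100, 50   # hot water (or condensed steam) cools to ~50 C on skin
water_energy = SPECIFIC_HEAT * (cool_from - cool_to)   # energy from cooling alone
steam_energy = LATENT_HEAT + water_energy              # condensation first, then cooling

print(f"1 g of hot water delivers {water_energy:.0f} J")
print(f"1 g of steam delivers {steam_energy:.0f} J "
      f"({steam_energy / water_energy:.1f} times as much)")
```

This is why a wisp of steam can do far more damage, gram for gram, than a splash of boiling water.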

I think the actual situation is as follows:
Boiling water is a mixture of water at 100 C and a lot of steam.  Burns due to boiling water are exacerbated by the presence of this steam.  Water at a lower temperature - say 70 or 80 C - has no steam mixed with it, and a drop of it will not cause as serious a burn as a drop of boiling water.

Structure of the Skin

 Classification of Thermal Burns

UPDATE (February 2019):  A recent article in Medscape on thermal burns is a must read for its detailed medical aspects presented in a way that is easily understood by non-medics:




Friday, 12 February 2016

Gravitational Waves - Theory of General Relativity - Background and Historical Perspectives


Everybody is talking about the successful detection of gravitational waves (GW) at aLIGO (advanced Laser Interferometer Gravitational-Wave Observatory). 
Einstein's theory of general relativity has passed all tests.

The discovery of a binary pulsar in 1974 by Taylor and Hulse, and its subsequent timing, confirmed the emission of GW in exact accordance with the predictions of general relativity.  It is the direct observation of GW by aLIGO that has now provided the final and most stringent of tests.

The discovery itself has been covered extensively in many places - I provide some of the links
Abbott et al (LIGO Collaboration) - Original Research Paper
Physics World
Science Daily

The excitement of this discovery is being felt throughout the world.  As happens with many scientific discoveries, the general public tends to have short memories and the research is forgotten very quickly.  The theory of relativity is a very difficult concept to digest - even Einstein had great difficulty convincing his fellow scientists of his theory. (Einstein was awarded the Nobel Prize in 1921 not for his theory of relativity but for explaining the photoelectric effect - such was the difficulty scientists had in grasping what Einstein was telling them.)  It is important that some historical perspective is provided for the discovery.  The UK newspaper Independent has an excellent introduction to the background of the discovery of gravitational waves.

I had published my PowerPoint slides relating to a community outreach course on Einstein's Theories of Special and General Relativity and this is an excellent source for non-specialists who wish to learn more about Einstein's biography and his theories.

In the following I reproduce the slides from my course that relate to the 1974 discovery of binary pulsars by Taylor and Hulse, for which they were awarded the 1993 Nobel Prize. Their observations confirmed the emission of gravitational waves (GW) and provided an important landmark.  Some slides explaining the physics of black holes are also included.

In 1919, Eddington provided the first experimental test of general relativity when he observed the bending of light from distant stars by the gravitational field of the Sun - Newton's theory of gravitation simply cannot explain this effect, but general relativity predicted the observed deflection exactly.

For the pulsars, the effect is 100,000 times greater than for Mercury's orbit, and Taylor and Hulse's observations were really wonderful in establishing the general theory on a firm footing.

Since Taylor and Hulse's discovery of the first binary pulsar system, other binary pulsars have been observed.  The next slide describes a pair with a four-times-larger annual advance of periastron (the point on the orbit where the distance between the stars is least).


APS has free access to important papers on general relativity - for how long, I do not know.

Tuesday, 9 February 2016

Future of Privacy - What is Privacy and Why it is Important? (Part 1)

Blogger Profile - Who am I?  (Send comments to ektalks@yahoo.co.uk)

Privacy is an emotive topic and in these days of 'connected people', any mention of privacy draws strong reactions - and not without good reason.   Over the last three decades, accelerating technological advances have transformed the way we interact with fellow humans. Data flow, and hence information flow, has increased beyond expectations.  Humans have always adapted to new situations, but it takes time - maybe a decade or more - which is too slow to cope with the current rate of increase in information flow.  This has created strains between different generations.  Moral and ethical values are no longer a given; each individual struggles to find his or her own codes - often not very well.  The confusion leads to behavioural problems and affects the cohesion of our societies. Naturally, with weaker societal benchmarks, emphasis has shifted to individualism - looking after oneself. This has created its own paradoxical situation, because individualism conflicts with the erosion of privacy that accompanies the tide of information flow.

Before going further, I would like to understand what one means by Privacy.  
I like the punchy statement - Privacy is the right to be left alone. 
Oxford Dictionary (OD) - Privacy is freedom from intrusion or public attention; avoidance of publicity. 
UN Declaration of Human Rights says:  No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor or reputation.  Everyone has the right to the protection of the law against such interference or attacks.
All very 20th century!
I find the 'protection of the law' part ironic.  Law has traditionally protected mainly the rich and, as we shall see in my forthcoming publication about the 'Future of Law', the situation is only going to get much worse for the common man.

The concept of privacy has changed over time and is being continually redefined.  In older societies, when people lived in small communities, everybody in the village knew everything about everyone else.  Houses were small and combined families were common. The OD definition of 'freedom from intrusion or public attention' just does not hold for ancient societies. Cities and crowded places, paradoxically, gave people more scope to be on their own - everybody is too busy to worry about fellow beings - and some sort of privacy (OD style) became possible.
In the modern context, Privacy sets a goal of what is desirable but seldom achievable.  The goal is laudable as freedom from intrusion brings its own benefits - one can feel relaxed and happy and could possibly be more original and creative.  Having said this - I think some of the greatest contributions in art, music, literature, science... were made by people who were unhappy and stressed.  Such are the complexities of the human mind!

What does the public think about the threat they feel from different organisations? - the slide shows the result of a recent survey: (Click on slide to see bigger image)
Something really interesting is apparent from the survey. Fewer than 3% of the people surveyed trust their government to keep data about them secure. Private business is trusted far more.  This is totally understandable and we shall have a lot more to say about it in Part 3.

Let us first look at the reasons why Privacy is considered important.  As we shall see later, rapid advances in technology will result in a wholesale redefinition of Privacy. Even now, some people, mostly government security czars, dismiss Privacy as unimportant for people who have nothing to hide.  
Daniel Solove has explained why Privacy matters.  Let us look at some of his reasons:
a.  Personal Data:  The more someone knows about us, the more power they can have over us. It can be used to affect our reputation, influence our decisions, shape our behaviour. It can be a tool to exercise control over us.
b.  Privacy enables people to manage their reputation: How we are judged by others affects our opportunities, friendships and overall well-being. Even after knowing the truth, people judge badly, they judge in haste, they judge out of context, they judge without hearing the whole story, and they judge with hypocrisy.  Privacy helps people protect themselves.
c.  Privacy helps people to establish appropriate social boundaries with others in society.
d.  Privacy ensures trust in relationships:  In professional relationships, e.g. with doctors, lawyers.., trust is key.  Trust broken in one relationship makes it more difficult to establish trust in new relationships.
e.  Privacy is key to freedom of thought - be it exploring ideas outside the mainstream, ideas that family and friends dislike, or political activity.
f.  Privacy nurtures the ability to change and have second chances without being shackled by past mistakes - it allows people to reinvent themselves.
g.  Privacy matters because one does not have to explain or justify oneself all the time - it can be a heavy burden if we constantly have to wonder how everything we do will be perceived by others who might lack complete knowledge and/or understanding.

Of course, absolute privacy is also undesirable - that would be isolation from society.  Humans are gregarious by nature; they love company and readily talk to others about personal matters.  Living in a society requires voluntarily surrendering some of your privacy, and most people are comfortable with this. But, as the survey indicates, they are also concerned that our institutions - companies and governments - lack the ability, means or willingness to keep their private data secure.  We shall look at this in more detail in Part 3.

The reason privacy is a 'hot' topic just now is the latent ways in which people's privacy is compromised, with apparently no control over it.  New advances in digital and nano technologies allow effective collection and manipulation of personal information.  Private information given in good faith in one place can be combined with other information about you to generate a loss of privacy that you never agreed to.  In fact, it might be fair to say that with digital information flow, privacy may be impossible to defend.  We shall look at the question of physical and digital privacy in Part 2.

A development in technology that is hailed as the new paradigm is the arrival of the Internet of Things (IoT).  This promises to deliver a connected world. Implications of IoT for Privacy are far-reaching and somewhat frightening - particularly in terms of lack of security and making people more vulnerable to cyber crimes. We shall look at this in Part 4.  

Friday, 5 February 2016

How Old is the Milky Way and how do we know that?


(This publication is somewhat more advanced than the usual outreach entry.)
Please click on a slide to see its bigger image

In previous blogs, I discussed the use of the radio-isotope dating method for determining the age of the Solar System (and hence the age of the Earth), and also the age of the oceanic crust, which established the theory of plate tectonics (continental drift) on a firm footing.  Radio-isotope dating has really helped geologists in their struggle to understand the history and structure of the Earth. Radio-isotope dating (RD) can also be used to provide an age for our galaxy, the Milky Way.
The origin of the MW is closely related to the origin of the Universe, which has a definite time of creation - the time of the big bang.  The age determined for the MW is similar to the currently accepted age of the Universe, and it is generally accepted that galaxy formation started within about 200 million years of the big bang.  The MW is a large, diffuse structure in which stars are still being created.  Therefore, the age of the MW is best determined by finding the oldest stars and measuring their ages.
When we talk about ages that are billions of years old then it is natural to ask what type of clocks do we use to measure such vast spans of time and how reliable these measurements are.  In this blog, we shall have a critical look at this.
In the case of the Milky Way (MW), the situation is actually quite good. Ages measured by RD are complemented by several other methods that rely on completely different physics and are independent determinations.  Unfortunately, the history of determining an age for the MW is full of inconsistencies and confusion.  However, over the past 20 years or so, the situation has improved remarkably and good agreement is found among the results from different methods. I feel that a consensus value of 12.5 to 13.5 billion years for the age of the oldest stars in our galaxy has been obtained. 

First, let us learn briefly how the Milky Way looks.  It is a large spiral galaxy with a central bulge.  Our Sun is located in the Orion spiral arm about 25000 light years (ly) from the centre of the MW.

Outside the plane of the Milky Way there are globular clusters, which contain some of the oldest stars - also called halo stars.  Stars of all sizes were formed in a globular cluster at the same time and have evolved according to their formation mass.  The heaviest stars have the shortest lives, and their evolutionary path takes them to a supernova stage on a time scale that can be as short as a few million years. Smaller-mass stars evolve much more slowly and can survive for much more than 20 billion years.
It is thought that the Milky Way (MW) disk and the globular cluster (GC) stars were formed from a primordial gas cloud. The oldest globular clusters coalesced early on from small-scale density fluctuations in the cloud.  Subsequently the cloud collapsed settling into the disk that is the Milky Way Galaxy with the globular star clusters populating a roughly spherical halo in our galaxy.  This is demonstrated in the slide

Star formation in the Milky Way continues to this day and Open Clusters are regions where new stars are being born. When we talk about the age of the Milky Way, we refer to the oldest objects in it and that is why there is so much interest in determining the age of the old globular clusters to indicate the time when star formation started in the outer regions of the Milky Way. Primordial gas consisted mostly of hydrogen and helium.  Elements up to iron were synthesized in stars with the more massive stars creating elements heavier than iron during their supernova stage.  The oldest stars are deficient in elements heavier than helium (their metallicity is low) with the younger stars progressively increasing in metallicity as the gas clouds continue to acquire heavier elements from the demise of big stars (heavy stars have very short lives of tens of millions of years).

The next slide shows a 2016 study of the time of star formation in different regions of the disk of the Milky Way. Over 70000 red-giant ages are plotted to show that star formation started near the centre of the disk and spread to outer regions of the disk.
The youngest stars are blue; the oldest are red.
For determining the age of the globular clusters, the faint, small-mass stars are the best candidates - they have the longest lives and have been shining since their formation with little or no disturbance.  During their formation stage, they also acquired elements, including radioactive actinides, that were synthesized in supernova explosions of massive stars or in other catastrophic astrophysical events such as neutron-star mergers.  More of this later.

Radio-Isotope Dating (RD)
Our Sun is a relatively small star, and the Solar System formed 4.56 billion years ago from the gravitational collapse of a low-density cloud of matter - the Solar Nebula. The Solar Nebula would have been enriched in elements synthesized in catastrophic events in neighbouring regions up until the formation of the Sun (which contains more than 99% of the original mass of the nebula) - at which point it became a closed system, with no further addition of elements. The planets, asteroids and meteorites likewise had their elemental composition frozen 4.56 billion years ago.
In RD, the nuclides of interest are those which decay with a half-life of the order of the age of the Milky Way - a few billion years (by). From the slide, we notice that Th-232 (half-life = 14 by) and U-238 (half-life = 4.47 by) are useful actinides for dating purposes.
In a supernova explosion, these two nuclides are created by the r-process. In the r-process, the neutron flux is so high that nuclides rich in neutrons (lying almost along the neutron drip-line) are formed in a very short time. The highest-mass nuclide formed is Cf-254, which fissions spontaneously with such high probability that further neutron absorption does not have time to occur.

As soon as they are produced, these neutron-rich nuclides start decaying at a rate determined by their half-lives (the half-life is the time taken for half of a given amount of a nuclide to decay).

First, we shall estimate the age of the MW in a very simple model which assumes that the nucleosynthesis of Th-232 and U-238 happened in one event only. The production of actinides by the r-process can be calculated from nuclear physics and the physical conditions prevailing at the time. Truran gives the r-process production ratio Th-232/U-238 in a supernova event as 1.65 ± 0.20. Both Th and U have been decaying since their formation, and the present ratio is 3.6.

This model gives a lower bound on the age of the galaxy of 9.6 billion years (see slides).
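The arithmetic behind a single-event estimate of this kind can be sketched in a few lines of Python. This is a minimal illustration, not the calculation in the slides: with the round numbers quoted above it returns an age somewhat below the quoted 9.6 billion year bound, since the slides include details not reproduced here. The principle, however, is the same: the Th/U ratio grows with time because the shorter-lived uranium decays away faster.

```python
import math

def single_event_age(ratio_now, ratio_produced, half_life_th, half_life_u):
    """Age (in by) since a single r-process event, from the Th-232/U-238 ratio.

    The ratio grows with time because U-238 (half-life 4.47 by) decays
    faster than Th-232 (half-life 14 by):
        ratio_now = ratio_produced * exp((lam_u - lam_th) * t)
    """
    lam_th = math.log(2) / half_life_th   # decay constants, per billion years
    lam_u = math.log(2) / half_life_u
    return math.log(ratio_now / ratio_produced) / (lam_u - lam_th)

# Quoted inputs: production ratio 1.65 (Truran), present ratio 3.6
age = single_event_age(3.6, 1.65, 14.0, 4.47)
print(f"single-event age ~ {age:.1f} billion years")   # prints ~ 7.4
```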

Nucleosynthesis in a single supernova is a very simple, zeroth-order but instructive model. There is ample evidence that nucleosynthesis has in fact proceeded at a roughly uniform rate over the galaxy's history. Using this more realistic approach, Truran arrives at an age of the Milky Way of 12.8 ± 3 billion years.

A word of caution must be added here. The theoretical model used to calculate the production ratio P(Th/U) in the r-process is not totally secure. Production of neutron-rich nuclei in the r-process follows a path far removed from the valley of nuclear stability, while the parameters of the nuclear model are fitted to measured properties (fission probabilities, neutron-capture and beta-decay rates) of accessible nuclei. Extrapolating such parameters can be an uncertain business. Nicolas Dauphas has used the Th/U ratio in meteorites, together with data from low-metallicity stars in the halo of the Milky Way, to constrain the production ratio P(Th/U); he obtains a value of 1.75 (+ 0.005; - 0.01) and calculates an age of the Milky Way of 14.5 (+2.8; -2.2) billion years. This age is consistent with the currently accepted age of the Universe of 13.7 billion years.

The uncertainty in the RD age is large, and we now look at other methods which have been used to estimate the age of the Milky Way.

Cooling Rates of White Dwarfs
A white dwarf is the core of a collapsed star. White dwarfs cool slowly and fade away over a time scale of 10 billion years or more. The Universe and the Milky Way (MW) are not old enough for many white dwarfs to have cooled off completely to become invisible black dwarfs. White dwarf temperatures can therefore be used as "cosmic clocks" for an independent estimation of the age of the Milky Way. Where do we find old white dwarfs?

Located 7,000 light-years away in the direction of the constellation Scorpius, M4 is the nearest globular cluster to the Earth. Globular clusters like M4 were born early in the history of the Milky Way. M4 is estimated to be about 13 billion years old, and all of its stars that began with 80% or more of the Sun's mass have already evolved into red giants and then collapsed into white dwarfs. (Our Sun will not become a white dwarf for another five billion years.)
The M4 globular cluster may contain about 40,000 white dwarfs. Owing to their small surface area, white dwarfs are extremely faint, but by 1995 the Hubble Space Telescope had already detected more than 75 white dwarfs in a small area of M4.

A white dwarf retains much of the original mass of its parent star but has contracted into an extremely dense object about the size of the Earth. A teaspoonful of white dwarf material would weigh more than a ton.

The mechanical structure of a white dwarf is sustained by the pressure of degenerate electrons, and white dwarfs generate no energy by fusion reactions. This makes the physics of their cooling particularly straightforward.
Because of its small size, high density and initially high temperature, a white dwarf takes more than 10 billion years to radiate all of its residual heat into space. The science of white dwarf cooling is reasonably well understood, and from the spectral profile and temperature a white dwarf's age may be estimated.
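To give a flavour of how a luminosity measurement translates into an age, the classic Mestel cooling law says the cooling time grows as (M/L) to the 5/7 power. The sketch below uses a textbook normalization for a carbon-core white dwarf; this is an illustrative approximation on my part, not the detailed cooling models (including effects such as crystallisation) used in the actual studies.

```python
def mestel_cooling_age(mass_msun, lum_lsun):
    """Approximate white-dwarf cooling age in years, from Mestel's law.

    t_cool ~ 8.8 million years * (M/Msun)**(5/7) * (L/Lsun)**(-5/7)
    -- a textbook scaling for a carbon core, ignoring crystallisation
    and other refinements: the fainter the dwarf, the older it is.
    """
    return 8.8e6 * mass_msun**(5 / 7) * lum_lsun**(-5 / 7)

# A typical 0.6-solar-mass white dwarf that has faded to about
# 1/30,000 of the Sun's luminosity has been cooling for ~10 by:
age_years = mestel_cooling_age(0.6, 10**-4.5)
print(f"cooling age ~ {age_years / 1e9:.0f} billion years")
```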
In 2004, Harvey Richer and colleagues measured 600 white dwarfs in globular clusters and found that the oldest were 12.7 ± 0.7 billion years old. The stars which collapsed to form these white dwarfs must have taken at least a few hundred million years to complete their own life cycles, which suggests that the age of the globular clusters is in the region of 13 billion years.
It is interesting to extend this study to ask whether there is an age difference between the formation of the globular clusters and the formation of the Milky Way's galactic disk. Brad Hansen and colleagues have found that the faintest white dwarfs visible in M4 are about 10 times fainter than the local galactic-disk white dwarfs. This demonstrates a significant age difference between the Galactic Disk (7.3 ± 1.5 billion years) and the halo globular cluster M4 (12.7 ± 0.7 billion years).

Main Sequence Turn-Off Stars in Globular Clusters

In a 2003 special issue on globular clusters, Krauss and Chaboyer describe how the ages of the oldest metal-poor stars in the halo of our galaxy yield a firm lower limit on the age of the galaxy - and thus also on the age of the Universe.

How stars evolve is very well understood, and the physics of stellar evolution can confidently explain the life cycle of stars of different initial masses - in particular, how their luminosity (total power radiated) and surface temperature change with time. The most robust prediction of the stellar-evolution model is the time a star takes to exhaust the supply of hydrogen in its core - its time on the main sequence - before advancing beyond the turn-off point.
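The idea can be illustrated with a back-of-envelope mass-lifetime scaling. The exponent and the 10-billion-year normalization for the Sun below are rough, illustrative values, not the output of a real stellar-evolution code; in an old cluster, the turn-off sits near the mass whose main-sequence lifetime equals the cluster's age.

```python
def main_sequence_lifetime(mass_msun, exponent=2.5):
    """Very rough main-sequence lifetime in billions of years.

    Uses t ~ 10 by * (M/Msun)**(-exponent), a back-of-envelope scaling
    from the Sun's ~10 by main-sequence life; the exponent is usually
    quoted between about 2.5 and 3.5. Massive stars burn out fast.
    """
    return 10.0 * mass_msun**(-exponent)

# A star slightly below a solar mass lives roughly a cluster age;
# a 2-solar-mass star is long gone from a globular cluster:
print(f"{main_sequence_lifetime(0.9):.0f} by")   # ~ 13 by
print(f"{main_sequence_lifetime(2.0):.1f} by")   # ~ 1.8 by
```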

Globular clusters are collections of thousands of stars that were born at the same time and lie at almost the same distance from the Earth. This makes interpretation of the luminosity and temperature data of globular-cluster stars much more straightforward.

Krauss and Chaboyer find the best-fit age of the oldest globular clusters to be 12.6 billion years, with a 95% confidence lower limit on their age of 10.4 billion years and an upper limit of 16 billion years.

To keep this blog post to a reasonable length, I shall conclude this very interesting subject here. There is a great deal of exciting material to explore further. For example:

All methods give an upper bound on the age of the Milky Way that is larger than the age of the Universe. The Universe must have formed first, so the numbers do look paradoxical in some ways. This situation has persisted for about 100 years: at one stage in the early 20th century, the Earth was being measured to be 10 times older than the then accepted age of the Universe. What should we make of the present situation?
Even if the MW is a few hundred million years younger than the Universe, our galaxy must have been one of the first to form. Yet even some of the oldest halo stars observed contain actinides, which can only have been synthesized in an earlier catastrophic astrophysical event such as a supernova; such stars therefore cannot be the very first stars.
The age of the Universe depends on the value of the Hubble constant - a value that has changed by a large factor over the past 100 years.
The present situation appears much more settled, but there are niggling questions and doubts about whether we have finally understood the evolution of galaxies and stars properly.
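To illustrate the point about the Hubble constant: the simplest age estimate for the Universe is just the reciprocal of the Hubble constant (a sketch that ignores the slowing and speeding up of the cosmic expansion). Below, an illustrative modern value of 70 km/s/Mpc is compared with Hubble's original 1929 value of roughly 500 km/s/Mpc.

```python
KM_PER_MPC = 3.0857e19        # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_by(h0_km_s_mpc):
    """Hubble time 1/H0, converted to billions of years."""
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SECONDS_PER_YEAR / 1e9

print(f"{hubble_time_by(70):.1f} by")    # ~ 14.0 by
print(f"{hubble_time_by(500):.1f} by")   # Hubble's original value: ~ 2 by
```

With the early value of 500 km/s/Mpc, the Universe came out younger than the radiometric age of the Earth - the century-old paradox mentioned above.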

Please send your comments to ektalks@yahoo.co.uk