About Me

My photo
Science communication is important in today's technologically advanced society. A good part of the adult community is not science savvy and lacks the background to make sense of rapidly changing technology. My blog attempts to help by publishing articles of general interest in an easy-to-read and easy-to-understand format, without using mathematics. You can contact me at ektalks@yahoo.co.uk

Tuesday, 24 February 2026

FreshWater Bankruptcy Is Here - Will Soon Be Impossible to Reverse - Is It Already Too Late?

Category:  Community Education

"No water, no life.  No blue, no green.  No ocean, no us"          ... Sylvia Earle 

Freshwater bankruptcy is a state where water systems can no longer be restored to their previous healthier baselines due to over-extraction and pollution.                          ... United Nations


Ten years ago, I analysed the global water crisis in detail.  This feature is prompted by the recent UN Report on the dire state of freshwater availability in many regions of the world.

Freshwater is one of the four pillars essential for survival - the others are food, energy and the climate. Humans have adversely impacted all of them - in a big way - and we have been doing so for at least 200 years.  Initially, developing countries and drought-prone areas are affected the most, but this is becoming a global problem and all countries will eventually be severely affected.

Freshwater represents one of the many 'nature services' provided to humans free of charge. Nature's ecosystems have an enormous ability to repair themselves and reliably supply us with life-sustaining services (Appendix 1 presents the hydrological (water) cycle to explain this process). It is estimated that nature's ecosystem services are worth $125 trillion annually.  The freshwater system in particular has been most adversely affected and has suffered some irreversible damage - for example, depleted & contaminated aquifers, dried wetlands, melted glaciers, land subsidence etc.  We have now reached a stage where these systems can never be restored to their baseline condition.

This article discusses some of the issues pertaining to freshwater availability.


The amount of freshwater (2.5% of total water) on Earth is fixed.  About 30% of that freshwater is in the ground.  Most people think of rivers as the source of freshwater, but rivers hold only about 2,100 km^3 - a mere 0.006% of the total freshwater.  The slide shows how water is used by different sectors:
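For readers who like to check the numbers, here is a quick back-of-the-envelope sketch of these fractions. The total volume of water on Earth (about 1.386 billion km^3) is an assumption on my part - a commonly cited estimate not given in the article itself:

```python
# Back-of-the-envelope check of the freshwater fractions quoted above.
# Assumed: total water on Earth ~1.386e9 km^3 (a commonly cited estimate,
# not stated in the article).
total_water_km3 = 1.386e9
freshwater_km3 = 0.025 * total_water_km3   # 2.5% of all water is fresh
groundwater_km3 = 0.30 * freshwater_km3    # ~30% of freshwater is underground
rivers_km3 = 2100                          # water held in rivers (from the article)

river_share_pct = rivers_km3 / freshwater_km3 * 100
print(f"Freshwater:  {freshwater_km3:.3g} km^3")
print(f"Groundwater: {groundwater_km3:.3g} km^3")
print(f"Rivers as a share of freshwater: {river_share_pct:.4f}%")
```

The river share comes out at about 0.006% of all freshwater, matching the figure quoted above.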

Demand for freshwater has been increasing rapidly for several decades - such demand arises from personal use, agriculture and industry. As the human population and living standards rise, more and more water is demanded, and it is understandable that limits are reached when dealing with a finite resource like freshwater. The slide shows how water usage increased during the 20th century.


It is worth looking at some numbers projected for future water demand.  With the world population approaching 10 billion in 2050 and global GDP doubling to 200 trillion US$, demand for water-intensive foods such as dairy, eggs and meat is expected to increase significantly. Water demand for manufacturing is also expected to increase by 400% over year-2000 levels.  The OECD projects that global freshwater demand in 2050 will be 55% higher than in the year 2000 - an extra 2,000 km^3 of freshwater to be supplied every year.
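The OECD numbers above are internally consistent, as a small calculation shows - an extra 2,000 km^3 on top of a 55% rise implies a year-2000 baseline of roughly 3,600 km^3 per year:

```python
# Implied year-2000 baseline from the OECD projection quoted above:
# demand in 2050 is 55% higher than in 2000, an extra ~2,000 km^3/year.
extra_km3 = 2000
increase = 0.55

baseline_2000 = extra_km3 / increase          # km^3/year in 2000
demand_2050 = baseline_2000 * (1 + increase)  # km^3/year in 2050

print(f"Implied demand in 2000: {baseline_2000:.0f} km^3/year")
print(f"Projected demand in 2050: {demand_2050:.0f} km^3/year")
```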

Many major renewable freshwater resources (rivers) are at the limit of what they can supply - the Colorado, Ganges, Nile, Tigris-Euphrates and Yellow River are considered 'closed': all their annually available renewable freshwater is already committed. How will the increased future demand be met?

Scarcity, Stress, Crisis and Bankruptcy - What is the difference?  It is common to talk about freshwater scarcity and stress - we have heard these terms often enough, and particularly in OECD countries they fail to convey how serious the problem is.  Describing the situation as a crisis is more potent - a crisis is an emergency, an unstable period characterized by high stress, danger and uncertainty, but a short-term one. With proper management and immediate action, a crisis may be reversed - and this is what humans think they have been doing to solve the global freshwater crisis over the past few decades - unsuccessfully.
What happens if the crisis never ends?

Bankruptcy is more familiar in the context of finance, where it refers to the situation when an individual cannot meet their financial obligations - outgoings cannot be covered by income and savings. The UN Report uses the term water bankruptcy to describe the current global situation, as it is obvious that we are no longer able to replace the amount of freshwater being used.  The income (water from rivers and rainwater) is insufficient to meet demand, so the shortfall is met by withdrawing ever greater amounts from our savings (groundwater aquifers), which are being depleted at an alarming rate.  It is important to realise that groundwater takes thousands of years to accumulate, and most groundwater sources are replenished at rates nowhere near those required to maintain their current levels.  The situation is also irreversible - once the savings are gone, there will be no way to rebuild them on any sensible timescale.  Freshwater bankruptcy is the situation in which the income (renewable sources - rivers) is inadequate and the savings (depleted groundwater aquifers) cannot be replenished.  The world cannot live off its savings (groundwater) to cover the shortfalls - things can keep going for a while, but we are quietly moving toward collapse.
There is no escaping the conundrum - the world is in a 'post-crisis' state of failure-to-manage.

The Situation is Worse Than It Appears: There are many factors that threaten the quantity & quality of future freshwater.  A major driver is climate change - along with increasing pollution, it will have a serious impact on the availability of usable water.
With worsening climate change, weather and rainfall patterns will change around the world - droughts will become more common in some places and floods in others.

Glaciers are Melting: Glaciers and ice caps store most of the world's freshwater and are rapidly melting - within a few decades these 'water towers' will supply much reduced flows, affecting at least 2 billion people who rely on rivers fed by meltwater. Even under the 1.5C scenario (considered too optimistic), 50% of the glaciers are expected to disappear by 2100, with peak melting between 2035 and 2050.  The loss of glaciers is likely to be much more rapid - a 4C rise would see 83% of the glaciers disappear by 2100.

Sea Levels are Rising:  Sea levels are rising due to global warming.  This causes higher tides and stronger storm surges, pushing larger amounts of salty seawater inland into coastal areas.
Saltwater intrusion contaminates coastal aquifers, rendering the groundwater unsuitable for drinking and irrigation.  Higher sea levels drive saltwater into rivers, estuaries and wetlands, damaging freshwater ecosystems, contaminating water sources and weakening infrastructure.  A good example is the delta region of Bangladesh, where sea level rise is accompanied by land subsidence, amplifying these adverse effects.  Seawater also causes arsenic to leach from minerals.  Some 20% of Bangladesh's land area has been affected by increased salinity and the release of arsenic into groundwater aquifers, already compromising the water supply of 78 million people.  Miami and other delta regions are also being affected by sea level rise, as described here.

Pollution is Increasing:  Increasing chemical, nutrient and bacterial contamination of both surface water and groundwater is a major driver of global water scarcity.  Even if the total volume of water stays constant, pollution reduces the amount of usable water.
Chemical Pollution: Pesticides, fertilizers, industrial waste, untreated sewage and plastics contaminate water systems, making the water unsuitable for human consumption or agricultural irrigation.  Additionally, nitrogen and phosphorus from fertilizers trigger algal blooms which deplete oxygen levels and kill aquatic life.
Pollutants also leach into groundwater aquifers, making them unusable for human consumption.

Agriculture Has Been Expanding: Feeding an increasing population focused on ever-better living standards requires an ever-bigger agricultural sector, with correspondingly greater demands on freshwater supplies.  Globally, this has been highly damaging in many ways.

For example, wetlands (often called the Earth's kidneys) covering an area the size of the European Union have been drained to create farmland - 35% of global wetlands have been lost over the past 50 years.  Wetlands filter pollutants out of water - their loss stops this natural filtration process, leading to increased water contamination.  Wetlands soak up water during wet periods and release it during the dry season - their loss makes droughts more severe.  Coastal wetlands (mangroves) protect coastal farmland from storm surges and saltwater intrusion.  Wetlands are also important breeding grounds for fish and habitats for pollinators; their loss reduces the fish, bee and insect populations essential for agriculture.  Loss of wetlands undermines agricultural sustainability and food security.

Freshwater Lakes are Shrinking Rapidly:  Lakes are important reservoirs of freshwater - they hold 0.26% of total freshwater but 87% of the surface freshwater.  Several major freshwater lakes have experienced dramatic water loss in the past 50 years - losses driven by water extraction for agriculture, human consumption and climate-driven evaporation. Here are some examples:
The Aral Sea:  Once the fourth-largest inland body of water, it had largely dried out by 2014 after its feeder rivers were diverted to grow cotton.
Lake Chad:  Once one of Africa's largest lakes, it has shrunk by 90% in the last 50 years - from 25000 km^2 in 1963 to 2000 km^2 in 2015 due to reduced rainfall and high population water demand.
Lake Urmia:  Has lost 80% of its volume since the 1970s.  The shrinkage is due to the damming of rivers, intensive irrigation, and climate change.
The Dead Sea:  The water level in the Dead Sea has been falling by more than one meter per year for the past 50 years due to diversion of the Jordan River and other industrial demands. 
 
Besides increasing demand for freshwater, an important reason for shrinking lakes is climate-induced evaporation from their surfaces.

How to Manage Water Bankruptcy: Before making any decisions about freshwater use, it is important to accept that the world is facing an irreversible, systemic crisis rather than a temporary shortage.  We also have to manage water demand within the new reality that less water is available now than in the past.
Put another way:

(1)  We are in deficit:  We are now using more water than the ecosystem can replace, and
(2) The deficit is getting bigger:  Natural systems that supply freshwater have been damaged and continue to be degraded further. It is not possible to supply freshwater at historic levels. 

Good, strong and focussed management is required to avoid the dire consequences confronting humanity.  Let us discuss what may be possible:

1. Overuse and Wastage:  We cannot continue to use water at the current levels.  There is too much wastage with insufficient emphasis on efficient use of water.  

Agriculture and industry use >90% of freshwater and must provide most of the savings.  We can adopt more efficient agricultural practices (flood irrigation is too wasteful), grow less water-intensive crops, reduce meat consumption (beef production requires 10 to 20 times more water than plant-based grains) and minimize food wastage (33 - 40% of food is wasted globally every year).
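To illustrate the scale of the dietary savings, here is a small sketch using illustrative water-footprint figures that I have assumed for this example (the article itself only gives the 10-to-20-times range for beef versus grains):

```python
# Illustrative water footprints in litres per kg (assumed figures, roughly in
# line with published estimates; the article only states a 10-20x ratio).
water_footprint_l_per_kg = {
    "beef": 15000,
    "wheat": 1300,
}

ratio = water_footprint_l_per_kg["beef"] / water_footprint_l_per_kg["wheat"]
saved_per_kg = water_footprint_l_per_kg["beef"] - water_footprint_l_per_kg["wheat"]

print(f"Beef uses ~{ratio:.0f}x the water of wheat")
print(f"Replacing 1 kg of beef with wheat saves ~{saved_per_kg} litres of water")
```

Even with conservative figures, swapping a single kilogram of beef for grain saves more water than a typical household uses in many days.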

On the industrial side, much can be saved through wastewater recycling, water-efficient technologies such as smart sensors and air-based cooling, and reducing water leaks.  Renewable energy generation is much less water-intensive than fossil fuel or nuclear power plants - cutting water requirements by up to 95%.  Industries are also very well placed to collect rainwater - as are most households. In the urban context, leaking water-supply pipes are a big drain on water resources - the water supply network has long been neglected and must be modernised.

In many areas, water is not priced at its true cost - it is undervalued and treated as too cheap.  Water is widely subsidized for agricultural use in most parts of the world, which encourages unnecessary wastage such as flood irrigation.  While water subsidies are often intended to support food security and rural development, it may no longer be feasible to continue providing them.  Growing less water-intensive crops and adopting better irrigation practices will help.


2. Restore Nature-Based Sources:  Almost all freshwater is a gift from the ecosystem, to which we have done serious damage.  It is imperative that we protect and restore ecosystems like wetlands & soil that store and regulate water. Groundwater aquifers are our 'savings' - we need to safeguard them and maintain their levels, withdrawing only what can be replenished on an annual basis.  It is now possible to track groundwater depletion and water quality in real time.
RIVERS: Pollution from agricultural run-off (pesticides, excessive fertilizer use), untreated sewage discharge, industrial waste (chemicals, heavy metals) and urban & household waste (pharmaceuticals, microplastics, tyre-wear particles) has degraded the water quality of most rivers.
Rivers tend to follow a natural course - this has been disrupted by building dams and reservoirs and by straightening channels.  Large-scale deforestation has caused soil erosion, and the resulting sedimentation clogs up rivers. In many cases, populations downstream now receive very little water (the Colorado River often does not even reach the Gulf of California).
It is important to monitor and put strict regulations in place to restore the health of the rivers.  It will not be a simple task and big sacrifices in terms of convenience and monetary costs might be needed but will be vital for preventing further degradation of the ecosystem.

3. Political, Social and Ethical Dimensions:  Managing water bankruptcy will be a long, painful process.  Unfortunately, people in developing countries are often affected most severely by water crises and suffer immensely.  Developed countries have largely escaped water scarcity thus far but are beginning to appreciate the worsening situation (see Appendix 2 on the Colorado River in the USA).  Water bankruptcy is also best dealt with at the global level, to avoid the high probability of future water wars and of pollution exported through shared rivers.
In this context, one needs to protect vulnerable groups like small farms, indigenous people, isolated communities and, of course, people living in drought-prone areas.  Strong political leadership at the global level will be needed to see progress through on this urgent problem - by all accounts it is only going to get worse in the next few decades.  A good reason for pessimism is climate change - I discuss this in the next section.

4. Climate Change is the One to Watch: With all the good intentions to manage freshwater bankruptcy, we might be frustrated if climate change is not sensibly addressed.  I discuss some of the climate change impacts on freshwater availability.  
Extreme Events:  Each degree rise in global temperature increases the water content of the atmosphere by about 7%. A warmer atmosphere is also stormier, so one would expect more intense storms (already evident at 1.5C of warming) rather than gentle rain.  Higher rainfall intensity causes more runoff (water ends up in rivers, causing flooding, and eventually reaches the oceans) instead of percolating into the soil and recharging aquifers.  This is bad news, as climate change is not being managed properly - things could get much worse.
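The 7%-per-degree figure (the Clausius-Clapeyron relation) compounds with warming. Here is a short sketch - treating the 7% as a constant compounding rate is my simplifying assumption:

```python
def extra_moisture_pct(delta_t, rate=0.07):
    """Approximate increase in atmospheric water-vapour capacity (%),
    compounding ~7% per degree C of warming (Clausius-Clapeyron)."""
    return ((1 + rate) ** delta_t - 1) * 100

for dt in (1.0, 1.5, 2.0, 4.0):
    print(f"+{dt}C warming -> ~{extra_moisture_pct(dt):.0f}% more water vapour")
```

At 1.5C of warming the atmosphere can hold roughly 10% more moisture than pre-industrial levels, which is why today's storms already dump noticeably heavier rain.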
Shifting Weather Patterns:  The world is warming, but not uniformly across all regions.  It is reasonable to expect weather patterns to change - areas that received a lot of rainfall might experience droughts, and vice versa.  Traditionally dry areas may get even drier.  All this can severely affect how and where rivers flow, aquifer recharge and flooding.  The consequences for freshwater supply are unpredictable, with uncertain agricultural yields.
Soil is Losing More Water: As the Earth warms, soil loses more water. This not only reduces the store of freshwater (our savings are diminished) but also affects agricultural yields.  Additionally, higher summer temperatures (beyond about 46C) in tropical regions will significantly reduce photosynthetic yields.
Glaciers are Melting Faster: Glaciers hold 69% of the world's freshwater, and their meltwater provides a livelihood for over 2 billion people living downstream on the plains. The slow release of water into rivers during the hottest and driest part of the year is vitally important for agriculture.
With rising temperatures, glaciers are melting faster (and there is also less snowfall in many areas).  Quickly melting glaciers might actually increase river flows for a short period - maybe a couple of decades - but then melt rates will plummet, and the lack of freshwater will be catastrophic in many ways.  Glaciers in the Himalayan range, the Andes and the Alps are all melting faster and share many of the fates described above.
Glacier melt has other undesirable effects: water scarcity will push populations to extract more groundwater, leading to overexploitation and potential desertification; water quality will decline as reduced flows concentrate pollutants and even release previously buried contaminants; and hydroelectric energy generation will be adversely affected.

End Note: This article has addressed issues related to freshwater.  It is quite obvious that the ecosystem does not work in isolation - water, food, climate and energy are all intimately related, and all of them are essential for life.  Humans have been short-sighted not to understand this, and over the past 200 years we have mismanaged and destroyed the gift that nature gave us.  We not only face freshwater bankruptcy; climate bankruptcy is not far behind - several climate tipping points (points of no return) have already been passed.  Food production is fundamentally dependent on water, climate and energy - total food production could be affected in the next few decades.
Unfortunately, the geopolitical situation appears to be heading in a direction that is not conducive to looking at solutions in a calm and co-operative manner.  This does not bode well for future generations - the concept of sustainability has not survived mankind's ego.

Thanks for reading ...


Appendix 1:  The Water (Hydrological) Cycle

Notice that snow/ice on glaciers melts to provide additional surface water (rivers).  This is not shown on the diagram.

Appendix 2: Colorado River - A case study

The Colorado River water crisis provides a stark example of over-exploitation of a natural resource.  The river supplies water to 40 million people in seven states of the Western USA. Overexploitation has reduced its reservoirs to extremely low levels.

 


There are many recent reports that discuss Colorado River water crisis, and I quote from a September 2025 report:

Consumptive water use in the Colorado River Basin continues to outpace natural flow. The dwindling reserve stored in reservoirs that has long sustained this shortfall might soon be exhausted. Immediate steps should be taken to reduce current consumptive uses in the Upper and Lower Basins ... The entire basin is in agreement that we must balance our water use with the natural supply. Despite laudable efforts, we are currently not doing so, at least in part because the hydrology has been unforgiving. Unfortunately, however, this is the hydrology we must plan for, with the knowledge that the next few years could be even worse. While inflows and uses during the next year cannot be predicted with certainty, using the past year as a proxy for the coming year makes for prudent, conservative planning. 

Obviously, developed countries are also not immune to water bankruptcy!

Thursday, 12 February 2026

Large Scale Space Colonization by Humans Is a Delusion; Survival Colonies on Earth and Autonomous Robotic Space Exploration (Robotic AI) are the Sensible & Correct Options

“Even after a nuclear apocalypse, Earth would be paradise compared to Mars.”

Human space colonization is a hotly debated topic, with some unrealistic goals touted by eminent scientists and a few very rich individuals.  Their argument rests on the need for humans to set up large-scale independent colonies (of up to a million people) in space (Mars is the preferred choice) to ensure the survival of sapiens in case a catastrophic event wipes us out on Earth.  This type of thinking is simply delusional because it ignores established scientific facts about human physiology/psychology and, of course, current technological capabilities.  It is also highly damaging/restrictive to the development of sensible alternative projects to ensure the survival of humans on Earth in the face of existential threats. The timescale will be counted in centuries if the colonies are required to be self-supporting without any help from Earth.  With an ever-escalating number of existential threats, the next 100 years are probably going to be the most crucial period for 'survival planning', and this can only be achieved via earth-based facilities.  Establishing a space colony is astronomically expensive - the first human mission to Mars is expected to cost 500 billion dollars, and setting up a large-scale colony could cost up to 1,000 trillion dollars (according to estimates found via a Google search).  One hundred survival colonies on Earth, together accommodating a million people, could be built for a tiny fraction of this cost.  Economically, a large-scale space colonization programme without parallel development of terrestrial colonies is sheer madness. So far it looks like an extravagant vanity project.

Before embarking on a project that costs hundreds of trillions of dollars, it is imperative to analyse its aims and objectives, its feasibility and the alternative solutions.  Unfortunately, this has not been the case here - we still talk of terraforming Mars!  The irony is that humans are actively destroying the Earth (the only planet ideally suited to supporting life) without trying to control their actions, yet want to undertake a 200-day one-way journey to Mars - a planet that is absolutely unsuited for human survival.  Where is the logic in this?  But when it comes to big decisions, we behave as humans normally do - irrationally.

Ten years ago, I discussed space colonization in my feature here, and analysed the situation to conclude that space is a good frontier for space tourism, scientific experiments and maybe, in the long run, exploiting its mineral wealth. Not much has changed since then, except that artificial intelligence (AI) has made great strides and is expected to reach human-level intelligence (AGI) in the very near future.

In this feature, I wish to address two subjects:

1.  Planning for earth-based survival strategies

2.  Exploration of space is best left to artificial intelligence or robotic AI. 


Earth-based Survival Strategies:  One needs to understand what the quote at the beginning is trying to convey - even after a nuclear apocalypse, Earth will be paradise compared to Mars.  The main premise of space colonisation advocates is to ensure that sapiens survive existential threats - no problem with that.  The difficulty is with the proposed solution. Let us first look at the threats in question - threats that would result in the almost complete destruction of the human population on Earth and an extremely widespread collapse of the biosphere as we know it.

Existential threats come in two forms:  

Natural catastrophes, like a supervolcano, a large asteroid or comet strike, or a natural pandemic.  The first two, if big enough, could result in most living creatures dying and would likely also cause havoc with the life-support systems on Earth for a decade or so. While biodiversity could take millions of years to recover, life-support systems could be expected to recover over a few decades.

Anthropogenic catastrophes are disastrous events due to human activity, causing human extinction and the permanent, irreversible destruction of civilization's potential. Nuclear war, synthetic pathogens, advanced AI, and climate change leading to ecological collapse are a few examples where humans have increased the risk of existential collapse by orders of magnitude.  Many of these are potent, immediate threats, unlike natural catastrophes, which visit the Earth only once in a very long time - of the order of millions of years.

A catastrophic event (natural or anthropogenic) will shatter, but still leave largely intact, the basic life-support potential of the Earth.  Not 100% of life will be destroyed, and if proper planning has been put in place then it is quite likely that a small fraction of the human population will survive and recover in due course.  It is also worth noting that any life-support route from Earth to planetary colonies would be completely severed and might not be reinstated for decades.  Unless space colonies are self-sustaining and completely independent of Earth's resources in terms of food, energy, machinery and other life-sustaining essentials, they will perish very quickly.  The timescale for planetary colonies to reach self-sufficiency is counted in centuries, and for the next few hundred years the only way to ensure the survival of the human race is by planning and constructing a large number (maybe one hundred) of self-supporting colonies of 10,000 people each.

Such colonies may be in pre-existing caves, underground or even underwater in the oceans.  They can communicate with each other and provide valuable mutual support.

Unfortunately, we are too heavily occupied with vanity projects - there is no sign of any concrete discussions in this regard.

Interestingly, several projects have been undertaken in the recent past in which researchers tried to live in simulated structures under conditions like those that would be met in colonies on the Moon or Mars.  These missions studied the effects of isolation and confinement on human psychology, physiology and team dynamics. The results have not been good and point to the difficulty of successfully colonizing other planets. Note that the simulations only had a small number of residents and did not include reduced gravity, low atmospheric pressure or harmful cosmic radiation.

End Note:  Over the past decade, many objections have been raised to the idea of space colonization as the only solution to ensure human survival.  The world now faces more existential threats than ever before; it is imperative that world governments come together and develop a coherent strategy for building earth-based colonies that can withstand catastrophic events.  The costs are not significant if the projects are planned and completed over a 10-to-20-year period.

Space exploration is a valuable endeavour and satisfies many of the human traits that made us the most powerful species on Earth.  The fruits of space exploration are many - weather prediction, communications and navigation have benefited enormously.  The future looks bright for space exploration - the only problem is space debris: the large number of small (sometimes not so small) fragments accumulating in low Earth orbit, which have the potential to cause untold damage to human societies as we know them.  This is a topic worth investigating further.

Thanks for reading ...

Monday, 9 February 2026

Global Risk Report & The Doomsday Clock - The World Is Projected to Get Even Riskier Over the Next Decade

 It is a good idea to keep an eye on the health of the world on an annual basis.  It might even be possible to do something about the risks identified.  

The World Economic Forum (WEF) asks more than 1,000 experts for their views and has been publishing its Global Risk Report (GRR) annually since 2006. The GRR is a thoughtful document assessing where the world is now and where it is heading over the next 10 years.  It tries to identify the impacts of different risks on global welfare.

The GRR is an extensive document, over 100 pages long - in this article, my aim is to summarise the report's findings: how the world is today and what to expect at the end of the next 10 years.

The Doomsday Clock (DDC) is maintained by the Science and Security Board (SSB) of the Bulletin of the Atomic Scientists (BAS).  The Bulletin was founded in 1945, and the DDC was set up to assess the danger posed by nuclear weapons.  Currently, the SSB includes globally recognised experts in climate change, nuclear risks & disruptive technologies such as artificial intelligence (AI).  Midnight on the clock represents the catastrophe point at which humans would have rendered the world uninhabitable.  On 27 January 2026 the DDC was set at 23:58:35, merely 85 seconds to midnight - the closest it has ever been to catastrophe.
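The '85 seconds to midnight' follows directly from the clock reading - a trivial calculation, included only to make the clock metaphor concrete:

```python
# Convert the 2026 Doomsday Clock reading (23:58:35) to seconds before midnight.
hours, minutes, seconds = 23, 58, 35
seconds_to_midnight = 24 * 3600 - (hours * 3600 + minutes * 60 + seconds)
print(f"{seconds_to_midnight} seconds to midnight")
```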

The doomsday clock, like a countdown, is intended to reflect the level of continuous danger in which mankind lives in the nuclear age along with new existential threats of climate change, bioterrorism and AI. Its setting changes mostly with the perceived threat of nuclear war (at least until recently, threats from other causes were not considered catastrophic). 

I shall discuss DDC first and then analyse GRR that I consider of greater relevance to the human civilisation. 

Doomsday Clock (DDC):  With all the uncertainties about the future direction of nuclear weapons in the late 1940s, the DDC made total sense.  Hydrogen bombs, much more powerful than the 1945 fission bombs used in Japan, were being introduced into nuclear arsenals, and geopolitical tensions were high.  DDC settings have fluctuated over the years (see slide), mainly reflecting the possibility of a nuclear conflict - the concept of mutually assured destruction (MAD) makes nuclear war less likely, even though the consequences of such a war would end human civilisation as we know it.

In the slide, the vertical axes represent the time - the left axis shows minutes from midnight while the right hand axis shows the actual clock reading.

Slide 1


IMHO, the various treaties to limit the size of nuclear arsenals etc. are academic - there are enough bombs to destroy the world many times over.  There is always a risk that a nuclear war is started unintentionally, through some misunderstanding or even a technical fault.
A more worrying development is the emergence of a political class of self-centred leaders who are guided not by any ideology but by an intense desire to dominate the world and leave a personal legacy.  The geopolitical tensions that such behaviour creates increase the risk of conflict, and this is what the DDC is also highlighting in 2026.

It appears to me that in the medium term of a few decades, the threats posed by climate change, bioterrorism and AI will become as potent as the threat of nuclear holocaust. The new risks of climate change and AI will be like a genie set free, and humans will be helpless to control the damage they can cause - all the signs are that we are not paying attention just now.  Unlike nuclear conflict, risks from climate change and advanced AI are outside human control, and over the long run they would pose a much more serious threat to the survival of human civilisation.

This might be a good time to renormalise the setting of the Doomsday Clock to accommodate the new risks.  Renormalising the DDC would also help move away from a permanent state of extreme threat, which raises the possibility of people simply ignoring the warning.

The Global Risk Report (GRR):  For the 2026 GRR, more than 1,300 global leaders and experts were consulted for their views.  The world is a complex place with an abundance of data (both reliable and fake), and navigating the mountain of information is not an easy task.  Bona fide experts offer our best hope of understanding what is happening around us, and the opinions of a good number of such experts carry reasonable credibility in identifying relevant risk factors. 

The world is also changing rapidly, and projecting the current situation ten years into the future may be a fool's errand. However, long-term forecasting helps identify trends such as technological disruption or climate change, and allows one to develop mitigation strategies to minimise damage.  At least that is the hope - personal biases, misinformation, vested interests etc. can cause uncertainty, paralysis and poor planning.  Climate change is a case in point: even though science and empirical evidence have been calling out for action, not enough is being done to address the risks.  In fact, expert opinion in the GRR projects that 5 of the top 10 risks in 10 years will be environmental. 

Summary of GRR Findings: The five main risk categories (see Slide 2) have subcategories - 33 in total - which the experts ranked in terms of the level of risk they present (now and in ten years' time).  I have presented the top 10 risks in the slides below and refer you to the full report for more details.  

 Slide 2


Slide 3


Slide 4


Slide 5


Discussion:  Comparing slides 2 and 3, one notices that environmental, societal and technological risks dominate the 10-year outlook - in sharp contrast to the current outlook, which is dominated by geopolitical concerns.  This is understandable given the ongoing wars in Ukraine, Gaza and elsewhere.  But some level of hostility is always present, and it is reasonable to expect that localised wars will continue in one place or another over the next 10 years (excluding the threat of nuclear war, which the idea of mutually assured destruction largely precludes).  What is different on the 10-year horizon is the rapidly escalating environmental threat, along with the uncertainty about the development of advanced AI.  Societies are also becoming more and more fragmented. 
Slide 5 summarises the situation very well - the experts clearly feel that the world will be a much riskier place in 10 years time with environmental degradation (slide 4) becoming by far the greatest threat.
One hopes that the leaders of the world will pay attention to this important report and start to work in a co-operative way to control (and reverse) the damage to the environment that is happening.

I feel pessimistic about things changing much over the next decade - for the simple reason that the two fundamental problems of global population size (still increasing, albeit more slowly) and the global growth model based on gross domestic product (GDP) are elephants in the room that nobody wants to talk about.  The world is being driven past many tipping points which, once crossed, are very difficult to reverse.  It may also be getting too late to act - results take time, and we no longer have that luxury.

Interesting times ahead ...

Thanks for reading.

Saturday, 7 February 2026

Serendipity (Part 2) - The Antimatter Particle, the Positron - A Great Example of Serendipitous Discovery

Serendipity is the faculty of making fortunate and unexpected discoveries by accident. Part 1 may be reached here.

How the existence of antimatter was theoretically predicted and experimentally observed makes a great story.  Serendipity played a role at all stages of the discovery.  

Prediction of Antimatter: In 1927, Paul Dirac was trying to include relativistic effects in the non-relativistic quantum theory proposed by Schrodinger and Heisenberg.  Their theory needed ad hoc additions like particle spin and magnetic moments, which were difficult to justify (for an electron to have spin one-half as a classically rotating sphere, its surface would need to move faster than the speed of light!).  

Dirac found that his theory was eminently successful in explaining the origin of particle spin and magnetic moments of the electron, except that there was a problem.  

The Problem & Dirac's Solution: Dirac's theory required a symmetry in its solutions such that a particle (for example an electron) with positive energy must have a twin with negative energy - there was no escaping it. Real electrons have positive energy, and that is fine, but the negative energy states were a problem: positive energy electrons would fall into them, making the system unphysical.

This was an unexpected and unsought-for result of Dirac's theory. Dirac solved the problem by postulating the 'Dirac sea': all the negative energy states are already full, so positive energy electrons have nowhere to fall. Any hole in the negative energy sea may diffuse around and behave like a positive electron (a positron).  If an electron falls into a hole, the hole is filled and the energy difference is released as a packet of energy - the electron and positron no longer exist (annihilation of matter).  Later developments, in the form of Quantum Electrodynamics (QED), have done away with the need for the negative energy sea.  
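The twin solutions come straight from the relativistic energy-momentum relation, which Dirac's equation inherits; taking the square root gives two branches:

```latex
E = \pm\sqrt{p^{2}c^{2} + m^{2}c^{4}}
```

For an electron at rest (p = 0) these are E = +mc² and E = -mc²; the negative branch is the 'sea' that Dirac had to account for.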

Dirac showed sagacity in dealing with the unexpected result and predicted the existence of antimatter (the positron and other antiparticles).  Dirac shared the 1933 Nobel Prize in Physics with Schrodinger 'for the discovery of new productive forms of atomic theory'.  In many surveys, Paul Dirac is ranked as the fourth most important scientist behind Albert Einstein, Isaac Newton and James Clerk Maxwell. I have had the pleasure of giving talks on the three luminaries and they are available to read here by clicking on their names.  In the Appendix, I give a brief introduction to Paul Dirac - I am sure you have not met a person of such unusual character before. 

Experimental Observation of the Positron: Carl Anderson is credited with the observation of the positron in 1932, and was awarded the Nobel Prize in 1936 for this discovery.  The story is really quite fascinating in that other scientists, even before Dirac's prediction, had observed positrons but failed to see the significance of their measurements.  They either ignored them or offered implausible explanations. As Louis Pasteur said, 'Chance favours only the prepared mind'.

Before I discuss the observation of the positron, let me digress briefly to describe how charged particles are detected experimentally.  The detector, called a cloud chamber, was invented by CTR Wilson (Nobel Prize 1927); it makes the path of charged particles passing through the chamber visible, and the path may be photographed for analysis.  When the chamber is placed in a magnetic field, the path bends into an arc, which allows one to deduce the particle's charge and momentum (and, with further information, its mass and velocity).  
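The deduction from a curved track rests on one simple relation: a particle of charge q and momentum p moving perpendicular to a magnetic field B is bent into a circle of radius

```latex
r = \frac{p}{qB}
```

so measuring the radius (with B known) gives the momentum, and the sense of the curvature gives the sign of the charge.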

Positrons are created when high energy cosmic ray particles collide with nuclei in a medium.

Missed Opportunities: Several researchers missed the chance of detecting positrons:

1. In 1928, Dmitri Skobeltsyn had observed tracks in a cloud chamber that looked like electron tracks but were curved in the opposite direction in the magnetic field.  He chose to ignore them.

2.  In 1930, Chung-Yao Chao observed positron tracks but did not attribute them to a positively charged particle.  Chao was a fellow student of Carl Anderson, and Anderson later acknowledged that his work was inspired by Chao's.

3. In April 1931, a few months before Anderson's discovery, Frederic and Irene Joliot-Curie missed the opportunity to discover the positron in an experiment using a Wilson cloud chamber.  The Joliot-Curies did not use cosmic rays but were bombarding aluminium and boron with alpha particles - they observed electron tracks that curved in the wrong direction, indicating a positively charged electron (a positron).  However, they interpreted the tracks as being due to electrons that had been scattered back into the equipment! 

The story of Irene and Frederic Joliot-Curie is a fascinating one and may be read here. The Joliot-Curies had the world's most powerful alpha source and were very well placed to make novel discoveries; they also appear to have had a habit of missing important observations - they could have won three extra Nobel Prizes.  

Besides the positron, the Joliot-Curies misinterpreted their results and missed the discovery of the neutron, for which James Chadwick was awarded the Nobel Prize in 1935. They also overlooked the significance of finding lanthanum after bombarding uranium with neutrons, and thus missed out on the observation of nuclear fission - Otto Hahn and Fritz Strassmann repeated the experiment in 1938, and Otto Hahn was awarded the 1944 Nobel Prize for it.

The Joliot-Curies did win the 1935 Nobel Prize for discovering induced radioactivity, showing that radioactive elements may be produced in the laboratory (the basis for positron emission tomography (PET) scanners in medical diagnosis).

4. In late 1932, Blackett and Occhialini confirmed the existence of the positron using a cloud chamber triggered by Geiger counters - a much more efficient setup.  

Their experiment also observed the production of an electron-positron pair from energetic cosmic gamma rays, confirming the conversion of energy into matter as predicted by Einstein's famous equation E = mc².
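The energy bookkeeping follows directly from Einstein's relation: to create an electron-positron pair, the gamma ray must supply at least the rest energy of both particles,

```latex
E_{\gamma} \;\ge\; 2m_{e}c^{2} \approx 1.022\ \text{MeV},
```

which is why only energetic gamma rays, and not visible light, can materialise into pairs.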

Blackett and Occhialini published their work a few months after Carl Anderson and missed out on being the first to report the discovery of the positron.  Blackett did win the 1948 Nobel Prize for his work on cloud chambers and cosmic radiation.

Anderson's Discovery of the Positron: 

This is a picture of one of the first positron tracks observed by Anderson in 1932. It was taken in a cloud chamber in the presence of a magnetic field of 2.4 Tesla pointing into the paper, so the path of a positively charged particle travelling up from the bottom of the picture is curved to the left.  The cloud chamber (17 x 17 x 3 cm) contained a gas supersaturated with water vapour. Along the path of a charged particle (such as a positron), the water vapour condenses into droplets - these droplets mark out the path of the particle. 
The band across the middle is a lead plate, 6 mm thick, which absorbs some of the energy of the particle and slows it down. The radius of curvature of the track above the plate is smaller than that below. This means that the particle is travelling more slowly (23 MeV) above the plate than below it (63 MeV), and hence it must be travelling upwards. From the direction in which the path curves, one can deduce that the particle is positively charged. That it is a positron and not a proton can be deduced from the long range of the upper track - a proton of that curvature would have come to rest in a much shorter distance (~5 mm). 
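As a rough numerical check on the quoted figures (a sketch only, not Anderson's actual analysis; the helper function `radius_m` is my own), the bending radius r = p/qB can be evaluated in the convenient form r[m] = pc[GeV]/(0.3 B[T]) for a singly charged particle:

```python
import math

M_E = 0.000511  # electron rest energy in GeV

def radius_m(kinetic_gev, b_tesla):
    """Bending radius r = p/(qB) for a singly charged particle,
    using the shortcut r[m] = pc[GeV] / (0.3 * B[T])."""
    total = kinetic_gev + M_E             # total energy in GeV
    pc = math.sqrt(total**2 - M_E**2)     # relativistic momentum in GeV/c
    return pc / (0.3 * b_tesla)

r_below = radius_m(0.063, 2.4)  # 63 MeV track below the lead plate
r_above = radius_m(0.023, 2.4)  # 23 MeV track above the plate
print(round(r_below * 100, 1), "cm,", round(r_above * 100, 1), "cm")
```

The faster track below the plate bends on a radius of roughly 9 cm and the slower one above on roughly 3 cm - curvatures easily measurable in a 17 cm chamber, and different enough to show which way the particle was travelling.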
Carl Anderson won the 1936 Nobel Prize for Physics for this discovery.  

In 1937, Carl Anderson and his student Seth Neddermeyer observed a totally unexpected track in the cloud chamber - it was made by a particle with a mass 207 times that of the electron and a negative charge exactly equal to the electron's. This is yet another example of serendipitous discovery - the existence of the muon was absolutely astonishing: it was neither predicted by any theory, nor was anybody looking for it! When informed about the muon, I. I. Rabi quipped, 'Who ordered that?' - the muon had no place in the physics theories of the time. Despite being an unexpected addition, the muon served as the first clue to the existence of a new family of elementary particles, helping physicists build the Standard Model.
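The factor of 207 is easy to verify against the modern value of the muon mass (a back-of-envelope check using standard values, not figures from Anderson's paper):

```python
ELECTRON_MEV = 0.511   # electron rest energy, MeV
MUON_MEV = 105.66      # muon rest energy, MeV (modern value)

ratio = MUON_MEV / ELECTRON_MEV  # mass ratio quoted in the text
print(round(ratio))
```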

Discussion: The first half of the 20th century was a golden era for physics (and also chemistry), with major theoretical breakthroughs in the form of the theory of relativity and quantum mechanics. As is common when a paradigm shift happens in a field of study, a flurry of new ideas and empirical evidence came in quick succession, and many outstanding problems found resolution.  It is also evident that many discoveries were accidental, and serendipity played a significant role.  
The period 1900 to 1950 was such a time - one could say there was an awakening in the physical sciences.  
It may be apparent from the presentation above that important new discoveries came rapidly, and such was the merit of the work that many Nobel Prizes were awarded with only 2 or 3 years between discovery and award.  Additionally, many of the researchers involved were very young - some had only just finished their PhDs when they did the Nobel Prize winning work.  This is highly unusual: on average, such work is done at age 40, and the Nobel award comes about 20 years later (at age 60). I have analysed this situation and refer you to my feature here.

Essentially, physicists born between 1880 and 1900 were fortunate to be completing their postgraduate degrees between 1910 and 1930, when many opportunities for making great discoveries became available.  Historically, this happens when there is a paradigm shift - one recent example is the arrival of personal computers in the mid-1970s.  I reproduce the data below.

Most of these innovators are multi-billionaires and were at the right place at the right time.  Notice that most were born between 1947 and 1965 - a short span of 18 years. 
We find a similar trend in physics around 1925 to 1940, when great discoveries were made by those born between the 1880s and 1900. The slide highlights the situation: red shows the age distribution of the laureates at the time of award for all categories; blue shows the same distribution (numbers multiplied by 20 for clarity) for Nobel Prizes in physics between 1921 and 1940.
Considering that the age at PhD is about 26 years, the data confirm the points highlighted in the discussion above. For a more detailed discussion, see (1, 2).

APPENDIX - Paul Dirac (b. 1902, Bristol, England; d. 1984, Tallahassee, Florida)


Niels Bohr called Dirac 'a complete logical genius' and also the 'strangest man' who had ever visited his institute. 
Dirac was known for his precise and taciturn nature.  His colleagues defined a unit called the 'dirac' - one word per hour!  According to Dirac, his father, who was originally from Switzerland, wanted his children to speak only French at home so that they might learn the language; Dirac found that difficult and decided to remain silent.  
In 1937, Dirac married Margit Wigner, sister of famous physicist Eugene Wigner.  He would introduce Margit to visitors as "Allow me to present Wigner's sister, who is now my wife."
Dirac met Richard Feynman at a conference - after a period of silence, Dirac asked Feynman, 'I have an equation. Do you have one too?'
According to his biographer, Dirac suffered agonies if forced into socializing or small talk.
Dirac's views on poetry: 'The aim of science is to make difficult things understandable in a simpler way; the aim of poetry is to state simple things in an incomprehensible way'

Dirac was absolutely blunt in his comments.  When Niels Bohr was finding it difficult to finish a sentence in a paper, Dirac told him, 'I was taught at school never to start a sentence without knowing the end of it'.  

Undoubtedly, Dirac was regarded as a 'strange' genius by his contemporaries.  
Einstein remarked about one of Dirac's papers, 'I am toiling over Dirac.  This balancing on the dizzying path between genius and madness is awful'.  On another occasion: 'I don't understand the details of Dirac at all'.  

Dirac's contributions to science were many - he remained active until his death in 1984 - and I refer you to his biography on Wikipedia for details.

Thanks for reading ...


Wednesday, 4 February 2026

Good Science, Bad Science, PseudoScience, Dogma, Common Sense, Religion and the Laws of Nature

A defining characteristic of Homo sapiens is the urge to make sense of things around them - the 'how' and 'why' of what is happening.  Wisdom gained from such curiosity-driven enquiry - not equally possessed by other species - has helped us become the most powerful species on Earth. In two previous features (1, 2), I discussed how our understanding of the laws of nature progressively improved over time - a process that continues unabated in the present era.  Help from emerging technologies (microscopes, telescopes, digital devices etc.) has greatly increased the rate at which new insights into how nature works are being acquired.  

Of course, the path to progress is never a straight one - it is full of detours, blind alleys and occasional blunders, due to inadequate means of observation, lack of direction and, not to be underestimated, the less than perfect human psyche. 

Generally, our understanding of the laws of nature has improved with time - by building on previous wisdom and by further exploration of remaining unexplained observations (the accumulation of more empirical evidence). Existing laws are replaced by new ones that provide a more satisfactory explanation of what we observe.  This is the scientific method, and it has placed our understanding of the universe on a much firmer footing.  The work of science is never finished - older theories will be modified or even discarded in the light of new evidence; nothing is 100%!

Attempts by our ancestors to understand and make sense of nature were hampered by a non-existent knowledge base and the absence of any framework to guide them.  It is understandable that early theories appear simple-minded and absolutely inadequate to us in the 21st century.  Nature is extremely complex and does not lend itself to simple interpretations.  Progress has been slow, and only about 400 years ago did we arrive at a formal protocol for doing science - the scientific method - with its emphasis on honesty, impartiality, and meticulous planning and analysis. No matter how well a theory appears to work today, in future it may be modified or completely replaced by a better one.    

How Human Psyche Influences Scientific Research:  Ideally, scientific investigations must be carried out in an objective manner as prescribed by the scientific method. In real life, human traits like cognitive biases, emotions, social and cultural dynamics profoundly affect planning, execution and interpretation of the scientific investigation.  It may be useful to look into this aspect in a bit more detail:

We pay more attention to data that support our existing ideas and overlook contradictory evidence (confirmation/expectation bias).  Information from recent or memorable studies may be more appealing and readily accepted, while other, equally important information is ignored (availability bias). We may be unduly influenced by the thinking of established authorities in the field (authority bias).  Additionally, one may suppress one's own dissenting opinions out of a desire for harmony and consensus with others (groupthink). 

To curb these cognitive biases, one needs to follow transparent, open practices.  Reproducibility of results is an important check, and hardly any empirical evidence is now accepted as valid without reasonable corroboration.

This brings us to explain what Good Science is.

 Good Science:  The purpose of science is to study the natural world and build a proper understanding of its workings.  Good science studies the natural world using observations/measurements that are impartial, verifiable, free from personal bias, reproducible and curiosity-driven.  Theories and hypotheses must not only explain the full body of empirical evidence but also make new predictions that are testable in a way that could potentially prove them false - a good theory is reliable but must remain open to further scrutiny.

An easy-to-read reference of some important landmark investigations that represent good science may be worth looking at.  A brief introduction to one such experiment follows:

The Rutherford Alpha Scattering Experiment:  Alpha particles were fired at a thin gold foil.  While most alphas passed straight through, a small fraction was unexpectedly deflected backwards at large angles.  This could only be explained if the positive charge and mass of gold atoms were concentrated in a tiny space (the nucleus) rather than distributed over the whole volume of the atom. The results replaced the plum-pudding model.
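One can see why back-scattering forced this conclusion with a distance-of-closest-approach estimate (a sketch; the 5 MeV alpha energy is an assumed value, typical of radioactive sources of the era). For a head-on collision, the alpha's kinetic energy is entirely converted into Coulomb potential energy at the turning point:

```python
KE2 = 1.44        # Coulomb constant times e^2, in MeV*fm
Z_ALPHA, Z_GOLD = 2, 79
E_ALPHA = 5.0     # assumed alpha kinetic energy in MeV

# Head-on collision: E = Z1*Z2*k*e^2 / d  =>  d = Z1*Z2*k*e^2 / E
d_closest = Z_ALPHA * Z_GOLD * KE2 / E_ALPHA   # in femtometres
print(round(d_closest, 1), "fm")
```

The alpha gets no closer than about 45 fm, so the scattering centre must be smaller than that - thousands of times smaller than the atom itself (roughly 100,000 fm across).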

Bad Science: Research that, while intending to be scientific, is flawed in its design, execution or analysis.  Results from such studies are often incorrect, unreproducible and/or misleading. They have the potential to do much harm to the progress of the scientific endeavour; peer review of research before publication is a powerful way of preventing the wider circulation of such results. An example shows how bad science can happen, and how the checks and protocols of the scientific method can prevent the misinformation and chaos that bad science is capable of:

Faster-Than-Light Neutrinos: Einstein's special theory of relativity says that nothing can travel faster than light - a core principle of physics.  In 2011, researchers sent a beam of neutrinos 730 km from CERN to the OPERA detector, measured their time of flight, and found that the neutrinos appeared to travel slightly faster than light.  After much scrutiny, and in view of widespread scepticism about the result, the findings were only reported on arXiv.org - a non-peer-reviewed open-access archive. Subsequent investigation revealed that two pieces of equipment were faulty and the timing measurements were incorrect.  The neutrinos did not actually travel faster than light.
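The scale of the claimed anomaly is worth a quick calculation (a sketch; the roughly 60 ns early arrival is the figure OPERA reported):

```python
BASELINE_M = 730_000        # CERN to Gran Sasso baseline, metres (approx.)
C = 299_792_458             # speed of light, m/s
EARLY_NS = 60               # reported early arrival, nanoseconds

light_time_s = BASELINE_M / C                  # time for light over the baseline
fraction = (EARLY_NS * 1e-9) / light_time_s    # anomaly as a fraction of that time
print(f"{light_time_s * 1e3:.2f} ms; {fraction:.1e}")
```

The neutrinos appeared to beat light by only a few parts in 100,000 of the 2.4 ms flight time - an excess small enough that faults in the timing chain could account for it.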

Pseudoscience (aka Fake Science):  Something that looks like science but is false, misleading or unproven. It certainly does not follow the scientific method.  Pseudoscience suffers from lack of reproducibility, ignores contradictory evidence (cherry-picking), relies on cognitive biases, and is not peer-reviewed.  

I refer you to my feature on pseudoscience for a more detailed description.  Misinformation via pseudoscience is becoming a major problem in today's world, driven by profit-seeking big business (the tobacco and food industries) and by interest groups with particular ideologies (climate-change deniers).  Social media gives such pseudoscientists free, unchecked publicity: their pronouncements reach us without going through peer review or meeting the criteria of good science required by the scientific method.  

The natural world operates on a complex set of rules - pseudoscience flourishes by providing simple explanations that many find easier to accept. Health-related pseudoscience offers false hope or easy solutions to complex problems, sometimes leading to dangerous (but avoidable) health consequences. 

Wiki has a long list of pseudoscience examples and is worth a look.  I list a few: Astrology, Modern Flat-Earth Beliefs, Climate-change denial, Phrenology, Palmistry, and many more.

Dogma:  

" I would rather have a question I cannot answer than an answer I cannot question"                                           ... Richard Feynman

Dogmas are principles that are accepted, without question, as undeniably true and impossible to dispute, contradict or doubt. They are the antithesis of good science, as dogmas resist being tested against new evidence. They are very difficult to change.

Generally, religion is the first system that comes to mind when we think about dogma (e.g. the Trinity or Mary's Immaculate Conception in Christianity; similar dogmas exist in Islam), but dogma encompasses rigidly held ideas that cannot be questioned in any system - be it politics (totalitarianism, Marxism, national sovereignty) or science (the geocentric model of the universe, several of Aristotle's theories, the luminiferous aether, bloodletting in medicine). The situation in science has improved greatly since the adoption of the scientific method of inquiry - it is possible to challenge existing rules/theories and replace them with new ones if they are found insufficient.  This has been very effective.

By suppressing questioning, dogmatism excludes the possibility of acquiring better understanding and of challenging mistakes.  This does not serve us well.   

You might like:  10 signs of dogmatism 

Why does dogma arise?  To answer this, we need to understand the world humans live in - it is a complex world, and to deal with it we have evolved many cognitive short-cuts (biases) to conserve energy, time and effort.  Humans need certainty (confirmation bias); we need to live in societies, feel in control and still have an individual identity.  We look for security - a rigid dogmatic regime provides this by simplifying things.  

The downside of dogma is that it is easily exploited by those in authority and used as a means of control. This is very well expressed in the following slide (extract from Big Ideas).

It is not easy to change dogmas - you have full confidence in the validity of what you currently believe.  You may have arrived there through brainwashing, fear or coercion, but that is where you are. 

To combat dogmatism: challenge your own beliefs, be open-minded, cultivate cognitive flexibility, seek new friends, practise mindfulness.  It is difficult!  


Common Sense & Religion:  I refer to the two features that I have published for this topic  - Click here for Part 1 and here for Part 2. 


End Note:  The various sections of this article highlight the contradictions of our present-day societies.  Good science has opened new vistas, with wonderful technological achievements that have 'improved' our lives in a big way.  We also have a much better understanding of how the world works, with new research addressing many of the remaining unanswered questions.  One would think that in such an environment, irrational and retrograde thinking would subside.  The evidence available just now does not support this view - the world is full of pseudoscientific and dogmatic thinking.  Why is it so?  Maybe the answer lies in the way sapiens have evolved - our executive-function brain (the prefrontal cortex) and the limbic brain always seem to be at loggerheads.  The limbic brain controls our fast-acting, emotional and survival-driven tendencies - with finite energy resources and time constraints, the limbic brain is still the main actor in our day-to-day functioning.  As somebody aptly said:

"Physically and cognitively, we are hunter-gatherers in Hugo Boss suits"


Thanks for reading ...