Covid 19: Has Science Run Out of Ideas?

by Suranya Aiyar

September 2020

Contemporary epidemiologists portray their field as having emerged from a fog of mathematical illiteracy politely termed “descriptive” and “qualitative”, to its present “mathematical” and “quantitative” form (1, 2). Credit for this claimed shift to a mathematical approach is given to the British physician and amateur mathematician, Sir Ronald Ross, who won the Nobel Prize in 1902 for demonstrating how malaria spreads through mosquitoes. Ross carried out his mosquito research while serving in British India.

But epidemiology has always been mathematical, studying the population-wide spread of disease in a numerical and statistical way. For a century before Ross, people like Daniel Bernoulli and William Farr had been applying mathematics to study the spread of infectious disease (18). In Ross’s time, all his contemporaries in epidemiology were expressing their ideas with statistical and mathematical analysis: there was Major Greenwood, a senior government epidemiologist, and John Brownlee, a mathematician and physician. Ross, Greenwood and Brownlee all consulted with and applied the work of Karl Pearson, who is considered a founder of modern mathematical statistics (9).

The problem that Ross and other epidemiologists kept coming up against was how to explain the seemingly random appearance and disappearance of disease outbreaks in populations. Everyone agreed that epidemics rose and fell in a wave pattern that could be more or less accurately expressed in mathematical equations; the question was what principle underlay this observed pattern.

Ross, and the line of thinking that followed him, argued that the rise and fall of epidemics could be entirely explained by the relative numbers of those infected and recovered in a population at any given time. This is what came to be called the “SIR Model”, where “S” refers to those susceptible, “I” to those infected and “R” to those recovered. The other view, articulated by Brownlee, was that while mathematical probability distributions could explain how an infection, once introduced into a fixed population, would be “distributed” or spread, they did not explain the variation over time in the number of cases during an epidemic, or the causes of its decay.
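
For reference, the modern textbook form of this idea writes the SIR dynamics as three coupled differential equations. This is the later Kermack-McKendrick formulation rather than Ross’s own notation: β is the transmission rate, γ the recovery rate, and N = S + I + R the population size.

```latex
\frac{dS}{dt} = -\beta\,\frac{SI}{N}, \qquad
\frac{dI}{dt} = \beta\,\frac{SI}{N} - \gamma I, \qquad
\frac{dR}{dt} = \gamma I
```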

Brownlee said that if you assume that epidemics die out owing to “the exhaustion of susceptible persons among the population” then, mathematically, you should get an epidemic curve that falls faster than it rises (3). But, he argued, the observed curves of epidemics do not have this shape. Instead, they show either the symmetric curve (the “bell curve”) of William Farr, in which outbreaks rise and fall at the same rate, or an asymmetric curve, as developed by Pearson, in which the rate of rise is faster than the rate of fall. This showed, he said, that epidemics die out even while susceptibles remain in the population, a phenomenon that the curves themselves do not explain. Therefore, according to Brownlee, the explanation must lie elsewhere than in the mathematics of these curves. He went on to speculate that the answer might lie in a decline in the infectivity of the pathogen, or in a change in the susceptibility of the population for some cause not yet known.
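
Brownlee’s point about curve shapes can be checked numerically. The sketch below is mine, using the textbook SIR equations given above with illustrative parameter values, not data from any real epidemic; it integrates the model and compares how long the epidemic takes to climb from half-peak to peak with how long it takes to fall back again:

```python
# Minimal SIR simulation (Euler steps) to inspect the shape of the
# epidemic curve that Brownlee was arguing about. Parameter values
# are illustrative assumptions, not estimates for any real disease.
def sir_curve(beta=0.3, gamma=0.1, n=1_000_000, i0=10, days=365, dt=0.1):
    s, i, r = n - i0, float(i0), 0.0
    daily = []
    for step in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # new infections in this time step
        new_rec = gamma * i * dt          # new recoveries in this time step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        if step % int(1 / dt) == 0:       # record the infected count once per "day"
            daily.append(i)
    return daily

curve = sir_curve()
peak = curve.index(max(curve))
half = max(curve) / 2
rise = peak - next(d for d, v in enumerate(curve) if v >= half)
fall = next(d for d, v in enumerate(curve[peak:]) if v <= half)
print(f"peak on day {peak}: {rise} days from half-peak up, {fall} days back down")
```

Notably, a run like this ends with susceptibles still remaining in the population, so even the textbook model concedes part of Brownlee’s observation; what it cannot settle is whether its assumptions are the right explanation for it.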

In response some years later, Ross claimed that he had developed his equations further so that they expressed an epidemic curve that was roughly symmetrical, hence allowing for the fall of an epidemic even when many susceptibles remained in the population (4). In these equations, Ross treats the epidemic curve as a function over time of population dynamics (births, deaths, immigration and emigration) and the proportions of those infected and recovered. From these equations Ross derives a constant, “c”, which denotes the number of persons whom each infected individual infects or reinfects per unit of time. This constant is an early ancestor of the quantity “R”, the so-called Reproduction Number, with which we are all so familiar in epidemiology today.
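
The family connection between Ross’s “c” and today’s R can be put in one line. If each infective infects c persons per unit of time and remains infectious for an average period of 1/r time units, then, in modern notation (a reconstruction for illustration, not Ross’s own formula):

```latex
R_0 \;=\; c \cdot \frac{1}{r} \;=\; \frac{c}{r}
```

That is, the total number of persons infected by one case over the whole course of their infectiousness.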

But, and this is where Ross’s approach differs so much from that of today’s epidemiologists, the process of reasoning did not stop at these equations. He says that the equations present a “tentative” theory based on “probable assumptions” that would need to be tested against observations in actual epidemics. In his detailed series of papers on “a priori Pathometry” (his term for his theory of epidemics), Ross describes his process as consisting of three steps: first, the formulation of “a priori” assumptions, or “knowledge of the causes”, of the rise and fall of epidemics; next, the construction of differential equations “on that supposition” and the setting out of the logical consequences of their application; and finally, the testing of the results so obtained “by comparing them with observed statistics” (5).

Ross did not see his “a priori” work as replacing the “a posteriori” work of comparison with statistics from actual outbreaks in discovering the laws that underlie epidemics. Rather, he saw the two as complementing one another (6). This is in contrast to the approach of epidemiologists today, discussed elsewhere, who use data to “fit” their models by adjusting the quantities assigned to variables or constants in them, and not to test the models’ assumptions themselves (7).

It is true that Ross accused some of his detractors of being unmathematical, but contemporary epidemiologists have taken these remarks out of context and used them to make unwarranted claims that his epidemiology was more mathematical than that of others such as, say, Brownlee, who, unlike Ross, was a trained mathematician.

Ross’s accusations of mathematical denial arose out of a pitched battle between him and British colonial officers in India over measures to combat malaria. Ross repeatedly accused his serving compatriots in India of not doing enough for mosquito eradication, which led to a hilarious exchange of trenchant letters-to-the-editor between them in the pages of such august publications as the British Medical Journal and the Indian Medical Gazette (6, 8).

Ross’s comments in the course of this dispute have had a huge impact on the way contemporary epidemiologists see themselves as the sole arbiters of the mathematics of epidemics. The story told by contemporary epidemiologists is that Ross faced resistance because the logic of his mathematical analysis was not understood, owing to a lack of mathematical training or aptitude for “quantitative” thinking inside and outside the epidemiological community (1, 2). But nothing could be further from the truth. Not only was this untrue in general (I have shown above that epidemiology was always mathematical), it was also untrue on the specific issue of Ross’s prescriptions for mosquito eradication.

In the early 1900s, the deployment of “mosquito brigades”, the draining of canals, the filling of marshlands and other mosquito control measures were carried out in British colonies across the world, following the advice of Ross. There was no opposition to Ross’s theoretical ideas about mosquito control. The controversy was over the feasibility and effectiveness of his proposed measures for mosquito eradication in India; a controversy that is strikingly reminiscent of the debates of today over lockdown and other containment measures for Covid.

Ross was particularly enraged by the report of the Mian Mir Commission, published from India in late 1909, which had assessed the mosquito eradication work carried out in various places there using Ross’s methods since the early 1900s, and had concluded that it had not been able to control malaria (10). The Commission came to this conclusion not on grounds of scientific dispute with Ross, but on the practicality and expense of his measures, given the conditions in India. It was Ross who, unable or unwilling to see the practical argument being made, chose to interpret this as a “thesis” that mosquito reduction was useless, and as his detractors failing to see that “Epidemiology is in fact a mathematical subject” (29, 6).

Ross’s papers on malaria present the reader with a neat picture, his mathematical symbols, equations and arguments following precisely one after the other to their logical conclusion. But as the reader peruses the Mian Mir Report, this elegant and well-ordered landscape erupts with all the hurly-burly of real life, at least of life in India, with scenes and impediments that are only too familiar, understandable and, once the rose-tinted glasses of the “scientific temper” are lifted, blindingly obvious to any Indian, even 110 years later!

Follow me, dear reader, through the pages of the Mian Mir Report. Here you have endemically clogged open drains in a cantonment that was built years ago on flat land; there you have buffaloes in the Sadar Bazar that make pools as they wallow in the waters of the canal; yonder are rain-fed tanks, that villagers want to keep, formed in pits from digging for earth to build huts in the flat lands so characteristic of the Northern plains of India.

The collision of reality with mathematical modelling repeats over and over: an unexpected flood one monsoon that made nothing of all the cantonment’s diligent exertions in pouring kerosene on drains and cleaning up canals; natives resentful of nosy sanitation inspectors; rumours of the British hukkumat creating a malaria scare to drive up quinine sales (which were in a slump at the time after having been monopolised by the Dutch); and rice fields that needed stagnant water (10-12).

After Ross, epidemiology remained a narrow specialist interest for many years until the 1950s when the newly constituted World Health Organisation (WHO) adopted the methods of Ross as developed by George Macdonald to combat malaria. Like Ross, Macdonald had served as a medical officer in British India where he had developed an interest in malaria. By the 1950s he had become director of the famous disease institute in London that bears Ross’s name.

Thus began a defining partnership between the field of epidemiology and the WHO that endures till today. The first step in this alliance was the WHO’s Global Malaria Eradication Programme (GMEP). It was the Americans who had pushed for the GMEP, after having used the chemical DDT to eradicate malaria in the USA. Now they were concerned about malaria returning to their shores from the Third World (13, 19). Macdonald wrote a paper for the WHO claiming that his mathematical modelling proved that increasing mosquito mortality by targeting adult mosquitoes would be more effective against malaria than older methods, such as controlling breeding through anti-larval measures. The WHO used this as the basis on which to intervene around the world with DDT (20).

Since the GMEP, this “eliminate and eradicate” approach has been the hallmark of the WHO’s thinking on communicable disease. This has also been the approach of Western epidemiologists whose main employer has been the WHO or national disease centres following WHO-prescribed protocols. The eliminate, eradicate and contain approach has also widely influenced public health thinking as it is the WHO that has defined the terms in this field.

The one exception to the WHO’s containment-focussed approach to epidemics has been AIDS. Western AIDS activists successfully campaigned to keep disease policing at bay with the focus, at least in the West, on treatment, i.e., pharmaceutical intervention, rather than “non-pharmaceutical” preventive measures of social and economic repression. Their efforts bore fruit with the discovery of highly effective anti-virals for AIDS. UNAIDS issued a powerful paper early in the Covid pandemic pleading for sensitivity to the stigma and oppression attached to the WHO’s containment-focussed approach (14).

The result of the WHO’s espousal, from the start of its existence, of the Ross-Macdonald line of epidemiological thinking, together with its influence in this field, has been that epidemiology lost sight of the questions raised by epidemiologists such as Brownlee, and also Greenwood, whose enthusiasm for the statistical approach is said to have been tempered by experience, particularly after the Spanish Flu pandemic (9).

As the arithmetic of epidemiological models got more and more complex, computers became an indispensable part of epidemiological analysis. The first ever use of computer-modelling to make epidemiological calculations was by the WHO in 1968 (15). Given this history, it is perhaps not surprising that the WHO should find an ally in Bill Gates, coming as he does from the tech industry. Bill Gates’s philanthropic foundation, the Bill & Melinda Gates Foundation, is the WHO’s biggest non-governmental funder. During the Covid saga, the public has witnessed Bill Gates’s influence and interest in pandemic research. He has also been the most vigorous non-official advocate of the WHO’s strategy for Covid.

The GMEP was a disaster. In a riveting analysis of why it failed, ex-WHO official, Jose Najera, and others describe how the WHO decided to use the GMEP as an opportunity to free malaria control “from the frustrations of bureaucracy by prescribing autonomous organisations capable of achieving the precise execution of interventions” (13). The WHO had decided that “the wide experience and knowledge of the old malariologists was superfluous and even counterproductive particularly if they persisted in modifying the eradication strategy locally. Therefore, eradication campaigns were entrusted to new, preferably young “malariologists” trained in ‘Malaria Eradication Training Centres’ established by the WHO in several countries.” International funds began to go only to countries that adopted the goals and methods of the WHO.

These are tactics that the WHO and other international organisations use till today, arising out of a culture of holding their vision as being above the concerns and compulsions of national governments and, more importantly, of ordinary people. The enormous clout of a figure like Bill Gates in the WHO, with no official responsibility or accountability, makes the situation all the more uncomfortable.

In its overconfidence about the feasibility of malaria eradication, the WHO refused to consider a less ambitious programme of incremental control, or the insights of dissenting experts on the local obstacles to implementing the GMEP. Najera et al say that “malaria eradication acquired the characteristics of an ideology and control was demonised”. UNICEF chimed in on the side of the WHO to say that control was a “primitive technique”, and expressed confidence, laughable in retrospect, that malaria would be eradicated in a matter of years.

All this is reminiscent of the WHO’s dogged insistence early in the Covid pandemic that mitigation, i.e., measures to contain the virus within clusters where outbreaks occur, as opposed to society-wide containment measures, was irresponsible, and that nothing less than containment would do (16).

A more recent example of the failure of the WHO’s disease containment approach is Ebola in West Africa. All the measures of disease containment - contact tracing, hospital isolation of patients, mandatory quarantine, closing of borders and retardation of economic activity - which the WHO believes are routine and beneficial, cause immense suffering and loss (17). They also appear to have little effect. Since 1976, there have been repeated Ebola outbreaks in Africa under the watch of the WHO, the major ones successively bigger and longer-lasting, and yet the WHO has applied the same containment approach each time, without questioning whether it may be ineffective or, worse, contributing to the successive rise in epidemic size.

The problem with the WHO’s approach is not just practical, social and ethical, as described above, but also scientific. To understand the problem in the science, we need to follow the path taken by epidemiology since the early 1900s. Following Ross, a series of epidemiologists took up the challenge of divining the internal laws of the rise and fall of epidemics, using his hypothesis that they are a function of the numbers infected and susceptible.

Sub-variables for age stratification, latency and the effect of seasonal variation were added to the model (18-20). New sets of ratios between existing variables were included. The concept of “homogeneous mixing” was replaced with that of “heterogeneous mixing”. “Homogeneous mixing” had assumed that each person in a population had an equal chance of meeting any other person in it. This was found not to accurately represent contact patterns in the real world, where people interact more with members of their family and social circle than with others in the same population. So the concept of “heterogeneous mixing” was introduced to account for this fact, as in the sketch below.
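
To make the change concrete, here is a toy two-group illustration of my own (not taken from any of the cited models): homogeneous mixing uses a single scalar contact rate for everyone, while heterogeneous mixing replaces it with a contact matrix whose entry (i, j) gives how often a person in group i meets people in group j.

```python
import numpy as np

# Toy heterogeneous-mixing calculation. All numbers are illustrative
# assumptions chosen only to make the structure visible.
contact = np.array([[10.0, 2.0],      # group 0 (say, children) meets mostly children
                    [ 2.0, 5.0]])     # group 1 (say, adults) meets mostly adults
p_transmit = 0.05                     # assumed transmission probability per contact
prevalence = np.array([0.02, 0.005])  # infected fraction in each group

# Force of infection on group i: sum over groups j of
# (daily contacts with j) x (chance such a contact is infectious) x p_transmit
foi = p_transmit * contact @ prevalence
print(foi)  # the per-capita daily infection hazard now differs by group

# Homogeneous mixing is the special case where every matrix entry is
# equal, so the hazard comes out identical for everyone.
```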

As more variations were added to the models, epidemiologists found they had to use more complex “stochastic” equations and “Monte Carlo techniques”. This significantly increased the arithmetical complexity of the models, and the approach really took off only in the late 1960s, when the emergence of computers made it possible to carry out the complex calculations involved. I have gone into the evolution of this process in some detail because a number of assessments can be drawn from it that explain the fallacies and pitfalls of epidemiological thinking today. These are discussed below.
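
To make “stochastic” and “Monte Carlo” concrete before turning to those assessments: below is one of the simplest stochastic epidemic models, the classic Reed-Frost chain binomial, run two hundred times (a sketch under assumed parameters, not any model from the Covid literature).

```python
import random

# Reed-Frost chain-binomial epidemic: in each generation, every
# susceptible independently escapes infection from each of the i
# current infectives with probability (1 - p). Re-running the same
# model gives a different outbreak each time -- including outbreaks
# that fizzle out by chance -- which a deterministic curve cannot show.
def reed_frost(n=1000, i0=1, p=0.003):
    s, i, total = n - i0, i0, i0
    while i > 0:
        escape = (1 - p) ** i   # chance of escaping all current infectives
        new_i = sum(1 for _ in range(s) if random.random() > escape)
        s, i, total = s - new_i, new_i, total + new_i
    return total

sizes = sorted(reed_frost() for _ in range(200))
print("smallest outbreak:", sizes[0], "largest:", sizes[-1])
```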

The theoretical basis of the model has not evolved much from the starting hypothesis and assumptions tentatively suggested by Ross over a century ago. Stochastic models and computers have increased the complexity of the calculations involved, but they have not changed the underlying hypothesis.

As noted earlier, the complexity of these calculations is now so high that only computers can carry them out. As a result, it has become increasingly difficult to test, or even fully identify, the assumptions and principles underlying the models. The epidemiologist Fred Brauer explains that “detailed models are generally difficult or impossible to solve analytically and hence their usefulness for theoretical purposes is limited, although their strategic value may be high” (18). So the models used today cannot even reach the final step of Ross’s process: the testing of the model’s assumptions and hypothesis against actual disease outbreaks.

In the 1950s, Macdonald’s developments on the Ross model were based on field studies in Africa and the discovery of factors that it did not account for, such as superinfection, reinfection and latency (1, 19). This kind of analysis was possible so long as the arithmetic was simple enough for the model to be tested by the person studying it. Today, epidemiologists may not even be able to spot where the model goes wrong.

The response of epidemiologists since Ross to the question posed by Brownlee, about what accounts for the rise and fall of epidemics, has been to claim that this is explained by the constants emerging from their models. The constant found by Ross was the “c”, described earlier. A few years later, W.O. Kermack and A.G. McKendrick (the latter another ex-British Indian medical officer and malaria enthusiast) found constants in the form of threshold population densities and infectivity rates (21). George Macdonald, to whom we were introduced earlier, articulated the constant that is used to this day: the “R” or “Reproduction Number”, defined as the number of people whom one infected person can in turn infect (20).

Observe how each epidemiologist found models that apparently fit observed epidemic curves, and was able to derive constants from those models, even though the successive changes in the model showed the earlier one to have been wrong, at least to the extent of not accurately or completely accounting for all the factors that drive epidemics. A clear example of this is how homogeneous mixing, described earlier, did not realistically reflect contact dynamics in a community. This shows that neither fitting nor the derivation of constants proves that any of these models explain the rise and fall of epidemics. The question posed by Brownlee remains unanswered to this day.
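
In the modern SIR notation given earlier, the Kermack-McKendrick threshold and Macdonald’s R take their standard textbook form, which also shows how such constants are built out of the model’s own variables: an epidemic can grow only while the susceptible fraction is above a critical level.

```latex
R_0 = \frac{\beta}{\gamma}, \qquad
\frac{dI}{dt} > 0 \iff R_0 \cdot \frac{S}{N} > 1
```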

A more accurate way of looking at constants such as the “c” or the “R” is that they describe the interplay of the variables used in the model, and nothing much besides.

This process of reasoning from models by pointing to constants derived from them has been critiqued in other model-heavy fields. In their recent book, Radical Uncertainty, the economist John Kay and the ex-Governor of the Bank of England, Mervyn King, describe the concept of “mathiness” expounded by the economist Paul Romer in reference to certain financial and economic concepts which, they say, “exist only within the model” and are rigorous only in the limited sense that “the meaning of each term is defined by the author, and the logic of the argument follows tautologically from these definitions” (22). Stuart Ritchie, a psychologist working in cognitive science, makes a similar critique of “overfitting”, where scientists, instead of using the data obtained from experiments to test a hypothesis, make up a hypothesis to exactly fit their data. This is not very illuminating, he says, because “[m]ost of the time we’re not interested in the workings of one particular dataset… we’re looking for generalizable facts about the world…” (23).
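
Overfitting is easy to demonstrate. In the toy sketch below (my own, not Ritchie’s), a ten-parameter curve fits ten noisy points almost perfectly, yet describes the underlying “law” that generated the data far worse than a simple straight line does:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: 2 * x                    # the real "law" behind the data

x = np.linspace(0, 1, 10)
y = truth(x) + rng.normal(0, 0.2, x.size)  # one noisy dataset of ten points

wiggly = np.polyfit(x, y, deg=9)           # ten parameters for ten points
line = np.polyfit(x, y, deg=1)             # the "generalizable fact"

grid = np.linspace(0.05, 0.95, 200)        # where we check against the truth
for name, coef in [("degree-9 fit", wiggly), ("straight line", line)]:
    in_err = np.mean((np.polyval(coef, x) - y) ** 2)                # its own data
    out_err = np.mean((np.polyval(coef, grid) - truth(grid)) ** 2)  # the world
    print(f"{name}: error on its own data {in_err:.4f}, against the true law {out_err:.4f}")
```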

The somewhat circular exercise of inferring constants from empirically chosen variables, and then claiming that the constants prove the dynamic interplay of those variables, is also reflected in the way the concept of “herd immunity” was developed by epidemiologists (1). This concept evolved out of studies of immunization, to which the WHO turned after failing with malaria eradication. In the course of undertaking population-wide vaccination drives against diseases such as smallpox, polio and measles, it was found that diseases would disappear from populations even when not all of their members had been immunized or infected. Instead of taking this as a hint that the starting assumption of universal susceptibility might be wrong, epidemiologists came up with the idea that a certain threshold number of immune persons in the population “protected” the non-immune, thereby giving the community “herd immunity”. This is an example of what had by now become accepted practice in the field of epidemiology: accommodating the theory to the model, rather than the other way round.

How did epidemiologists measure thresholds for herd immunity? They derived them from the R value. So, like the R, the concept of herd immunity does not come from biological discoveries about disease, immunity or pathogens; it is a mathematically derived quantity that is assumed to represent a biological fact. Macdonald explicitly acknowledged the purely mathematical nature of the R, saying that it “is only a concept and not an actual event in nature” (20).
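
The derivation is short, which is rather the point: the threshold is arithmetic on the R, not a biological measurement. If a fraction h of the population is immune, each case infects on average R0(1 - h) others, and the epidemic recedes once this falls below one:

```latex
R_0\,(1 - h) < 1 \iff h > 1 - \frac{1}{R_0}
```

So an assumed R0 of 3 yields the familiar threshold of about 67 per cent; change the assumed R0, and the “biological” threshold moves with it.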

Moreover, though epidemiologists claim that concepts like the Reproduction Number and the herd immunity threshold are easily calculated and give a standard by which to assess the spread of disease, they have never been able to agree on either the R or the herd immunity threshold for any disease (30). I have described elsewhere how, even in the Covid pandemic, world-leading epidemiologists were unable to come up with a clear or stable estimate of the R-naught, or initial R, for this disease (31).

A key missing link in our understanding of epidemiology is the mystery of individual variations in susceptibility to disease. We have seen how with Covid, some people fall desperately ill, while others get a mild infection and yet others show either no infection or no symptoms at all. This variation is to be observed even among members of the same household who get infected from a common source at the same time. Susceptibility appears to be highly individual and unpredictable, even among people who have the same level of exposure to the infective agent or share the same age, health-status, co-morbidities or lifestyle. On the global level as well, there have been stark differences in the susceptibility, speed, lethality and severity of Covid infection between, for instance, North America, Europe and the UK, on the one hand, and South Asia, on the other. I have discussed in detail elsewhere how the spread and severity of Covid in the slums and favelas of India, Bangladesh and Brazil, have not been as high by comparison with better off neighbourhoods as would be expected with reference to their much higher congestion, poverty and insanitary conditions (32). These variations were not anticipated or accounted for by epidemiologists in their models.  

Explanations for individual and local variations in transmission, susceptibility and severity can only come from fields like biology or medicine. But there is, as yet, no full explanation from these fields for these variations. The reason for this appears to be the manner in which, since the early 20th century, medical science has been able to find drugs and treatments for diseases without having to answer any very profound questions about the biology of pathogens, or the development of disease in the human host.

In his provocative account of modern medicine, The Rise and Fall of Modern Medicine, the British writer and physician James Le Fanu describes how almost all major medicines were discovered either in lucky coincidences, like the serendipitous discovery of penicillin by Alexander Fleming, or by an empirical process of experimentation, where agents found to be effective against pathogens in laboratory tests were administered to patients in clinical trials to see if they could effect a recovery (24). More importantly for our discussion, Le Fanu describes how no medicine has ever been discovered from “first principles”, i.e., by a process of reasoning about the scientific principles of disease, immunity and pathogens leading to conclusions about what kind of chemical or drug intervention would cure a given disease. This process allows treatments to be found even without a very deep conceptual understanding of communicable disease or immunity. The cellular understanding of pathogens and their action on human cells did eventually follow the discovery of drugs, but this has not, as yet, been able to tell us why the collection of cells that is the individual will develop illness in one case, but not in another.

It may be that the processes governing immunity and disease are so multifaceted that there is no one answer to the question of what decides them. If this is the case then we have to consider not just whether the quantitative analysis as done by epidemiologists is reliable, but also whether it is suitable as a method for understanding epidemics. Perhaps it is time for epidemiologists to look at other things than their algorithms to solve the riddle of disease.

It may be time for many other fields to look elsewhere than algorithms for answers. The technique just described in medical science of finding solutions by using empirical methods to “pole vault”, to use Le Fanu’s expression, over fundamental aspects of the problem that are not known or understood, is to be found in many other fields. We learnt about the use of modelling in the field of finance when it spectacularly failed in the World Financial Crash of 2007-8.

In the bubble years preceding the Crash, the finance world went on a hiring spree of mathematicians. Some of them, like Cathy O’Neil and Adam Kucharski, have given vivid accounts of the misconceived use of predictive modelling that led to the Crash (2, 25). Rather disappointingly, neither of them has applied the same critique to the modelling by epidemiologists for Covid.

Regarding the Crash, O’Neil explains how reliance on models can be misleading, as they are simplifications or “toy versions” of the real world and depend on the assumption that past patterns will repeat. The assumption with sub-prime mortgages was that everyone would not default on their loans at the same time. But then they did, crashing financial markets.

Kucharski describes how the apparent diversification of portfolios (which is supposed to reduce risk) was undermined by the growing interdependence of banks and other players in the market (2). Kay and King show how the assumption of the randomness of defaults went wrong owing to increasingly careless lending without due regard to creditworthiness. These are things that require a conceptual understanding of market dynamics that quantitative analysis cannot provide (22).
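
The independence assumption is easy to stress-test. In the toy Monte Carlo below (an illustration of the critique, not any bank’s actual model), a portfolio of loans looks safe when defaults are independent, but an assumed rare common shock that drives borrowers to default together makes catastrophic years routine:

```python
import random

# 100 loans, each defaulting 5% of the time in a normal year. In the
# "correlated" case, an assumed common shock (4% of years) pushes the
# default rate to 60% for every borrower at once.
def catastrophe_rate(correlated, years=20_000, loans=100):
    bad = 0
    for _ in range(years):
        rate = 0.60 if (correlated and random.random() < 0.04) else 0.05
        defaults = sum(random.random() < rate for _ in range(loans))
        if defaults >= 30:  # call 30+ simultaneous defaults a catastrophic year
            bad += 1
    return bad / years

print("independent defaults:", catastrophe_rate(False))  # effectively zero
print("correlated defaults: ", catastrophe_rate(True))   # roughly 4% of years
```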

Kay and King take the critique further, arguing that models will always fail in the real world, except in a narrow range of situations where the phenomenon you are looking at is governed by simple known rules that are “stationary”, such as in games of chance, or unchanging over large periods of time, such as in meteorology and cosmology. They say that in politics, finance and business “the existence of a historic data set does not yield a basis for calculating a future probability distribution” (22).

I wrote earlier that the theory of epidemiology has not developed much since the basic hypothesis suggested by Ross. To be fair, a lot of scientific theory has remained where it was since the early 20th century. But those theories, such as relativity and quantum mechanics, are much more sophisticated and profound, altogether in a different class from Ross’s mosquito theorem and principles of a priori Pathometry. Even in science, though, there are indications of a certain tiredness setting in around the heavily empirical, model-based scientific method that has now been in use for the better part of the last hundred years.

[Image: Salvador Dali, The Persistence of Memory (Pic: legacy1995, 123RF.com). Caption: Have we come to a reckoning in the sciences?]

The theoretical physicist, Lee Smolin, describes a process that is going on in quantum physics that looks similar to the one we saw in epidemiology (26). We are getting more and more models that throw up more and more constants. No one is able to give a full explanation of what exactly these constants are supposed to represent. Each new model gives new predictions not explained by existing principles. One strategy used is to adjust the constants to remove the new prediction in a manner reminiscent of epidemiologists frantically adjusting their R over and over (four times in the course of three weeks in the case of one world leading epidemiological team) to match the observed Covid case growth rate (31).

Another strategy is to explain the new predictions by positing the existence of a new particle or force. The work at CERN, the famous physics research institute in Geneva, and at other particle accelerators, is to find such particles, or rather, since we are now at the very font of applied probability, to find the probability of the existence of these particles. But even when such particles are (probably) found, no one has a clear idea of what this means for the larger questions that physics has been grappling with since the 1920s, such as how to reconcile quantum mechanics with gravity. For while the new particles may explain the model that posited them, they do not unify (provide a conceptual principle explaining) the forces and particles already (probably) found, and their existence in turn throws up more unanswered questions.

The philosopher Karl Popper traces the origins of the current method of physics to the attempt by physicists in the 1920s to resolve the wave-particle duality of subatomic particles (the fact that the same particle could be represented in mathematical equations both as a particle and as a wave) by interpreting wave equations as giving the range of probabilities within which a particle could be found. Popper quotes from a seminal work on quantum mechanics (Elementare Quantenmechanik) by Max Born and Pascual Jordan, where they say: “The experimental methods of atomic physics have… become concerned, exclusively, with statistical questions. Quantum mechanics, which furnishes the systematic theory of the observed regularities, corresponds in every way to the present state of experimental physics; for it confines itself, from the outset, to statistical questions and to statistical answers [emphasis added]” (27).

This was not an undisputed choice. Albert Einstein devoted all his years after discovering the General Theory of Relativity to developing an alternative approach. In a letter to Popper, he says of quantum theory: “A [method of] description which, like the one now in use, is statistical in principle, can only be a passing phase, in my opinion.” Einstein was not able to carry his colleagues with him on this matter. Indeed, his efforts were marginalised and derided by quantum physicists.

This brings to mind Thomas Kuhn’s observations of the resistance in the scientific community towards ideas that challenge the paradigm to which it has committed (28). According to Kuhn, science has evolved through a series of contests between “normal science” and “revolutionary science”. Kuhn says that normal science eventually concedes to revolutionary science when it is unable over a long period to explain anomalies and reconcile contradictions with the operating paradigm that appear in the course of the practice of normal science. This is an optimistic view of science. Kuhn does not consider the possibility that we may exhaust the limits of normal science without having found a revolutionary science with which to replace the old paradigm. There is also the more prosaic problem of ceding ground to new ideas and approaches when eye-watering sums of money have been spent and far-reaching decisions of public policy, like lockdown, have been made on the basis of the claims of normal science under the prevailing paradigm.

Has science, like banks, become too big to fail? Have we reached the end of the normal science spawned by the great scientific revolutions of the early 1900s? Do we need to revisit Einstein’s objections to the statistical way of doing science?

Smolin argues that we have come to an impasse in theoretical physics because of “a style of doing science that was well suited to the problems we faced in the middle part of the twentieth century but is ill suited to the kinds of fundamental problems we face now. The standard model of particle physics was the triumph of a particular way of doing science that came to dominate physics in the 1940s. This style is pragmatic and hard-nosed and favours virtuosity in calculating over reflection on hard conceptual problems [emphasis added]. This is profoundly different from the way that Albert Einstein, Niels Bohr, Werner Heisenberg, Erwin Schrödinger, and other early-twentieth-century revolutionaries did science. Their work arose from deep thought on the most basic questions surrounding space, time, and matter, and they saw what they did as part of a broader philosophical tradition, in which they were at home” (26).

Besides these theoretical issues, the integrity of the process itself seems to be fraying. There is a growing body of writing from science researchers, such as Stuart Ritchie, testifying to hair-raising levels of model-engineered phoniness and cheap tricks: from overfitting, which we discussed earlier; to “p-hacking”, where data is tweaked to clear the threshold of “statistical significance”, allowing “noise” (chance patterns in the data) to be published as findings; to the use of smaller and smaller datasets, leading to chronically exaggerated findings that Ritchie likens to a “giant shadow cast by a moth sitting on a lightbulb”; and the failure to replicate even findings published by established scientists in prestigious journals (23).
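
For readers who have not met “p-hacking”, the sketch below (my own toy example, not one of Ritchie’s) shows its engine: run enough subgroup analyses on pure noise, and some will cross the p < 0.05 line by chance alone.

```python
import math
import random

# Twenty subgroup analyses of a fair coin: there is no real effect
# anywhere, by construction. A two-sided p-value is computed with a
# normal approximation; on average, one test in twenty will come out
# "significant" at the 5% level by luck alone.
def p_value(heads, flips):
    z = (heads - flips / 2) / math.sqrt(flips / 4)  # standardised deviation from fairness
    return math.erfc(abs(z) / math.sqrt(2))         # two-sided tail probability

flips = 100
lucky = []
for subgroup in range(20):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    if p_value(heads, flips) < 0.05:
        lucky.append(subgroup)

print(f"{len(lucky)} of 20 noise-only subgroups look 'significant': {lucky}")
```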

Ritchie makes a straightforward case about the process being corrupted, but there may be a deeper problem. Scientists may be up against the very arbitrariness of the probabilistic method, which has now begun to give negative verdicts on what is really good science or, in the case of clinical practice, good medicine. The problem may not be the p-hacking or the rigged clinical trials, but the standard of statistical significance itself and, in medicine, the clinical trial protocols that keep failing medicines and therapies (including natural therapies) that both doctors and laypersons observe to be working in practice.

We seem to have come to a reckoning in the sciences.

Kay and King make a plea for a change of direction away from probabilistic thinking in economics, which took over with the rise of the Chicago School. They propose, instead, the principle of “radical uncertainty”, which says that in most real-world situations, since we do not know all the possible outcomes, it is meaningless to frame questions in terms of choices or “optimisation” between them (22). Instead of wishing away radical uncertainty with the false certainty of models, we should make decisions incrementally, as we already do in ordinary life when we are not being advised by specialists trained in probabilistic thinking, using our judgment, experience and intuitions as a guide or inspiration, but not deterministically.

The power of this approach is that it allows us to keep aiming for the best decision without pretending that we know everything about the situations that we confront. Instead of keeping the eye pinned to the assumptions of the model, the principle of radical uncertainty allows it to roam across the terrain being assayed.

These ideas could help epidemiology to become more humane and practical. Such an approach would have compelled a more serious and sober consideration by public health experts of a mitigation rather than a containment strategy at the start of the Covid pandemic.

As a subject that affects the lives of people so intimately, epidemiology cannot afford to hide behind numbers. It has to accept the radical uncertainty of interventions in the social sphere. It has to widen its eye from trade-offs between S-I-R compartments to appreciate the wider, and brutal, trade-off that the ‘eliminate, eradicate and contain’ approach to disease demands (32). It has to recognise that disease suppression is also damaging and destructive, including to health and life.  

By avoiding lockdown and continuing with a resumption of social and economic activity despite cases touching nearly a lakh every day, when a few months ago India went into strict lockdown with nationwide cases at a few hundred, we have already accepted this in practice, if not in principle. Had we accepted the limits of our knowledge of the novel coronavirus, we would have taken this incremental approach from the start: continuing life as best we could, while mitigating the effects of the virus, also as best we could.

The Covid pandemic has shone a light on the WHO’s failures with disease containment, failures that have been going on since its inception, away from public attention, mostly in remote regions of Africa. Containment has been failing since the WHO’s Global Malaria Eradication Programme. We need to abandon grandiose projects of disease containment: they have proven to be scientifically mistaken and practically unfeasible. With some of these novel influenza-type viruses, we may be paying the price for eradication-focussed public health strategies that have deprived us of the natural dynamics of population-wide immunity: by reducing competition between viruses, over-sterilising environments, curtailing infection-acquired immunity in childhood, and limiting opportunities to acquire cross-immunity.

Instead of having blind faith in epidemiology, we have to place it in the context of its own history, and the larger problems with the way science is being practiced today. We should always do this with any science and any model.

We have to see the question of how to respond to Covid as a question that belongs to the wider field of thinking about the social and ethical problems arising from scientific and technological interventions in nature and society. In India, we already have a lively history of thought and activism on these issues. We will find a path forward in the work on the Green Revolution, population control, dams, nuclear energy, Genetically Modified seeds and theory of science by Ashis Nandy, Shiv Visvanathan, Vandana Shiva, Claude Alvares, John Dayal, Medha Patkar and Arundhati Roy, to name but a few of the distinguished thinkers and activists on these subjects. We have to dust off the eccentric musings of Mahatma Gandhi on science and technology, and view them afresh in light of our experiments with science since his time.

It is interesting how, again and again, we have encountered the USA steering various fields onto the mechanistic path that this essay seeks to challenge. Perhaps this is what we should have expected to find, studying as we were the forces and circumstances of a century that have led us to the present moment in science; a century that belonged to the USA.

Where do we go from here?  That is the question. We have to snap out of the hypnosis induced by those exponential epidemiological curves, which have been sagging rather logarithmically for a while anyway, and start thinking…thinking hard, fast and furious, as if our life depended on it…because it does.

Suranya Aiyar is a lawyer, with a graduate degree in mathematics.

This was written at the request of a sociologist academic, one of whose fields is the theory of science. However, the journal issue on Covid in which I was assured it would appear never materialised, so I am publishing it here.

Notes and References

(1)    Fine PEM, ‘Herd Immunity: History, Theory, Practice’, Epidemiologic Reviews, 1993, Vol. 15, No. 2, pg. 265, https://doi.org/10.1093/oxfordjournals.epirev.a036121.

(2)    Kucharski A., ‘The Rules of Contagion’, Profile Books Ltd., 2020.

(3)    Brownlee J, ‘Certain Considerations on the Causation and Course of Epidemics’, Royal Society of Medicine, 21 May 1909, Sage Publications (2016), https://journals.sagepub.com/doi/pdf/10.1177/003591570900201307.

(4)    Ross R, ‘Some a Priori Pathometric Equations’, The British Medical Journal, pg. 546, 27 March 1915, doi: https://doi.org/10.1136/bmj.1.2830.546

(5)    Ross R & Hudson H, ‘An Application of the Theory of Probabilities to the Study of a priori Pathometry’, Proceedings of the Royal Society, 1916, available at https://royalsocietypublishing.org/doi/abs/10.1098/rspa.1916.0007

(6)    See also Ross R, ‘The Mathematics of Malaria’, Special Correspondence, The British Medical Journal, pg. 1023, 29 April 1911, doi: https://doi.org/10.1136/bmj.1.2626.1023

(7)    Aiyar S, ‘Dodgy Science, Woeful Ethics’, Seminar, September 2020 and Covid 19: Getting it Wrong, and Making it Worse under ‘What is Epidemiology’ at https://covidlectures.blogspot.com/2020/07/fullpaper060720.html

(8)    See (29) and ‘The Prevention of Malaria’, Reviews, The Indian Medical Gazette, January 1911 and February 1911, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5171386/pdf/indmedgaz71689-0030.pdf, Ross, R, ‘ “The Prevention of Malaria” A Review Reviewed’, The Indian Medical Gazette, pg. 154, April 1911, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5171464/ and ‘Review of “A Review Reviewed” ’, The Indian Medical Gazette, pg. 155, April, 1911, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5171464/.

(9)    Hardy A & Magnello, ME, ‘Statistical Methods in Epidemiology: Karl Pearson, Ronald Ross, Major Greenwood and Austin Bradford Hill 1900-1945’, A History of Epidemiologic Methods and Concepts, edited by Alfredo Morabia, Birkhauser Basel, 2013.

(10) Nathan R, Thornhill HB and Rogers L, ‘Report on the Measures Taken Against Malaria in the Lahore (Mian Mir) Cantonment, 1909’, Wellcome Collection, https://wellcomecollection.org/works/c98gw9ax/items?canvas=19&sierraId=b21351569.

(11) Bhattacharya N, ‘The Logic of Location: Malaria Research in Colonial India, Darjeeling and Duars’, 1900-30, Medical History, 2011, 55: 183-202, doi: 10.1017/s0025727300005755.

(12) Deb Roy R, ‘Quinine, mosquitoes and empire: reassembling malaria in British India’, 1890-1910, South Asian History and Culture, 4:1, 65-86, doi: 10.1080/19472498.2012.750457.

(13) Najera J, Gonzalez-Silva M and Alonso, P, ‘Some Lessons for the Future from the Global Malaria Eradication Programme (1955-1969)’, PLoS Medicine, January 2011, Volume 8, Issue 1, pg. 1, e1000412, doi: 10.1371/journal.pmed.1000412.

(14) UNAIDS, ‘Rights in the time of Covid-19’, 20 March 2020. Link: https://www.unaids.org/en/resources/documents/2020/human-rights-and-covid-19 . See also Aiyar S, ‘Covid-19 and Lockdown of Human Rights’, Live Law, 8 August 2020 https://www.livelaw.in/columns/covid-19-and-lockdown-of-human-rights-161170.

(15) Macdonald G, Cuellar CB & Foll CV, ‘The Dynamics of Malaria’, World Health Organisation Bulletin, 1968, 38, 743-755.

(16) Aiyar S, ‘Covid 19: Getting it Wrong, and Making it Worse’ under ‘The WHO’s deep confusion about pandemics’ at https://covidlectures.blogspot.com/2020/07/fullpaper060720.html

(17) See, for instance, Wilkinson A and Leach M, ‘Ebola – Myths, Realities and Structural Violence’, African Affairs, pp. 1-13, 4 December 2014, http://www.ebola-anthropology.net/wp-content/uploads/2014/12/Briefing-Ebola-Myths-Realites-and-Structural-Violence.pdf; Loignon C, Nouvet E, Coutourier F, et al., ‘Barriers to supportive care during the Ebola virus disease outbreak in West Africa: Results of a qualitative study’, PLOS ONE, 5 September 2018, https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0201091; Garrett L, ‘Plague Warriors: The Deadly Ebola Outbreak in Zaire’, Vanity Fair, 1 August 1995, https://archive.vanityfair.com/article/1995/8/plague-warriors; Human Rights Watch, ‘Congo’s Ebola fight has lessons for Covid-19’, 26 March 2020, https://www.hrw.org/news/2020/03/26/congos-ebola-fight-has-lessons-covid-19; ‘Was DR Congo’s Ebola virus outbreak used as a political tool?’, The Lancet, Editorial, Vol. 393, 12 January 2019, https://www.thelancet.com/action/showPdf?pii=S0140-6736%2819%2930002-9; and Aiyar S, ‘Covid 19: Getting it Wrong, and Making it Worse’ under ‘The misery and failure of disease containment for Ebola’ at https://covidlectures.blogspot.com/2020/07/fullpaper060720.html

(18) Brauer F, ‘Mathematical epidemiology: Past, present and future’, Infectious Disease Modelling, May 2017, 2(2): 113-127, doi: 10.1016/j.idm.2017.02.001.

(19) Smith D, Battle K, Hay SI, et al, ‘Ross, Macdonald, and a Theory for the Dynamics and Control of Mosquito-Transmitted Pathogens’, PLoS Pathogens, April 2012, Volume 8, Issue 4, pg. 1, e1002588, https://doi.org/10.1371/journal.ppat.1002588.

(20) Macdonald G, ‘Epidemiological Basis of Malaria Control’, World Health Organisation Bulletin, 1956, 15, pg. 613-626.

(21) Kermack WO and McKendrick AG, ‘A Contribution to the Mathematical Theory of Epidemics’, 13 May 1927, Proceedings of the Royal Society, doi.org/10.1098/rspa.1927.0118.

(22) Kay J & King M, ‘Radical Uncertainty’, The Bridge Street Press, 2020.

(23) Ritchie S, ‘Science Fictions’, Metropolitan Books, Henry Holt and Company, 2020.

(24) Le Fanu, J, ‘The Rise & Fall of Modern Medicine’, Abacus, 2011.

(25) O’Neil, C, ‘Weapons of Math Destruction’, Allen Lane, 2016.

(26) Smolin, L, ‘The Trouble with Physics’, Penguin Books, 2006.

(27) Popper, K, ‘The Logic of Scientific Discovery’, Routledge Classics, Second Indian Reprint, 2012 (first published in 1934). This book expounds the seminal notion of “falsification” which is widely, but in my view wrongly, interpreted as a justification for preferring empirical to conceptual or deductive methods of science. Popper, in fact, argued from the outset in this book that empirical thinking was a form of deductive thinking where singular statements were used to falsify universal statements. He used falsification to demarcate empirical science from what  he termed as “metaphysics”: logic, mathematics and the ideas, inspirations and conceptual speculations that set the stage for the conduct of empirical science. But in distinguishing empirical science from metaphysics he explicitly says that he is not setting out to deny the validity of metaphysics or to banish logic, mathematics or conceptual thinking and inspiration from the field of science. He repeatedly acknowledges the intrinsic link between the two categories of thought in science and uses the demarcation merely as a device to clarify how empirical science works within the framework of deductive thought. The misreading of Popper has played a big role in relegating conceptual thinking in science to a lower position than empirical thinking, and also of giving the stamp of science to some rather dubious so-called “quantitative” methods in sociological work. But it was not Popper’s intention to posit empirical thinking as being superior to deductive thinking, and his position on how science works from metaphysical to empirical, and vice versa, was consistent with that of Thomas Kuhn (see (28, below)) on normal and revolutionary science.

(28) Kuhn, T, ‘The Structure of Scientific Revolutions’, The University of Chicago Press, 2012 (first published in 1962).

(29) Ross R, ‘Some Quantitative Studies in Epidemiology’, Nature, 5 October 1911, pg. 466.

(30) See (1) for lists of R-values and threshold herd immunity estimates for the same diseases by different epidemiologists.

(31) Aiyar S, ‘What the Imperial College Report said’,  https://covidlectures.blogspot.com/2020/07/covid-lectures-part-2-what-imperial.html

(32) Aiyar S, ‘The Injustice & Violence of Lockdown’, 18 July 2020 at https://covidlectures.blogspot.com/2020/07/covid-lectures-part-6-injustice-and.html and ‘Mumbai Slums’ Battle with Covid Defies Early Expectations’, NDTV Blog, 6 August 2020 at https://www.ndtv.com/opinion/mumbai-slums-battle-with-covid-defies-early-expectations-2273738.


