Elections, voter apathy and all that

May 13, 2009

We made one last attempt to check our names on the voters’ list and cast our votes, and returned disappointed. This is the third time our names have vanished from the rolls (is this a conspiracy, or sheer incompetence?). In any case, I take heart from this little note that my friend and colleague Arunava Sen from ISI Delhi has written for this blog. Take heart, all ye of little faith – you don’t matter anyway 🙂

We have recently been subjected to a barrage of messages from corporations, Bollywood stars and page three familiars exhorting us to go out and vote. The subtext is that if you do not vote, you do not “care”. After the poor turnout in South Mumbai, there was much anguish and soul-searching by perpetually anguished, professional soul-searchers such as Barkha Dutt on exactly this matter.

I wish to point out that it is perfectly rational for voters who “deeply care” (in a sense I will make precise below) to abstain from voting. The argument is very simple and runs as follows. It is extremely improbable for a voter to be able to influence the outcome of a large election. A voter can be influential only if the other voters are exactly divided between the candidates. One does not need a Ph.D. in probability theory to realize that (i) this is an extremely unlikely event and (ii) this probability declines rapidly as the number of voters increases. For instance, if there are only two candidates and voter preferences over the candidates are equally likely, then the chance of being influential is 0.5 if there are three voters, about 0.03 if there are 1,000 voters and very close to zero if there are 10,000 voters. Therefore, even though you care deeply about the outcome of the election, your expected payoff from voting is likely to be very small; if you set these gains against the cost of voting (these costs are not necessarily monetary; they may represent the discomfort of standing in queues and so on), you may decide quite rationally not to vote, even if these costs are very low (as they are in places like Delhi and Mumbai).
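To make the decline concrete, here is a minimal sketch (Python; my own illustration, not part of Arunava Sen’s note) that computes the chance that the other voters split exactly evenly when each of them votes for either of two candidates with probability 0.5. Odd electorate sizes are used so that an exact tie among the others is possible.

```python
from math import comb

def pivotal_probability(n_voters: int) -> float:
    """Probability that the other n_voters - 1 voters split exactly evenly
    between two candidates, when each votes for either with probability 0.5.
    An exact tie is only possible if the number of 'other' voters is even."""
    others = n_voters - 1
    if others % 2 != 0:
        return 0.0
    return comb(others, others // 2) * 0.5 ** others

for n in (3, 11, 101, 1001, 10001):
    print(f"{n:>6} voters: P(pivotal) = {pivotal_probability(n):.4f}")
```

The numbers fall from 0.5 for three voters to about 0.025 for a thousand and under 0.01 for ten thousand, in line with the figures quoted above.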

The argument above for not voting involves a curious inconsistency. Suppose all voters argued in the same way and concluded that they should not vote. Then every voter would be influential and would gain by voting! A formal game-theoretic way of saying this is that for all voters not to vote is not a Nash equilibrium of the game (Nash, here, is John Nash of “A Beautiful Mind”). So what is the Nash equilibrium here? Suppose that there are N eligible voters (N large) with different voting costs denoted by c. Assume that the proportion of voters with voting costs less than c is given by F(c). Clearly F(c) increases as c increases. Assume that each voter benefits by an amount alpha (let us not quibble at this moment about how these things are measured) if her preferred candidate wins. A “caring” voter has a large positive alpha and an apathetic one, presumably, a small positive one. Let p(n) denote the probability of a voter influencing the outcome when n voters actually vote. It is clear that p(n) declines as n increases. Let c* be a solution to the equation p(N F(c*)) alpha = c*. Some harmless assumptions regarding the functions p and F (continuity, etc.) will guarantee the existence of a solution. The Nash equilibrium of the game is that voters with costs below c* will vote while those with costs above c* will not. The point here is that the turnout on which voters’ decisions to vote are based is exactly the one generated by those decisions.
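To illustrate the fixed-point condition p(N F(c*)) alpha = c* numerically, here is a minimal sketch. The functional forms chosen below for p and F (a standard square-root approximation for the pivot probability and a uniform cost distribution) are my own illustrative assumptions, not part of the note.

```python
import math

def p_influential(n: float) -> float:
    """Illustrative pivot probability when n voters turn out:
    roughly sqrt(2 / (pi * n)) for a 50-50 electorate, capped at 1."""
    return min(1.0, math.sqrt(2.0 / (math.pi * max(n, 1.0))))

def F(c: float, c_max: float = 10.0) -> float:
    """Illustrative cost distribution: voting costs uniform on [0, c_max]."""
    return min(max(c / c_max, 0.0), 1.0)

def equilibrium_cutoff(N: int, alpha: float, c_max: float = 10.0) -> float:
    """Solve p(N * F(c*)) * alpha = c* by bisection.  The left-hand side
    falls as c rises while the right-hand side rises, so the crossing
    point is unique."""
    lo, hi = 0.0, c_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if p_influential(N * F(mid, c_max)) * alpha > mid:
            lo = mid   # expected benefit still exceeds cost: cutoff lies higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

c_star = equilibrium_cutoff(N=1_000_000, alpha=100.0)
print(f"c* = {c_star:.3f}, equilibrium turnout = {F(c_star):.1%}")
```

With these made-up numbers (a million eligible voters, alpha = 100 and costs uniform on [0, 10]), the cutoff comes out around c* ≈ 0.4, i.e. an equilibrium turnout of only about 4%: almost everyone who “cares” still stays home.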

Is the discussion above “useful” in any sense? I think it is, if you are interested in motivating voters to vote. If your message is “Vote because you can choose a better Government”, you are trying to get voters to increase their alpha. This is not likely to have a large effect because the p(n) term is already very close to zero. A better strategy is to emphasize that voting is a duty, just like paying taxes and not throwing garbage into the street. The effect of this is to add a positive constant K to the left-hand side of the equilibrium equation. Voters get this benefit independently of the outcome of the vote – you can think of it as the “warm glow” you get when they put that ink on your index finger. It is quite easy to verify that if K is large enough, you get a corner solution where all voters, irrespective of their voting costs, vote.
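Continuing the same illustrative sketch (and reusing the p_influential and F helpers above, which are my own assumptions), the “duty” payoff K enters as a constant added to the benefit side; once K exceeds the highest voting cost, the cutoff hits the corner and everyone votes.

```python
def equilibrium_cutoff_with_duty(N: int, alpha: float, K: float,
                                 c_max: float = 10.0) -> float:
    """Bisection for the modified condition p(N * F(c*)) * alpha + K = c*.
    If K >= c_max, the duty payoff covers every voter's cost: full turnout."""
    if K >= c_max:
        return c_max          # corner solution: everyone votes
    lo, hi = 0.0, c_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if p_influential(N * F(mid, c_max)) * alpha + K > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for K in (0.0, 1.0, 5.0, 10.0):
    c_star = equilibrium_cutoff_with_duty(N=1_000_000, alpha=100.0, K=K)
    print(f"K = {K:>4}: turnout = {F(c_star):.1%}")
```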

The Indus Valley Script

April 25, 2009

The Indus valley script — is it a language or a bunch of pictograms? There is a school of thought which believes it’s a bunch of pictograms — typically of fish, rings, cows’ heads, and men. It seems now that this is not true.

In the words of Ronojoy Adhikari (one of the authors of the work):

What we have done is to compare the entropy associated with the conditional probability of a token following a given token, in a sequence of tokens. These tokens could be letters in words, words in sentences, base pairs in DNA, keywords in computer programs, and so on. The functional role of tokens is not studied, but only the order in which they appear in sequences. In other words, we focus on syntax and not on semantics. The entropy of this conditional probability is, then, a measure of how much order there is in the sequence. If token order is irrelevant, as would be in a random collection of tokens, the entropy is large. If token order is highly constrained, the entropy is small.

With this in hand, we compare sequences of both linguistic and non-linguistic tokens: English (both words and letters), Sumerian, Old Tamil and Sanskrit (linguistic), and DNA code, Fortran code, Kudurru inscriptions and Vinca symbols (non-linguistic). The entropy of all the linguistic systems falls within a narrow band, while the non-linguistic sequences have either large (DNA, …) or small (Fortran, …) entropy.

Repeating the same for the Indus sequences, we find that they fall right in the middle of the linguistic band. Thus, in the sense of syntax, the Indus script is far more akin to natural language than to non-linguistic systems like DNA, Fortran, Kudurru and Vinca.
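As a rough illustration of the measure described above (a minimal Python sketch of my own, not the authors’ code or corpus), one can estimate the entropy of the next token conditioned on the current one from bigram counts and compare an ordered sequence with a shuffled version of itself:

```python
import random
from collections import Counter
from math import log2

def conditional_entropy(tokens):
    """H(next token | current token), estimated from bigram counts."""
    pairs = list(zip(tokens, tokens[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(first for first, _ in pairs)
    n = len(pairs)
    h = 0.0
    for (a, b), count in pair_counts.items():
        p_ab = count / n                       # joint probability P(a, b)
        p_b_given_a = count / first_counts[a]  # conditional probability P(b | a)
        h -= p_ab * log2(p_b_given_a)
    return h

text = ("the quick brown fox jumps over the lazy dog " * 200).split()
shuffled = list(text)
random.shuffle(shuffled)

print("ordered :", round(conditional_entropy(text), 3))      # low: order is strongly constrained
print("shuffled:", round(conditional_entropy(shuffled), 3))  # high: order carries little information
```

The ordered text gives a much smaller value than its shuffled counterpart; the quoted result is that the Indus sequences fall in the intermediate band occupied by the natural languages, below DNA-like (large entropy) and above Fortran-like (small entropy) sequences.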

The authors of this work are Rajesh Rao of the University of Washington; Nisha Yadav and Mayank Vahia of the Tata Institute of Fundamental Research in Mumbai, India; Hrishikesh Joglekar, Mumbai; R. Adhikari of the Institute of Mathematical Sciences in Chennai, India; and Iravatham Mahadevan of the Indus Research Center in Chennai. The research was supported by the Packard Foundation and the Sir Jamsetji Tata Trust.

More information about this work is in Science Daily; the earlier ‘foundation’ paper is on the arXiv, and the Science paper is available here (requires subscription). Some information about the Indus Valley Civilisation is here.

Tailpiece: Steve Farmer et al., the original proponents of the ‘Indus Valley script is not a language’ thesis, have put up a refutation of the above work (in somewhat intemperate language, methinks) here.

Abel Prize

April 16, 2009

The Norwegian Academy of Science and Letters has decided to award the Abel Prize for 2009 to the Russian-French mathematician Mikhail Leonidovich Gromov (65) for “his revolutionary contributions to geometry”. The Abel Prize recognizes contributions of extraordinary depth and influence to the mathematical sciences and has been awarded annually since 2003. It carries a cash award of NOK 6,000,000 (close to € 700,000, US$ 950,000). Mikhail L. Gromov will receive the Abel Prize from His Majesty King Harald at an award ceremony in Oslo, Norway, May 19.

Here, courtesy of my colleague Kapil Paranjape, is a short layman’s description of the work that got Gromov the prize, followed by a link to a longer exposition.

At the end of the 1950s, it was felt that there was a good mathematical theory of the geometry of (classical/non-quantum) physical systems. Broadly, this could be called the study of connections on principal bundles on manifolds or, to use physics terminology, the study of gauge fields; we will refer to these as Cartan geometries below. The qualitative properties of such geometries can be obtained by studying their topological invariants, which can be thought of as properties that do not change under continuous operations like stretching. (Topology is sometimes called “rubber-sheet” geometry.)

This became the background in which an enormous number of beautiful theories (like cobordism, Index theory and so on) were studied through the 1960’s, 70’s and 80’s.

This formulation of geometry required the physical system to have an infinitesimal homogeneity (in other words, the laws of physics are to be given by differential equations involving tensors and spinors). From a mathematical perspective strong notions of continuity, such as differentiability, were essential to this approach to geometry.

Mikhail Gromov showed how we can study geometric properties without retaining homogeneity or continuity.

The key mathematical definition is that of quasi-isometry. Gromov’s definition allows us to “tear” space and “re-stitch” it differently provided that these operations are small in comparison to the scales at which we want to examine the space; the resulting geometry still shares some “coarse” geometric properties with the older one. In particular, it is possible to detect whether the geometry is negatively curved (i.e. like the non-Euclidean geometry of Bolyai and Lobachevsky). In addition, one can study the quasi-symmetries of the geometry (which are quasi-isometric transformations of space to itself). This leads to rigidity results that bind groups of symmetries more tightly with the kinds of spaces that they can act on.
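For readers who want the precise statement, here is the standard textbook definition of a quasi-isometry (added for reference; it is not part of Kapil Paranjape’s note):

```latex
% Standard textbook definition of a quasi-isometry (not from the original note).
A map $f : (X, d_X) \to (Y, d_Y)$ between metric spaces is a \emph{quasi-isometry}
if there exist constants $\lambda \ge 1$ and $C, D \ge 0$ such that
\[
  \frac{1}{\lambda}\, d_X(x, x') - C
  \;\le\;
  d_Y\bigl(f(x), f(x')\bigr)
  \;\le\;
  \lambda\, d_X(x, x') + C
  \qquad \text{for all } x, x' \in X,
\]
and every point of $Y$ lies within distance $D$ of the image $f(X)$.
```

The additive constant C is what licenses the “tearing” and “re-stitching” at small scales, while the multiplicative constant controls behaviour at large scales.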

There are a number of physical systems (ensemble systems like sand-piles and glass or biological systems) that do not exhibit the kind of homogeneity that Cartan geometries have. It certainly seems as if Gromov’s coarse structures are more applicable in such cases. Further refinements are required before one can design and carry out experiments that will confirm these expectations.

For those interested in the interface between geometry and physical systems, the 3G technology of Geometry, Groups and Gromov is worth exploring.

For a longer (layman) exposition on the subject see here.

A reply to Dyson

April 13, 2009

This is a guest blog post by my colleague R. Shankar.

Dyson starts off with a critique of climate scientists:

“But I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in. The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models.” -Dyson-

Somewhat unkind. All earth scientists I have talked to are acutely aware of the limitations of the models. The discussion in the IPCC report also reflects this. E.g., to quote from their latest report (AR4, p. 113):

“A parallel evolution toward increased complexity and resolution has occurred in the domain of numerical weather prediction, and has resulted in a large and verifiable improvement in operational weather forecast quality. This example alone shows that present models are more realistic than were those of a decade ago. There is also, however, a continuing awareness that models do not provide a perfect simulation of reality, because resolving all important spatial or time scales remains far beyond current capabilities, and also because the behaviour of such a complex nonlinear system may in general be chaotic.” -IPCC report-

Much of the effort in the climate sciences is in observation and data collection. I would put the number who “sit in air-conditioned offices and run computer models” at a very small fraction of the total. The IPCC report is based on a huge amount of field observations and data.

Next he makes a statement:

“There is no doubt that parts of the world are getting warmer, but the warming is not global. I am not saying that the warming does not cause problems. Obviously it does. Obviously we should be trying to understand it better.” -Dyson-

When people say global warming, what is meant is that the average global temperature has increased by about 1 degree C in the past century. This is based on instrumental observations which have been taken by Met stations all over the world. Nobody says that it is uniform in all parts of the globe.

The reason to worry about this one degree per 100 years is that the “natural” rate of temperature change (due to the glacial cycles which Dyson also discusses) is about 10 degrees in 100,000 years, i.e. 1 degree in 10,000 years. This is concluded from the ice-core data, which goes back 800,000 years. Even during the sharp rises and falls, the rate never exceeded about 1 degree per 1,000 years. So the current rate of increase is abnormally high.

Coincident with this rise is the rise in CO2 levels. CO2 is at 380 ppm today and has never exceeded 300 ppm in the past 800,000 years.

He then says:

“I am saying that the problems are grossly exaggerated. They take away money and attention from other problems that are more urgent and more important, such as poverty and infectious disease and public education and public health, and the preservation of living creatures on land and in the oceans, not to mention easy problems such as the timely construction of adequate dikes around the city of New Orleans.”

While development and conservation efforts could definitely be much greater, I don’t think that the hype about climate change is a significant cause of their being less than what they should be.

He then talks about what are called “geo-engineering solutions” (there are many such in the market), but without mentioning whether any serious research has been done to back his statements.

He then says:

“When I listen to the public debates about climate change, I am impressed by the enormous gaps in our knowledge, the sparseness of our observations and the superficiality of our theories. Many of the basic processes of planetary ecology are poorly understood. They must be better understood before we can reach an accurate diagnosis of the present condition of our planet. When we are trying to take care of a planet, just as when we are taking care of a human patient, diseases must be diagnosed before they can be cured. We need to observe and measure what is going on in the biosphere, rather than relying on computer models.”

There are of course huge gaps in our knowledge (which is why one should be cautious about implementing geo-engineering solutions), but again he gives the impression that the entire case for climate change is based on simulations. Even a cursory reading of the IPCC reports should convince anyone that this is not true.

The statements about the recent past (approximately 100 years) are based on observation. The climate models do reproduce average quantities, like the global average temperature of the recent past (100 years), reasonably well. These models are then used to make projections for the immediate future (the next 100 years). They predict temperature rises that are sensitive to carbon emission levels, with a worst case of about a 4-degree rise over the next century.

As Dyson points out, the carbon cycle is indeed not well understood, and a lot of fudge factors must be going into the models to make them fit the past data. One therefore has to use one’s judgment to decide how reliable they are. But rejecting them completely is, in my opinion, very bad judgment. A doctor has to make a diagnosis based on whatever observations and tests he/she has conducted, however incomplete his/her knowledge of the processes in the human body may be.

In my opinion, the model predictions should be reasonably reliable for the immediate future, where the validity of the fudge factors may not break down. What will happen over time scales of thousands of years is indeed unpredictable, and the IPCC report says nothing about it. The worry is more about the immediate future (2000-2100). So even if the rise in CO2 levels and temperature is a transient phenomenon of a few hundred years, we have to worry about it and think about corrective action. Controlling emissions seems to be the most reliable way.

The details of how the average temperature rise will affect the climate are still open (again, for the immediate future). E.g., I feel that the questions most relevant to India are how it will affect (i) agriculture, (ii) the monsoon and (iii) disease. All of these seem to be very open questions.

The next part of his article talks about time scales of thousands of years, where it is really anybody’s guess.

The final part is philosophical, and I do not think classifying all the opinion on this issue into two classes is correct (it smells of the attitude “you are either with us or against us”). People have all types of permutations and combinations of extreme opinions. Nevertheless, apart from a few fringe elements, nobody would deny that ideally we should aim for a pattern of sustainable development. Of course the devil is in the details of what is meant by sustainable development, but I do not see any major ethical conflict here.

Even without the climate models, the data given in the graphs below, along with the basic physics of the greenhouse effect, is enough to convince me that the problem is genuine. (The URL of the ice-core data graph is given in the picture; the other three graphs are from the IPCC report AR4; the comments below the graphs are mine.)

[Figure: ccdata]

Roddam Narasimhan has pointed out recently that Arrhenius had estimated a rise of 5 degrees C if the CO2 level in the atmosphere doubled (from what it was in his time). All the complicated climate models also predict roughly the same. So, as he put it, the number has not changed, only our confidence in it has. Roddam Narasimhan is working on clouds, and he motivated this work by saying that clouds are among the most poorly modelled things.

With this in mind, and looking at the graphs, I feel it is really unlikely that the downturn in CO2 levels and temperature will come due to natural processes alone (if emissions are not controlled) within a few hundred years (if at all).

Are you a humanist or a naturalist?

April 13, 2009

Naturalists believe that nature knows best. For them the highest value is to respect the natural order of things. Any gross human disruption of the natural environment is evil. Thus, excessive burning of fossil fuels is evil.

The humanist ethic begins with the belief that humans are an essential part of nature. Through human minds the biosphere has acquired the capacity to steer its own evolution, and now we are in charge. Humans have the right and the duty to reconstruct nature so that humans and biosphere can both survive and prosper. For humanists, the highest value is harmonious coexistence between humans and nature.

Thus speaks the great theoretical physicist Freeman Dyson in one of his most thought-provoking and ‘heretical’ essays. It is possible to disagree with him and yet appreciate the caution he is advocating. Dyson’s views on (non-)global warming are by now well known, but here he also discusses other matters.

Confirming Einstein?

November 23, 2008

Guest Post by Sourendu Gupta, TIFR, Mumbai

UPI today picked up what is probably its first ever news story about lattice gauge theory. This is a method of dealing with a quantum field theory which is usually applied to problems where nothing else works, and is heavily dependent on modern supercomputers. The news is about an application to computing the mass of a proton by Stephan Duerr and his collaborators. If you are not familiar with particle physics and field theories, then think of it as computing Avogadro’s number to three digit precision using as input only the standard model of particle physics.

Quantum field theories inherit infinities from classical theories of matter, the most well-known of which is the infinity encountered in Lorentz’s theory of the electron. Because of such infinities, classical theories cannot manage to explain the structure of matter, i.e., the masses of elementary particles and their basic interactions. However, quantum theories can remove these infinities and make precise predictions about physical quantities. The process by which this is done is called renormalization.

In the 1970s, Kenneth Wilson exploited a deep connection between quantum theory and statistical mechanics to understand the physics of renormalization. Since then his insights have permeated all theories of matter and started a quiet revolution which has gone largely unnoticed outside the world of theoretical physics. However, Wilson’s way of understanding renormalization has provided solutions to many outstanding problems: the computation of Avogadro’s number starting from particle physics being just one.

Mass media, however, recognize Einstein as the sole repository of genius in the sciences. Hence the connection with him in UPI’s report, and the invocation of his name by science media in general. To the extent that particle physics uses relativistic quantum field theories, the report by UPI is certainly not wrong. E=mc2 is certainly important (again, for the umpteen millionth time) and the supercomputers used most definitely treat the theory on a space-time lattice. However, these are not the most exciting things about the result reported.

For those who attend the Lattice Meeting each summer, the exciting aspect of this work is that it is one of several this year which compute the masses of the proton and other hadrons with high accuracy. Lattice gauge theory is now testably one of the most accurate methods of dealing with quantum field theory.

You might expect such a powerful technique to have other things to say. It does. Other works have begun to predict new and as yet unobserved hadrons, some of which may well be seen at the LHC, the Beijing synchrotron, the Jefferson Lab or the Japanese collider J-PARC. Results from lattice QCD are also important in tests of CP violation, for which one half of this year’s Nobel prize in physics was awarded.

Interestingly, the other half of the same Nobel prize is closely related to another prediction of lattice gauge theory: that of a phase transition to a completely new state of elementary particle matter, one in which there are no hadrons. The reverse phase transition is expected to have occurred within the first microsecond of the history of the universe. This kind of matter may already have been created in a lab: the RHIC. It will be studied further in the LHC.

We are now firmly in the era of lattice gauge theory as a major tool in the box of tricks for theoretical particle physics. This is the place where quantum physics, relativity and supercomputing come together. The newspaper report you saw may have got it wrong, but it wasn’t completely wrong.

The Nobel Prize – this year’s picks

October 5, 2008

Here are some picks from various sources including Physics World:

Daniel Kleppner: Hydrogen maser

Perlmutter and Schmidt: Accelerating expansion of the universe and hence the dark energy postulate

Guth and Linde: Inflationary scenario

Berry and Aharonov: Aharonov-Bohm effect and also the related Berry phase

Pendry and Smith: Negative refraction

Penrose, Hawking: various developments in General Relativity and Cosmology

Suzuki (Super-K) and McDonald (SNO): neutrino oscillations

Unfortunately, the Guth-Linde inflationary picture is not yet completely confirmed experimentally, and Penrose and Hawking do not have any specific prediction tested by experiment, which is what the Swedish Academy looks for in theory prizes.

Do put in your nominations — even though the Swedish Academy is probably not one of the regular readers of this blog😦

Cultural Relativism and the end of the world

September 11, 2008

Cultural relativism is defined by Wikipedia as the principle that an individual human’s beliefs and activities should be understood in terms of his or her own culture. Unfortunately, in recent times and in popular thought and discussion, it has also come to mean that all views are equally valid (after all, each view is a function of the person’s own cultural milieu). This point of view, which is the result of taking an idea to its extreme limit in this day of political correctness, was forcibly brought home to me on the issue of the doomsday scenario being predicted as a result of the LHC startup.

Television and newspaper reports have been full of predictions of the formation of black holes that will eat up the earth. (In India, the two channels particularly guilty of this hype have been Aaj Tak and India TV, two sensation-mongering channels.) Unfortunately, the net result has been that scientists have been scrambling to give well-reasoned arguments for why this is all hogwash, taking precious time away from more useful work.

While it is true that scientists have occasionally been guilty of arrogantly dismissing the public’s right to know what kind of research they are doing with public money, it is also important to dismiss crackpots as crackpots. After all, we don’t engage ‘flat-earth’ proponents in any serious debate. Both sides of an issue do not carry equal weight in such arguments. To take an example more relevant to the US, creationism and evolution are not two equally valid theories of the evolution of mankind. The same applies to the doomsday scenario.

Of course, it is another matter that the theories that predict black hole formation are, in my opinion, pretty far-fetched, making many assumptions about the nature of space-time and the kinds of particles that live in some extra dimensions. However, these are still scientific theories, published in peer-reviewed, respected scientific journals, and hopefully subject to being falsified. They cannot, on any account, be compared to theories propounded by crackpots and eccentrics with only a nodding acquaintance with the structure and methodologies of science, and in this particular case, high energy physics. Therefore, having taken sufficient trouble to demonstrate quantitatively why these fanciful ideas have no basis in fact, scientists should just ignore this phenomenon and get back to their work. Otherwise, by engaging these people in prolonged debate, one is conferring ill-deserved respectability on them.

What is most amusing is that most newspapers and TV channels have now gone off the doomsday scenario, believing that yesterday’s beam test by CERN was proof that no black holes that eat up the earth were produced. Ironic when you think that no collisions took place! It just shows that the airheads are ignorant of the very scenario they are purveying. (The Times of India even had an editorial claiming that if you are reading this paper the next day, it means it’s all safe and nothing has happened).

Tailpiece: If anyone is interested in a layman-level article on this issue, please read this description by Michael Peskin.

Shining Light on a Dark problem

September 7, 2008

Dark matter is one of those abiding mysteries in physics. Although something like a quarter of our universe is supposed to be filled with dark matter (visible matter is just 4%, the rest being another mysterious energy field called, very illuminatingly, dark energy), physicists really have no clue what dark matter is made of. None of the known particles of the Standard Model of particle physics fits its properties.

It is now believed that the European satellite PAMELA has some evidence about the nature of dark matter. However, the Italian-led research group which is believed to have this data has kept it a closely guarded secret, apart from a quick flash of a slide in an international conference.

Some physicists have now decided to take matters into their own hands. Arming themselves with a digital camera poised to shoot, they quickly took photographs of the slides that were flashed at the conference. Using this ‘photographic’ evidence, they have submitted a couple of papers to the preprint arXiv, giving full acknowledgment to the ‘photographic’ source of their data. These papers are here and here. Both have recreated data from photos taken of a PAMELA presentation on 20 August at the Identification of Dark Matter conference in Stockholm, Sweden.

If these enterprising physicists used a flash to photograph the slides, would it be a case of shining light on a dark problem?

GLAST

August 30, 2008

NASA’s Gamma-ray Large Area Space Telescope (GLAST) has produced one of its first images, shown here in this Nature report. It has, as a bonus, produced an image of a blast of gamma rays from a supermassive black hole christened 3C454.3, along with images of the Geminga and Crab pulsars.

Most of us have got used to the series of beautiful and stunning images from Hubble over the last many years. Now, as Hubble’s future remains uncertain at the hands of short-sighted science policy managers, it has a competitor in another region of the electromagnetic spectrum.

GLAST has now been renamed Fermi Gamma-ray Space Telescope.

In this connection, I have always been slightly leery of having each news item in Nature opened to discussion through the Comments section. Science thrives and progresses through debate and discussion (or through falsification, in Popper’s famous formulation), but such discussions have meaning only when they are between informed participants. Most of the discussions in the comments sections are by tyros and self-styled scientists who, as in this case, proclaim that there cannot be any black hole, or, as in another recent news item about the LHC, trot out that old bromide about how the LHC will destroy the world by producing black holes.

What purpose is served by giving crackpots (yes, indeed that’s what they are) this forum to flaunt their ill-informed fantasies, that too in the pages of a science journal like Nature? Here is an extract from a particularly long and fatuous comment: These experiments to date have so far produced infinitely more questions than answers but there isn’t a particle physicist alive who wouldn’t gladly trade his life to glimpse the “God particle”, and sacrifice the rest of us with him. Reason and common sense will tell you that the risks far outweigh any potential(as CERN physicists themselves say) benefits. Who are these CERN physicists and who are these suicidal maniacs who are preparing to lay down their lives to see the Higgs? Does such a discussion column merit an existence?