Thursday, April 17, 2008

Steven Pinker on Religion

I’ve been waiting a while to see if Pinker would say something about religion, rather in trepidation because I feared it would damage my regard for him. Sadly, my worst fears have been realised in his feeble response to the Templeton Foundation’s question "Has Science made Belief in God Obsolete?"

Pinker’s answer is yes, although he admits that the whole of modern rationality is required to do the job of debunking God. The trouble is that his argument is pitiful. It comes in three parts. The first says that because science has successfully rendered the Book of Genesis obsolete as a scientific text, religion has lost one of its major purposes. This is a standard atheist trope, but it is wholly ill-informed about what religion is about. Only a tiny part of the Bible and other sacred texts deals with quasi-scientific explanations. Furthermore, although many have seen God in His Creation, the design argument as such is a modern invention that only really appears in the eighteenth century. When religious people saw God in the workings of nature, they never thought this was some sort of proof that he existed, but rather something to praise him for. The idea that religion primarily provided an exposition of the natural world is a throwback to the nineteenth century, for which we have now realised there is almost no evidence.

In short, religion got going without the design argument and it will happily keep going without it as well. That said, I do find some mileage in both the fine tuning and the cosmological arguments. Pinker dismisses both of these with the old saw “who made God?” Fifty years ago, almost all atheists thought the universe was eternal and dismissed as nonsense the idea it had a beginning. It is odd that they won’t extend the same courtesy to God.

In the second part of his argument, Pinker moves to morality. I praised his recent article on the evolution of morals while pointing out its limitations. Pinker admits you can’t get ethics from science but insists you can’t get them from religion either. The argument he uses, Euthyphro’s Dilemma, is 2,500 years old and so, I assume, does not form part of modern science. The trouble is that Plato’s argument cannot be applied to a transcendent deity. It is quite possible (actually it is true) that morality cannot be derived from natural causes. But it does not follow that an external God who created nature cannot impose rules upon her that nature herself is unable to produce. Nor does it follow that those external rules, like the mathematical truths that Gödel showed cannot be reduced to axioms, are not objectively true.

Thirdly, Pinker claims that the Golden Rule is a rational ethical system in itself. It is good to see that Pinker has cracked all the ethical problems that have kept philosophers up at night for centuries, but I fear you cannot reduce morality to such a single rule. Nor can Pinker escape the fact that he is rationalising a system he has inherited from Christianity. Given the state of today’s world, I think it would be foolish to assume that all moral difficulties have been solved.

I’ve mentioned Russell’s Syndrome in the past. This is a medical condition whereby highly intelligent people start talking nonsense as soon as they turn their minds to religion. It is sad to find that Pinker is a sufferer. Would anyone like to hazard a guess from whom he caught it?

Click here to read the first chapter of God's Philosophers: How the Medieval World Laid the Foundations of Modern Science absolutely free.

So is Shakespeare Any Good?

Last Saturday night saw my wife and me at the Roundhouse theatre in north London for a Royal Shakespeare Company (RSC) performance of Shakespeare’s Henry V. It was, I thought, a reasonably good performance although King Henry himself did not dominate the stage in the way that a great actor can do. We saw Matthew Macfadyen playing Prince Hal (that is, Henry V before he became king) at the National Theatre in Henry IV Parts One and Two last year and thought he was outstandingly good. We are very keen to see him tackle the role as King.

One of the things about reading the reviews of performances of Shakespeare’s plays is that they rarely tell you whether the play itself is any good. The actors are criticised without much criticism being directed at Shakespeare himself. Assuming that you can’t see all of them, it would be helpful to know which of his plays are most worthy of our time. Here are my thoughts based on the ones I’ve seen in the theatre (rather than just read or watched on TV). Note that for this reason, neither Hamlet nor Othello is on the list.

The Great:
King Lear: The 1990 performance at the National Theatre with Brian Cox as Lear and Ian McKellen as Kent is widely felt to be one of the best of all. For me it remains the finest work of art I have ever experienced. Last year’s RSC production with McKellen in the role of Lear himself was much less impressive but still deeply affecting.

Henry V: The performance on Saturday was only average but the greatness of the play is hard to disguise. This is a searingly honest portrayal of both the horrors and glory of war.

Richard III: The famous hunchback is one of art’s darkest villains, whatever resemblance he may or may not have to the historical king. Again, the performance I saw, at the Young Vic, was not one of the best and needed a better actor in the starring role to dominate the proceedings.

The Good:
Macbeth: Simon Russell Beale took the lead last year in London. He usually plays nice people and his casting against type as the man corrupted by his wife and a desire for power was inspired. I found the play itself less convincing because I couldn’t believe just how evil Macbeth becomes. Killing off your political rivals is one thing; child murder is quite another.

Richard II: The message of this play is about the consequences of rebellion. In recent history, it has been most commonly referred to when considering the downfall of Margaret Thatcher. The coup against her plunged her party into over a decade of hopeless strife. But even Kevin Spacey in the lead at the Old Vic could not disguise that Richard II himself is not Shakespeare’s most convincing character.

Much Ado About Nothing: The RSC production of 2006 was brilliantly cast and it is hard to see how it could be bettered. The play itself is genuinely funny and definitely worth seeing.

Henry IV Parts One and Two: If the best bits of each were merged into a single play, this would be one of the greats. In two parts, it slightly outstays its welcome. We saw Michael Gambon’s Falstaff at the National Theatre, but he was completely thrown into shadow by Matthew Macfadyen’s Prince Hal. Contains what is probably the rudest joke in Shakespeare’s oeuvre.

The Average:
Twelfth Night: Has its moments, but the humour is too cruel for my taste and the characters unsympathetic. As in many of the comedies, the subplot is better than the plot. Saw it at Stratford in 1993.

The Tempest: One of the last of the plays to be written, with some truly opaque verse. The production at the Roundhouse that I saw about five years ago was very good, if played a bit too much for the rather humourless laughs that the play provides.

Measure for Measure: The National Theatre production of 2004 (if I recall the year correctly), could scarcely have been bettered but the play itself is rather lightweight. The plot is too dark to be a comedy and the National’s production made a virtue of this by setting it in a fascist state.

The Bad:
Coriolanus: This play is really not very good. My wife and I saw it shortly after we met and it was another year before we could admit to each other how poor it was. Avoid.

While we were talking about this, my wife said she would add the Comedy of Errors (seen at the Globe in 2002) to the list of great plays and Antony and Cleopatra (seen at Stratford some time in the 1990s with a stellar cast) to the list of the bad ones. Antony, in particular, spends an inordinately long time dying.

A Prime Minister we Deserve?

Since Tony Blair left office at the beginning of last summer, British politics has been in a state of flux. Now finally, things seem to be settling down with a slew of polls showing the same thing – a government in serious trouble and the Conservative opposition well ahead for the first time in two decades. Pretty much all the commentariat in the newspapers agree that the reason for this is the dismal performance of the new Prime Minister Gordon Brown. At the moment they are crowing about the way that he is being ignored on his trip to the United States, which is hardly surprising given his lack of star appeal.

But I think the experts get the reasons for Brown’s failure wrong. We often hear journalists and politicians claim that “no one knows what he stands for”; “he has no programme” and “he has no vision.” They imagine that these are the things that the public is telling itself and if only Brown could articulate his beliefs, then all would be well for him. It is true that the Prime Minister is often over-cautious and can appear to be dithering. The election that he failed to call last October after letting everyone think he was about to is a case in point. He wanted to test the water, but it made him look like a man who couldn’t take the plunge. But much of this is media spin. The real problem is more basic and nothing to do with presentation.

Most people in Britain are not poor. While many feel themselves under pressure financially, they also have aspirations to be even better off. They would like a bigger house, a German car and a smaller iPod. Gordon Brown’s public priority has always been for the poor, especially poor children and pensioners. He has, over the last ten years while he was Chancellor of the Exchequer, used vast amounts of public money to try to improve their lot. He may even have succeeded to some extent, although you would never guess it from watching the evening news. He boasts of his ‘moral compass’ and there is little doubt he possesses one. He wants a more just society where the have-nots are properly looked after. This is all extremely laudable and Christians should applaud it. But none of this has much appeal for the Middle Classes, especially while the credit crunch is making them nervous.

So I conclude that people know perfectly well what Brown cares about. The problem is, they think it isn’t them. Ask an average person if Brown is on their side and they will say no he isn’t. He may be on the side of the poor, the dispossessed and the needy, but in a democracy that is never enough. Tony Blair could always convince the Middle Classes that he was one of them and had their interests at heart. Brown cannot because he doesn’t.

Wednesday, April 02, 2008

Noble Savages

One of my favourite television shows is Time Team. It consists of a bunch of hirsute archaeologists with three days to dig a site somewhere in the UK. Usually, it is set in a picturesque field in the rain. Despite the three-day time limit and the presence of Blackadder’s sidekick Baldrick as the presenter, the archaeologists do try to do a serious job. The show has been a shot in the arm for university archaeology courses and incidentally taught me quite a lot about ancient pottery.

The academics on Time Team are entirely typical of English scholars. Quite learned, a bit politically correct and with a great fondness for beer. If you have watched the show for as long as I have, you get to know their quirks and foibles quite well. For instance, they never talk about religion, but only ritual. I have no idea why this is, but even a medieval monastery is described as a ritual site rather than a religious one.

Another view often articulated on the programme is that the great Bronze Age and Iron Age hill forts and many other smaller enclosures that dot the English countryside were just for show. The enormous ditches and palisades of stakes were, we are often assured, intended to show off the power of the local chieftain rather than provide a defensive position against enemies. For a long time I believed this. After all, the archaeologists on the show are experts. But I’ve now realised it is complete piffle. The evidence for the pacific purpose of the hill forts is that there is no sign that they were ever attacked, let alone reduced. Therefore, archaeologists have convinced themselves that everyone in prehistoric Britain was living peaceably with their neighbours.

A reader of this blog kindly put me on to the book that shatters this rural arcadia and dispels the idea that the huge defensive structures were built even though there was nothing to defend against. Lawrence Keeley’s War before Civilization (1996) explains why the hill forts were not attacked and why this is no evidence that the inhabitants had nothing to fear. Instead, he shows how prehistoric life was constantly prey to small-scale violence. Raiding parties conducted lightning attacks, killing and pillaging, but they could not take the hill forts, so they never even tried. When raiders arrived, the population took shelter in the forts and simply waited for the raiders to go home. There was no question of a war of conquest, and so the hill forts were left untouched. There is no evidence of their being attacked because they were an effective defence.

The small scale violence that blighted ancient lives leaves little trace in the archaeology. But we have plenty of evidence for it in more recent contexts, for instance in the New Guinea highlands or the Yucatan in Mexico (as explained by Jared Diamond in Collapse). Unless prehistoric Britons had some genetic trait that made them uniquely nice to each other, we have no reason to think things were any different. The green fields of England were soaked in blood.

Monday, March 31, 2008

Hows and Whys

It is a cliché to say that science answers “how?” questions and religion answers “why?” This particular cliché has been trotted out many times in the newspapers over the last few days as we have witnessed the debate over whether Labour MPs should be given a free vote on the Human Fertilisation and Embryology Bill now before Parliament. Thankfully, sense prevailed and the government will no longer try to force MPs to vote against their consciences.

Oliver Kamm attacks the how/why cliché in his blog but makes the common mistake of not realising that it represents a rather stronger argument than journalists either express or, in all likelihood, understand.

“How questions” are shorthand for things answerable in the objective domain of science. It is an extremely important strength of the scientific method that it is, in its ideal state, heroically objective. Needless to say, this ideal is rarely met and on a philosophical level, collides with the problem of under-determination (whereby it is impossible to say exactly which theory is being demonstrated by a particular set of facts).

“Why questions”, in the same shorthand, do not just stand for the meaning of life. Many atheists, Kamm probably among them, share Douglas Adams’s view on such matters. Adams’s famous answer of 42 to the meaning of life, the universe and everything means no more than that he thought it was a silly question. But not all “why” questions are so easily dodged. In particular, atheists are in philosophical difficulty with ethics.

It used to be supposed that science might provide an objective basis for ethics in much the same way as it does for kinematics. These hopes, despite the discovery of a natural ethical architecture, have been dashed by even the latest research. Secular ethics in the West is essentially Christianity with a bit of free love tacked on the side, and it has to be admitted that the free love is causing some trouble. Arguments for embryonic stem cells and animal/human hybrids, which have dominated the discussion of the limits of scientific research, are essentially utilitarian. But no right-thinking person believes utilitarianism is an acceptable basis for ethics. Indeed, nowadays it is rarely brought out except in this rather special case. Interestingly, research by Marc Hauser and others shows that human beings appear to be naturally opposed to utilitarian solutions even if they are divorced from religious concerns.

This leaves western atheists, whose ethics are Christian at one remove anyway, in rather an odd position when they start to whine about the Church having too much influence in a secular society. They have no alternative ethical system to the one proposed by the church, and their historical make-up is largely determined by previous religious decisions anyway. This is why Kamm is wrong to dismiss the how/why dichotomy, at least until he can present an alternative system of ethics that does not rely on plundering religious thought and then claiming atheists thought it up first.

Incidentally, the case for animal/human hybrids has always been a bit sparse. Some scientists have made out that such things will allow them to cure Parkinson’s and Alzheimer’s disease. Quite why these particular conditions are the ones mentioned is a bit of a mystery, of course. What said scientists really mean is that they would quite like to have a go at making hybrids and maybe some useful research will come out of it. Like my three-year-old daughter and her relationship with chocolate, they don’t seem to understand the difference between the words “would like to” and “need.” Given we were previously assured that embryonic stem cells were essential when it turns out they are nothing of the sort, we could have hoped that these people would not try to pull the same stunt twice.

Tuesday, March 18, 2008

The Problem with Attachment Theory

At its base, traditional attachment theory makes the following prediction – take identical children and separate them at birth. One is brought up in a loving family where it forms stable attachments to its adoptive or natural parents. The other is less fortunate. It is moved through a children’s home, to foster parents, back into care, and has no chance to enjoy a proper upbringing. Child one goes to a good school; child two goes to lots of bad schools. According to attachment theory, we should see marked differences between the children once they grow into adults. But we don’t.

Even if no one has been cruel enough to do the experiment mentioned above, there have been hundreds of studies of identical twins separated at birth. The researchers were trying to establish how much their personalities and behaviour were affected by nature and how much by nurture. The answer was that identical twins showed a correlation with each other of about 50%. But this figure was almost the same whether they were brought up together by their natural parents, brought up together by adoptive parents or separated and brought up apart. Nurture seemed to play no part. Likewise, adopted children did not correlate with their adoptive parents any more than a stranger off the street would. There were some nurture effects while the children were still growing up, but once they were adults, these disappeared.
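The logic of the twin comparison can be sketched with a toy simulation. This is my own illustration, not the method of the actual studies, and the variance figures are made-up assumptions: each twin's trait is modelled as shared genes, plus a family environment shared only when the pair grows up together, plus individual noise. The point is that if family environment mattered as much as genes, twins reared together would correlate noticeably more strongly than twins reared apart; the finding that the two correlations come out roughly equal is what forces the "nurture plays no part" conclusion.

```python
import random

random.seed(1)

def simulate_twins(n_pairs, shared_env_sd):
    """Toy model of identical twins: trait = genetic value (identical within
    a pair) + shared-environment value (sd = 0 models rearing apart) +
    individual noise. All standard deviations are illustrative assumptions."""
    pairs = []
    for _ in range(n_pairs):
        g = random.gauss(0, 1)              # genes: shared by both twins
        c = random.gauss(0, shared_env_sd)  # family environment, if shared
        pairs.append((g + c + random.gauss(0, 1),
                      g + c + random.gauss(0, 1)))
    return pairs

def correlation(pairs):
    """Pearson correlation between the two twins across all pairs."""
    n = len(pairs)
    xs, ys = [a for a, _ in pairs], [b for _, b in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Reared apart: no shared family environment at all.
r_apart = correlation(simulate_twins(20000, shared_env_sd=0.0))

# Reared together, IF family environment contributed as much variance as genes.
r_together_if_nurture = correlation(simulate_twins(20000, shared_env_sd=1.0))

print(round(r_apart, 2))                # about 0.5: genes alone
print(round(r_together_if_nurture, 2))  # visibly higher, about 0.67
```

Under these assumptions the gap between the two correlations is large and easy to detect, so observing near-identical correlations in real data implies the shared-environment contribution is close to zero.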

The first person to try to make sense of these results (which were the opposite of what most researchers had expected), was Judith Rich Harris in her book The Nurture Assumption. She was not a psychologist, still less a geneticist and some felt her outsider-status gave them a license to ignore or insult her. Contrary to popular belief, Harris did believe in nurture, but from peers rather than parents. Since attachment theory is generic enough to handle peer-to-peer relationships, some attachment theorists responded to Harris’s criticisms of family-based nurturing effects by shifting their attention from the parlour to the school yard.

The trouble is that there is hardly any evidence for Harris’s peer pressure hypothesis and quite a lot against it. For a start, it is, at first sight, unlikely that parents could have practically no effect and other children such a lot. Work comparing children sent to nursery at six months and those kept at home has found some evidence that the former are more confident and aggressive, but this wears off before they grow up. The Chicago work on school places I referred to a few posts ago suggests it doesn’t matter which school you went to, although this was based on academic results rather than personality. More work is required and we are still hamstrung by having no reliable way to measure intelligence, but things are looking pretty grim for the peer-to-peer hypothesis. The family hypothesis is already dead, if not buried. If peer pressure goes the same way, as seems very likely, there will be nothing left for attachment theory to attach itself to as far as long term outcomes are concerned.

Of course, attachment theory still feels it has something to say about relationships. We are all happier in a stable unit than cast out on our own. Single people are sadder than married people, orphans are less happy than children with both parents. But I’m not sure that we need a special theory to tell us this or that attachment theory’s explanations rise much above the level of psychobabble.

Thursday, March 13, 2008

The Rise of Attachment Theory

On the rare occasions that I make it back from work before her bedtime, my two year old daughter leaps up from whatever she is doing to rush to the door, open it for me and throw herself into my arms. There is no doubt that young children form very strong emotional attachments to their parents, as their parents do to them. Deprived of these attachments, children become miserable, but as they grow up they become better able to cope with parental absence. I see mine every few weeks; my wife sees hers, who live in Australia, about once a year.

Observing the difference between a happy child in a stable family with strong bonds to its parents and one without such supports, it is easy to conclude that these early attachments have a lasting impact on the way that we develop. Shortly after the Second World War, John Bowlby, a psychologist, first published his theories about the importance of bringing up children in a loving environment where they can form solid relationships with their parents. He developed his work over the years and more recently it has been carried on by Mary Ainsworth, among others.

Attachment theory quickly became the core idea behind child development. It made logical sense and it could be empirically proven. Numerous studies showed that children brought up in loving homes where they could form stable attachments developed into well-adjusted adults. On the other hand, children from broken homes who had been neglected, or were brought up in foster care, had much less successful outcomes. The statistics did not lie and attachment theory was enthroned as a scientific success.

There were a couple of flies in the ointment. Autism was one. In the 1950s, an attachment theorist called Bruno Bettelheim suggested that autism was caused by cold or withdrawn mothers who did not allow their children to form emotional bonds with them. As a result, he claimed that the children withdrew into themselves and became autistic. A generation of mothers was condemned as the reason that their children were handicapped, just adding to their anguish. But eventually it was realised that if one of a mother’s children was autistic but the rest were not, there was little justification for blaming her. Attachment theory, of course, need not be disproved by a single failure, and a veil was drawn over the autism debacle.

By the 1980s, attachment theorists had to deal with another, more formidable fly – feminism. Feminists hated the idea that they were supposed to stay at home bringing up baby rather than getting on with their lives. Battle lines were drawn between breasts and bottles, and between stay-at-home mothers and career girls. Political conservatives discovered attachment theory was an excellent argument for traditional lifestyles. But after some hard fighting, this was a battle the feminists won and it is, in general, no longer acceptable to cast aspersions on a woman who places her baby in a nursery at six months so that she can go back to work. But there is no reason why women’s lifestyle choices should cast doubt on attachment theory as a scientific success. Today it remains the first thing that anyone studying child development covers; it is the foundation of the social services system in the UK; and it supports an entire industry of psychologists and counsellors. It has only one drawback – it is almost complete rubbish. Next time I’ll explain why.

Wednesday, March 12, 2008

Are Deaf People Disabled?

There is a Human Fertilisation and Embryology Bill going through Parliament at the moment which, among other things, has a clause forbidding couples undergoing IVF from ensuring that their children have the same disability that they do. Actually, the bill does a great deal more and is, frankly, a bit of a mess. Other than the deafness question, it hasn’t much made the news, although you can read a wildly biased account of what it contains here. The Catholic Church is trying to get MPs a free vote on the most contentious matters.

In a radio interview on the BBC, a deaf person who wanted a deaf child flatly denied to the incredulous presenter that deafness was a disability. The interviewer pointed out that not being able to listen to Beethoven is surely a disadvantage, but the deaf person, who had been born with no hearing, denied this too. How could he miss something he had never experienced? There is a distinction to be made between people born deaf and those, like Beethoven, who lost their hearing later in their lives. The latter is probably more of a disability than the former, partly because you know what you are missing and partly because you never quite learn to cope. Among deafened people, it is the loss of the ability to hear music that hurts the most. One person I know dreams music and can be quite upset that he cannot continue to listen when he awakes. As the memory of the Sanctus in Mozart’s Great Mass in C Minor fades, he knows he has lost something valuable (although perhaps it is better to have loved and lost than never to have loved at all).

So, for many of us, deliberately engineering that your child is deaf sounds horrendous. I should say that very few deaf people would want to do this, but there is a small militant minority that insists that deafness is simply a facet of who they are, like race or nationality. Deaf people use sign language, which is not just a kind of miming but a fully expressive language with the complete set of tenses, parts of speech and other verbal equipment that you find in spoken English. Nor is it just a signed version of the spoken language, but as different from it as, say, German is from French. With the language comes culture, and this is what the deaf militants want to share with their children. I think they are being selfish and putting their own interests ahead of those of their offspring.

However, just as it is wrong to reject an embryo on the grounds that it will grow into a child who can hear, it is also wrong to reject an embryo just because the resulting child is likely to be deaf. Some commentators have missed this point. Would a child thank its parents for choosing that she be deaf, they ask? Probably not, but the choice is a false one. It is not a question of the child being able to hear, but of her never existing in the first place, with another child with functioning ears in her place. Deafness sucks but it beats death any day.

Tuesday, March 11, 2008

Social Insecurity

Welfare is a huge issue in the US and the UK and I think genetics can help us understand it a bit better.

In general, welfare policies are informed by two different theories. The first is associated with the political right and had its genesis in economics. The idea is that if you get the incentives right, hearts and minds will follow. According to this theory, people make rational choices about what is best for them. If staying on welfare pays, then that is what they do. If having a multitude of babies out of wedlock means a cascade of welfare payments, then they get breeding. The right concern themselves with removing the poverty trap, whereby people find that there is no short-term way to improve their condition without also worsening their cash flow as benefits are withdrawn.

Their solution is to create a welfare environment where work and stable families pay. If you withdraw benefits from the lazy, eliminate the poverty trap and put a time limit on how long welfare is payable for, then you encourage people to get on their bikes and find work. You have to keep benefits sufficiently low so that no one in their right mind would be content to live on them. As I have said, these solutions are generally favoured by the political right. They assume that environment is the key and that you can encourage families to stay together through welfare policies. They also assume that stable families cause better lifetime outcomes for the offspring. Furthermore, they assume that people on welfare are quite capable of doing a job where they are paid more than the state is willing to provide.

Many on the left disagree. They say that people do not choose to be on welfare and that those who claim benefits have little choice in the matter. They are not making rational decisions and are not capable of just getting up and finding a job. Girls are not deliberately getting pregnant for material gain or to obtain social housing. The people whose welfare is withdrawn when they don’t find work are precisely the most vulnerable who need help from the rest of society. Clearly, the left reject the idea that people are moulded by their environment and believe that welfare payments should be made on the basis that the recipients cannot help themselves get out of the situation they are in.

So nature versus nurture matters. What’s more, the battle lines are not always drawn where we expect them. In welfare policy, it is the political right who are the nurturists and the left who are the nativists. Who is correct?

On the most basic level, I think the left have the best understanding of the issue. I take it as axiomatic that we must help the less fortunate and cannot leave people destitute. Furthermore, welfare cannot be set at such a low level that it leaves those on benefits without any of the comforts we take for granted. But if we are as generous with benefits as we should be, that inevitably makes them attractive to freeloaders who could be working. Over the years, the number of people who should not be on benefits but have decided it is an easy option has increased markedly. This means that right-wing policies do have an effect in reducing the number of social security recipients, but at enormous cost to the core of claimants who have no choice but to stay at the bottom of the pile. The only solution I can see is to try to balance welfare policy finely, minimising the number of scroungers while maximising the benefits available to the needy. We must also accept that some people will always need our help and that a zero welfare policy is not acceptable in a civilised society.

I would also suggest that a lack of appreciation for the genetic roots of human behaviour has meant that both left and right approach the problem too simplistically.

Click here to read the first chapter of God's Philosophers: How the Medieval World Laid the Foundations of Modern Science absolutely free.

Thursday, March 06, 2008

Getting some Flak over Nature and Nurture

Over the last few months, some correspondents have been unsympathetic towards my contention that nurture, upbringing, schooling and other environmental factors appear to have much less effect on how children develop than might be expected. I accept that this is a radical idea, but it is also one for which the evidence is quite conclusive. Perhaps, though, I have been remiss in presenting that evidence.

A few posts ago, I mentioned studies on the Chicago state school system where it was found that there was no direct correlation between an individual’s results and how ‘good’ the school they went to was. As explained in Freakonomics, that is not quite the full story. The system in Chicago is that you can either accept the local school or, if you are unhappy with it, enter the lottery. The academics found that children whose parents entered them into the lottery did better than those who just went to the school to which they were originally allocated – even after allowing for the fact that children entered for the lottery might end up at a better school than otherwise. They also found that middle class parents were much more likely to enter their children for the lottery than simply accept the allocation. It was a classic case of middle class children doing better wherever they ended up at school.

So was it the middle class upbringing that led to children performing better, or the middle class genes? The academics dug deeper and found that there was not a single factor in the children’s upbringing (whether they were read to at bedtime, whether there were plenty of books in the house, and so on) that correlated with exam results. It looked like it was genetic.

Here’s another example, taken from Steven Pinker’s The Blank Slate. It is uncontroversial that children brought up by single parents have worse school results, worse outcomes and are more likely to be single parents themselves. Traditionally, this has been assumed to be due to the lack of a father-figure, poverty or the stress involved in living in a broken family. In other words, it is a classic case of how nurture affects the way people turn out. But when you split the figures between families where the husband has absconded and those where he has died, you get different results. Children brought up by widows do not have worse outcomes than the average and are just as likely to stay married. Children brought up by divorced or never-married mothers are more likely to be divorced themselves, do less well at school and have lower lifetime outcomes.

How can you explain this? It is certainly not stress. Upsetting as it is for a father to leave home, it is nothing compared to a bereavement. It isn’t poverty either. Widows are not richer than divorcees. The only correlating factor appears to be the parents themselves, in which case the differences must be caused by genes.

This is not just an academic question. All our social and education policies are based on the idea that nurture matters and is something we can change. It is generally agreed that our policies are not working as they should. I would suggest that much of the reason for this is that they are based on an axiom that is untrue.
