Monday, March 31, 2008
It is a cliché to say that science answers “how?” questions and religion answers “why?” This particular cliché has been trotted out many times in the newspapers over the last few days as we have witnessed the debate over whether Labour MPs should be given a free vote on the Human Fertilisation and Embryology Bill now before Parliament. Thankfully, sense prevailed and the government will no longer try to force MPs to vote against their consciences.
Oliver Kamm attacks the how/why cliché in his blog but makes the common mistake of not realising that it represents a rather stronger argument than journalists either express or, in all likelihood, understand.
“How questions” are shorthand for things answerable in the objective domain of science. It is an extremely important strength of the scientific method that it is, in its ideal state, heroically objective. Needless to say, this ideal is rarely met and on a philosophical level, collides with the problem of under-determination (whereby it is impossible to say exactly which theory is being demonstrated by a particular set of facts).
“Why questions”, in the same shorthand, do not just stand for the meaning of life. Many atheists, Kamm probably among them, share Douglas Adams’s view on such matters. Adams’s famous answer of 42 to the question of life, the universe and everything means no more than that he thought it was a silly question. But not all “why” questions are so easily dodged. In particular, atheists are in philosophical difficulty with ethics.
It used to be supposed that science might provide an objective basis for ethics in much the same way as it does for kinematics. Despite the discovery of a natural ethical architecture, even the latest research has dashed these hopes. Secular ethics in the West is essentially Christianity with a bit of free love tacked on the side, and it has to be admitted that the free love is causing some trouble. Arguments for embryonic stem cells and animal/human hybrids, which have dominated the discussion of the limits of scientific research, are essentially utilitarian. But no right-thinking person believes utilitarianism is an acceptable basis for ethics. Indeed, nowadays it is rarely brought out except in this rather special case. Interestingly, research by Marc Hauser and others shows that human beings appear to be naturally opposed to utilitarian solutions even when they are divorced from religious concerns.
This leaves the western atheist, whose ethics are Christian at one remove anyway, in rather an odd position when they start to whine about the Church having too much influence in a secular society. They have no alternative ethical system to the one proposed by the Church, and their moral make-up is in any case largely the product of earlier religious decisions. This is why Kamm is wrong to dismiss the how/why dichotomy, at least until he can present an alternative system of ethics that does not rely on plundering religious thought and then claiming that atheists thought it up first.
Incidentally, the case for animal/human hybrids has always been a bit sparse. Some scientists have made out that such things will allow them to cure Parkinson’s and Alzheimer’s disease. Quite why these particular conditions are the ones mentioned is a bit of a mystery, of course. What said scientists really mean is that they would quite like to have a go at making hybrids and maybe some useful research will come out of it. Like my three-year-old daughter and her relationship with chocolate, they don’t seem to understand the difference between the words “would like to” and “need.” Given that we were previously assured embryonic stem cells were essential, when it turns out they are nothing of the sort, we might have hoped that these people would not try to pull the same stunt twice.
Click here to read the first chapter of God's Philosophers: How the Medieval World Laid the Foundations of Modern Science absolutely free.
Tuesday, March 18, 2008
The Problem with Attachment Theory
At its base, traditional attachment theory makes the following prediction: take two genetically identical children and separate them at birth. One is brought up in a loving family where it forms stable attachments to its adoptive or natural parents. The other is less fortunate. It is moved from a children’s home to foster parents and back into care, and has no chance to enjoy a proper upbringing. Child one goes to a good school; child two goes to lots of bad schools. According to attachment theory we should see marked differences between the children once they grow into adults. But we don’t.
Even if no one has been cruel enough to do the experiment mentioned above, there have been hundreds of studies of identical twins separated at birth. The researchers were trying to establish how much of their personalities and behaviour was due to nature and how much to nurture. The answer was that identical twins showed a correlation with each other of about 50%. But this figure was almost the same whether they were brought up together by their natural parents, brought up together by adoptive parents, or separated and brought up apart. Nurture seemed to play no part. Likewise, adopted children did not correlate with their adoptive parents any more than a stranger off the street would. There were some nurture effects while the children were still growing up, but once they were adults these disappeared.
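To see why those figures tell against family nurture, it helps to spell out the arithmetic behind them. What follows is a minimal sketch in Python of the standard twin-study decomposition, not the original researchers' analysis; only the roughly 50% correlation comes from the studies quoted above, and the second figure is invented to illustrate "almost the same".

# A minimal sketch (not from the original studies) of the standard twin-study
# arithmetic. In the classical ACE decomposition, the correlation between
# identical twins reared together is roughly a2 + c2 (genes plus shared
# environment), while for identical twins reared apart it is roughly a2 alone.
r_mz_together = 0.50  # figure quoted in the post
r_mz_apart = 0.48     # hypothetical: "almost the same" whether reared together or apart

a2 = r_mz_apart                  # estimate of heritability (genes)
c2 = r_mz_together - r_mz_apart  # estimate of shared environment (family nurture)
e2 = 1.0 - a2 - c2               # unshared environment, measurement error, luck

print(f"genes ~ {a2:.2f}, shared environment ~ {c2:.2f}, everything else ~ {e2:.2f}")
# When the two correlations are nearly equal, the shared-environment term comes
# out near zero, which is the sense in which "nurture seemed to play no part".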
The first person to try to make sense of these results (which were the opposite of what most researchers had expected) was Judith Rich Harris in her book The Nurture Assumption. She was not a psychologist, still less a geneticist, and some felt her outsider status gave them a licence to ignore or insult her. Contrary to popular belief, Harris did believe in nurture, but from peers rather than parents. Since attachment theory is generic enough to handle peer-to-peer relationships, some attachment theorists responded to Harris’s criticisms of family-based nurturing effects by shifting their attention from the parlour to the schoolyard.
The trouble is that there is hardly any evidence for Harris’s peer pressure hypothesis and quite a lot against it. For a start, it is, at first sight, unlikely that parents could have practically no effect and other children such a lot. Work comparing children sent to nursery at six months and those kept at home has found some evidence that the former are more confident and aggressive, but this wears off before they grow up. The Chicago work on school places I referred to a few posts ago suggests it doesn’t matter which school you went to, although this was based on academic results rather than personality. More work is required and we are still hamstrung by having no reliable way to measure intelligence, but things are looking pretty grim for the peer-to-peer hypothesis. The family hypothesis is already dead, if not buried. If peer pressure goes the same way, as seems very likely, there will be nothing left for attachment theory to attach itself to as far as long-term outcomes are concerned.
Of course, attachment theory still feels it has something to say about relationships. We are all happier in a stable unit than cast out on our own. Single people are sadder than married people, orphans are less happy than children with both parents. But I’m not sure that we need a special theory to tell us this or that attachment theory’s explanations rise much above the level of psychobabble.
Click here to read the first chapter of God's Philosophers: How the Medieval World Laid the Foundations of Modern Science absolutely free.
Thursday, March 13, 2008
The Rise of Attachment Theory
On the rare occasions that I make it back from work before her bedtime, my two-year-old daughter leaps up from whatever she is doing to rush to the door, open it for me and throw herself into my arms. There is no doubt that young children form very strong emotional attachments to their parents, as their parents do to them. Deprived of these attachments, children become miserable, but as they grow up they become better able to cope with parental absence. I see my own parents every few weeks; my wife sees hers, who live in Australia, about once a year.
Observing the difference between a happy child in a stable family with strong bonds to its parents and one without such supports, it is easy to conclude that these early attachments have a lasting impact on the way that we develop. Shortly after the Second World War, John Bowlby, a psychologist, first published his theories about the importance of bringing up children in a loving environment where they can form solid relationships with their parents. He developed his work over the years, and more recently it has been carried on by Mary Ainsworth, among others.
Attachment theory quickly became the core idea behind child development. It made logical sense and it could be empirically proven. Numerous studies showed that children brought up in loving homes where they could form stable attachments developed into well-adjusted adults. On the other hand, children from broken homes who had been neglected, or were brought up in foster care, had much less successful outcomes. The statistics did not lie and attachment theory was enthroned as a scientific success.
There were a couple of flies in the ointment. Autism was one. In the 1950s, an attachment theorist called Bruno Bettelheim suggested that autism was caused by cold or withdrawn mothers who did not allow their children to form emotional bonds with them. As a result, he claimed that the children withdrew into themselves and became autistic. A generation of mothers was condemned as the reason that their children were handicapped, just adding to their anguish. But eventually it was realised that if one of a mother’s children was autistic but the rest were not, there was little justification for blaming her. Attachment theory, of course, need not be disproved by a single failure, and a veil was drawn over the autism debacle.
By the 1980s, attachment theorists had to deal with another more formidable fly – feminism. Feminists hated the idea that they were supposed to stay at home bringing up baby rather than getting on with their lives. Battle lines were drawn between breasts and bottles, and between stay-at-home mothers and career girls. Political conservatives discovered attachment theory was an excellent argument for traditional lifestyles. But after some hard fighting, this was a battle the feminists won and it is, in general, no longer acceptable to cast aspersions on a woman who places her baby in a nursery at six months so that she can go back to work. But there is no reason why women’s lifestyle choices should cast doubt on attachment theory as a scientific success. Today it remains the first thing that anyone studying child development covers; it is the foundation of the social services system in the UK; and it supports an entire industry of psychologists and counsellors. It has only one drawback – it is almost complete rubbish. Next time I’ll explain why.
Click here to read the first chapter of God's Philosophers: How the Medieval World Laid the Foundations of Modern Science absolutely free.
Wednesday, March 12, 2008
Are Deaf People Disabled?
There is a Human Fertilisation and Embryology Bill going through Parliament at the moment which, among other things, contains a clause forbidding couples undergoing IVF from ensuring that their children have the same disability that they do. Actually, the bill does a great deal more and is, frankly, a bit of a mess. Other than the deafness question, it hasn’t made much news, although you can read a wildly biased account of what it contains here. The Catholic Church is trying to get MPs a free vote on the most contentious matters.
In a radio interview on the BBC, a deaf person who wanted a deaf child flatly denied to the incredulous presenter that deafness was a disability. The interviewer pointed out that not being able to listen to Beethoven is surely a disadvantage, but the deaf person, who had been born with no hearing, denied this too. How could he miss something he had never experienced? There is a distinction to be made between people born deaf and those, like Beethoven, who lost their hearing later in their lives. The latter is probably more of a disability than the former, partly because you know what you are missing and partly because you never quite learn to cope. Among deafened people, it is the loss of the ability to hear music that hurts the most. One person I know dreams music and can be quite upset that he cannot continue to listen when he awakes. As the memory of the Sanctus in Mozart’s Great Mass in C Minor fades, he knows he has lost something valuable (although perhaps it is better to have loved and lost than never to have loved at all).
So, for many of us, deliberately engineering that your child is deaf sounds horrendous. I should say that very few deaf people would want to do this, but there is a small militant minority that insists deafness is simply a facet of who they are, like race or nationality. Deaf people use sign language, which is not just a kind of miming but a fully expressive language with the complete set of tenses, parts of speech and other verbal equipment that you find in spoken English. Nor is it just a signed version of spoken language; it is as different from English as, say, German is from French. With the language comes culture, and this is what the deaf militants want to share with their children. I think they are being selfish and putting their own interests ahead of those of their offspring.
However, just as it is wrong to reject an embryo on the grounds that it will grow into a child who can hear, it is also wrong to reject an embryo just because the resulting child is likely to be deaf. Some commentators have missed this point. Would a child thank its parents for choosing that she be deaf, they ask? Probably not, but the choice is a false one. It is not a question of the child being able to hear, but of her never existing in the first place, with another child with functioning ears in her place. Deafness sucks, but it beats death any day.
Click here to read the first chapter of God's Philosophers: How the Medieval World Laid the Foundations of Modern Science absolutely free.
Tuesday, March 11, 2008
Social Insecurity
Welfare is a huge issue in the US and the UK and I think genetics can help us understand it a bit better.
In general, welfare policies are informed by two different theories. The first is associated with the political right and had its genesis in economics. The idea is that if you get the incentives right, hearts and minds will follow. According to this theory, people make rational choices about what is best for them. If staying on welfare pays, then that is what they do. If having a multitude of babies out of wedlock means a cascade of welfare payments, then they get breeding. The right concern themselves with removing the poverty trap, whereby people find that there is no short-term way to improve their condition without also worsening their cash flow as benefits are withdrawn.
Their solution is to create a welfare environment where work and stable families pay. If you withdraw benefits from the lazy, eliminate the poverty trap and put a time limit on how long welfare is payable for, then you encourage people to get on their bikes and find work. You have to keep benefits sufficiently low so that no one in their right mind would be content to live on them. As I have said, these solutions are generally favoured by the political right. They assume that environment is the key and that you can encourage families to stay together through welfare policies. They also assume that stable families cause better lifetime outcomes for the offspring. Furthermore, they assume that people on welfare are quite capable of doing a job where they are paid more than the state is willing to provide.
Many on the left disagree. They say that people do not choose to be on welfare and that those who claim benefits have little choice in the matter. They are not making rational decisions and are not capable of just getting up and finding a job. Girls are not deliberately getting pregnant for material gain or to obtain social housing. The people whose welfare is withdrawn when they don’t find work are precisely the most vulnerable who need help from the rest of society. Clearly, the left reject the idea that people are moulded by their environment and believe that welfare payments should be made on the basis that the recipients cannot help themselves get out of the situation they are in.
So nature versus nurture matters. What’s more, the battle lines are not always drawn where we expect them. In welfare policy, it is the political right who are the nurturists and the left who are the nativists. Who is correct?
On the most basic level, I think the left have the best understanding of the issue. I take it as axiomatic that we must help the less fortunate and cannot leave people destitute. Furthermore, welfare cannot be set at such a low level that it leaves those on benefits without any of the comforts we take for granted. But if we are as generous with benefits as we should be, that inevitably makes them attractive to freeloaders who could be working. Over the years, the number of people who should not be on benefits but have decided it is an easy option has increased markedly. This means that right-wing policies do have the effect of reducing the number of social security recipients, but at enormous cost to the core of claimants who have no choice but to stay at the bottom of the pile. The only solution I can see is to balance welfare policy finely, minimising the number of scroungers while maximising the benefits available to the needy. We must also accept that some people will always need our help and that a zero welfare policy is not acceptable in a civilised society.
I would also suggest that a lack of appreciation for the genetic roots of human behaviour has meant both left and right are approaching the problem too simplistically.
Click here to read the first chapter of God's Philosophers: How the Medieval World Laid the Foundations of Modern Science absolutely free.
Thursday, March 06, 2008
Getting some Flak over Nature and Nurture
Over the last few months, some correspondents have been unsympathetic towards my contentions that nurture, upbringing, schooling and other environmental factors appear to have much less effect on how children develop than might be expected. I accept that this is a radical idea, but it is also one where the evidence is quite conclusive. But perhaps I have been remiss in presenting that evidence.
A few posts ago, I mentioned studies on the Chicago state school system where it was found that there was no direct correlation between an individual’s results and how ‘good’ the school they went to was. As explained in Freakonomics, that is not quite the full story. The system in Chicago is that you can either accept the local school or, if you are unhappy with it, you can enter the lottery. The academics found that children whose parents entered them into the lottery did better than those who just went to the school to which they were originally allocated – even after allowing for the fact that children entered for the lottery might end up at a better school than otherwise. They also found that middle class parents were much more likely to enter their children for the lottery than just accept the allocation. It was a classic case of middle class children doing better wherever they ended up at school.
So was it the middle-class upbringing that led to children performing better, or the middle-class genes? The academics dug deeper and found that there was not a single factor in the children’s upbringing (whether they were read to at bedtime, whether there were plenty of books in the house, and so on) that correlated with exam results. It looked like it was genetic.
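For readers who want to picture the kind of check being described, here is a hypothetical sketch in Python of correlating home-environment variables against exam results. The file name and column names are invented for illustration; this is not the Freakonomics dataset or the researchers' actual method.

# Hypothetical sketch of the kind of check described above: given a table of
# pupils with exam scores and some home-environment variables, see whether any
# of those variables predicts results. File and column names are invented.
import pandas as pd

df = pd.read_csv("pupils.csv")  # hypothetical data file
home_factors = ["read_to_at_bedtime", "books_in_house", "entered_lottery"]

for factor in home_factors:
    r = df[factor].corr(df["exam_score"])
    print(f"{factor}: correlation with exam score = {r:.2f}")

# The post's claim is that, once you account for who enters the lottery in the
# first place, the individual home factors show correlations close to zero.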
Here’s another example, taken from Steven Pinker’s The Blank Slate. It is uncontroversial that children brought up by single parents have worse school results, worse outcomes and are more likely to be single parents themselves. Traditionally, this has been assumed to be due to the lack of a father figure, poverty or the stress involved in living in a broken family. In other words, it is a classic case of how nurture affects the way people turn out. But when you split the figures between families where the husband has absconded and where he has died, you get different results. Children brought up by widows do not have worse outcomes than average and are just as likely to stay married. Children brought up by divorced or never-married mothers are more likely to be divorced themselves, do less well at school and have lower lifetime outcomes.
How can you explain this? It is certainly not stress. Upsetting as it is for a father to leave home, it is nothing compared to a bereavement. It isn’t poverty either. Widows are not richer than divorcees. The only correlating factor appears to be the parents themselves, in which case the differences must be caused by genes.
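The comparison above amounts to splitting the single-parent group by the reason the father is absent and comparing average outcomes. As a hypothetical sketch only, with an invented file and invented column names, it might look like this in Python:

# Hypothetical sketch of the split described above: group children of single
# mothers by the reason the father is absent and compare average adult outcomes.
import pandas as pd

df = pd.read_csv("single_parent_outcomes.csv")  # hypothetical data file

summary = (
    df.groupby("reason_father_absent")[["school_score", "divorced_as_adult"]]
      .mean()
)
print(summary)
# If widowhood and divorce produced similar averages, stress and poverty would
# be plausible causes; the claim is that they do not, which points to the
# parents themselves rather than the absence as such.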
This is not just an academic question. All our social and education policies are based on the idea that nurture matters and is something we can change. It is generally agreed that our policies are not working as they should. I would suggest that much of the reason for this is that they are based on an axiom that is untrue.
Click here to read the first chapter of God's Philosophers: How the Medieval World Laid the Foundations of Modern Science absolutely free.
Saturday, March 01, 2008
The Retiring Richard Dawkins
So, it’s official. Professor Richard Dawkins, the Charles Simonyi Professor for the Public Understanding of Science at Oxford University, is to retire in September. I would not counsel breathing a sigh of relief. He is now likely to have even more time for his forays into subjects of which he knows nothing. Actually, any academic duties he had do not seem to have unduly occupied him over the last decade, so I doubt whether we will see much difference in his output.
Oxford is already advertising for a successor so we can happily speculate about who the new occupant of the chair will be. Actually, this is a waste of time and I will reveal at the end of this post who will definitely be getting the job. First, the happy speculation.
Of course, I’d love it if they could tempt Steven Pinker over from Harvard. But they can’t. The trouble is that any big-name American will already be paid way over the paltry £50,000 or so that Oxford can offer. All those dreaming spires do count for something to our American friends, but I fear that Pinker would expect a salary beyond the means of the impoverished UK higher education sector.
Many of the other big names in popular science are now too old to move to a new chair. This rules out Paul Davies, he of the pompous popular physics books that don’t make any sense, while Peter (Poisonous) Atkins has just retired himself. Stephen Hawking is both too old and probably wouldn’t be interested anyway. Evolutionary biologist Steve Jones is 63 but would otherwise have been an admirable choice. Oxford may feel that they don’t want another biologist, but the field is a bit too narrow for them to be choosy. Of course, the greatest popularisers of science are not just old, but dead. Sagan, Feynman and Gould stand in a pantheon quite separate from today’s breed. Admittedly, Sagan was wrong about almost everything, but he made astronomy sexy and launched thousands of scientific careers.
Science writer Matt Ridley has a relevant PhD and some books to his name. He is also looking for a new job, as he has just been ousted as chairman of Northern Rock after leading that bank to ruin. It is unlikely that Oxford would want anyone so right-wing, though. Being a failure in business is unlikely to count against him. The trouble is that most science writers, like Matt Ridley before he took up banking, are journalists. The biggest-selling popular science writer is Bill Bryson who, we can only hope, is not academic enough for the post.
So, enough of who won’t be appointed. Who will get the job? At 58, she’s getting on a bit too, but you would never know it from measuring the length of her skirt. The new Professor will be Susan Greenfield, the neuroscientist, baroness and director of the Royal Institution in London. Oxford born and bred, she is telegenic, clever, has no embarrassing religious beliefs and has a very high public profile. Perfect.
Click here to read the first chapter of God's Philosophers: How the Medieval World Laid the Foundations of Modern Science absolutely free.