Friday, 23 April 2010

Scientists, doncha love'em?

So many clever people

The Organising Committee of the ISPR 2010 writes –

Dear Andrew Sutton:

As you know, only 8% of members of the Scientific Research Society agreed that 'peer review works well as it is.' (Chubin and Hackett, 1990, p.192). 'A recent US Supreme Court decision and an analysis of the peer review system substantiate complaints about this fundamental aspect of scientific research.' (Horrobin, 2001). Horrobin concludes that peer review 'is a non-validated charade whose processes generate results little better than does chance.' (Horrobin, 2001) This has been statistically proven and reported by an increasing number of journal editors. But, 'Peer Review is one of the sacred pillars of the scientific edifice' (Goodstein, 2000), it is a necessary condition in quality assurance for Scientific/Engineering publications, and 'Peer Review is central to the organization of modern science… why not apply scientific [and engineering] methods to the peer review process?' (Horrobin, 2001).

This is the purpose of the 2nd International Symposium on Peer Reviewing: ISPR 2010, being organized in the context of the SUMMER 4th International Conference on Knowledge Generation, Communication and Management: KGCM 2010, which will be held on June 29th - July 2nd, in Orlando, Florida, USA.


Chubin, D. R. and Hackett, E. J. (1990) Peerless Science: Peer Review and U.S. Science Policy. New York: State University of New York Press.
Horrobin, D. (2001) Something rotten at the core of science? Trends in Pharmacological Sciences, vol. 22, no. 2, February.
Goodstein, D. (2000) How science works. US Federal Judiciary Reference Manual on Evidence, pp. 66-72 (referenced in Horrobin, 2001).

I just wanted to share this... I do not have anything to submit and could hardly get to Florida to deliver it if I had. Never mind, it is nice of them to keep in touch with me.

Barmy boffins

And by the way, in case you are wondering, this is a ‘proper’ academic conference so, were I indeed submitting something, I should have to bear the following in mind (from a footnote to the above letter) –

All submitted papers/abstracts will go through three reviewing processes: (1) double-blind (at least three reviewers), (2) non-blind, and (3) participative peer reviews. These three kinds of review will support the selection process of those papers/abstracts that will be accepted for their presentation at the conference, as well as those to be selected for their publication in JSCI Journal.

So many clever people...

More on this

1 comment:

  1. I came across this before and it made me think a lot. The advancement of science is desirable; it would be great to have foolproof methods to be able to decide what works and what doesn't. Relying on beliefs and assumptions would put us back to the times when we were convinced that the earth is flat, and we'd still be at the miasma theory of disease. The problem is that the current procedure of scientific publication is not foolproof. At all. Those good guys come up with everything trying to make it more reliable, and peer review is one of these 'approved' methods; pity that the majority of scientists themselves don't believe it works, either.

     And what are the foolproof ways to separate the 'barmy' scientists from the ones who are turning the wheels? Don't ask the professors themselves: "Ninety-four percent of university professors think they are better at their jobs than their colleagues" (T. Gilovich: How We Know What Isn't So). Self-deception leads us to think we're smarter than the majority of people and less susceptible to judgmental biases. The incompetent person is not likely to recognize this deficit of theirs—they lack the cognitive skills to be able to do that in the first place.

     So, does peer review help to weed out the 'findings' of incompetent academics who think they're smart? It doesn't seem so; how do we know that the 'peers' are more competent and more able to point out the errors? You, who read their work in the scientific journals, may be 100 times more competent and instantly see their errors, but how do you prove that you're right? They already have their comfort labels: controlled, double-blind, published, peer-reviewed, and they don't have the skills to see their own cognitive obstacles. Hence we have the plethora of 'scientific studies' in which the academics haven't even managed to figure out the basic nature of the subject of their study.