Archive for August 2010

Results null but not void

August 31, 2010

There’s been a bit of discussion recently in various forums, perhaps most visibly on Ben Goldacre’s Bad Science blog, about the usefulness or otherwise of null results. I thought I would add my twopennorth to the discussion.

So first of all, what is a null result? Mostly when people use this term they are referring to a result that does not tell them anything they wanted to know; perhaps it is even a negative result. I would argue that these two are quite distinct, and that a null result is simply one that does not further the research one way or the other.

In medicine and pharmaceutical trials the null or negative result is clearly of great importance and you can see why companies producing the drugs would be motivated to hide or cover up such results – why would you want to tell the world if the very expensive drug you are hoping to sell to cure cancer is less effective than current treatments? That’s basically saying “don’t buy this drug” and if you are a business that’s not a good advert for your products.

This argument can be extended to anyone with a commercial/financial interest in the outcome of the research (I’ve touched on this in my review of Fiona Godlee’s Sense about Science talk). A way round it might be to make the testers independent of the developers. This is a system that might work in really big industries like pharma, but it would not really be feasible for smaller or niche industries.

I’m thinking of materials development, as a subject close to my heart. I think it is fair to say that if you are an aerospace company buying materials to make your plane with, you are not going to take very seriously the claims of the makers of any materials you buy in, and will want to do your own tests. A null result means you won’t buy the product, end of story. If you are the company that makes the materials, and your own tests show the material to be worse than previous ones, you will put that project to one side and start again. Time wasted, yes, but probably things learned along the way. Most companies doing research build this into their research budget. Even pharma companies recognise that only about one in several thousand lead compounds will ever make it to market.

But a world where null and negative results are still the subject of great debate is academia. As an academic researcher, you get a few grants per year and are expected to produce several publications in reputable journals so that you can continue to receive such grants in the future. The publications show that the grant award was worthwhile. What is wrong with this?

Well, for a start, if your idea doesn’t work as well as you thought it would, you have a problem. You might get a null or even negative result. What do you do then? All that work and you can’t get a decent publication out of it, which means you can’t get another grant later on. At the very least, you would be encouraged to gloss over the results, carry on to the next thing, change tack. This happened to me during my own PhD. If you were less honest, you might become embroiled in some kind of data fabrication saga, claiming things that were untrue. How else can you salvage anything from those years of work?

The key point here is the phrase “decent publication in a reputable journal”, which is what you need to sustain and win further grants. Most researchers and research agencies judge this by the impact factor of the journal, and/or citations to the paper itself. Within this framework there is no room for null or negative results. Why would you spend the time and effort to write up an experiment that did not work out how you hoped, and does not really further understanding? Apart from, of course, to prevent others following the same doomed course of action. This may earn you some brownie points with peers, but no money. So most researchers don’t bother to write up null results. Even if they do bother, publishing them is not straightforward.

In the main journal I used to be Editor of, we used to look for novelty or advance in chemistry or properties of materials. These criteria were used to rule out an awful lot of “low impact” work that was basically “just another” synthesis of a nanomaterial or the like. We knew that these papers would not receive large numbers of citations and so we did not want to publish them in our journal. Most editors would be the same. Null results were obviously not wanted. Negative results would occasionally make it in if they were unexpected or in disagreement with previous work – then there is more chance that people will cite these works.

So is it the journals and their editors who are to blame? Well, you could be forgiven for thinking that. But look deeper. Why do the journals want to publish high impact papers, and exclude those that they believe will not be? Simple really. If my journal does not have a good impact factor and a reasonable amount of content, who will buy it? University and industrial library budgets are being cut all the time and librarians are actively looking for “unnecessary” journals to drop. How do you decide what is unnecessary? Well, normally this is considered to be low impact work, small niche journals, articles that will not be read or cited by their peers. Hmm.

So librarians are to blame? Not really, they are only applying the standards laid out by the academics themselves. The standards imposed by the funding agencies and their ongoing need to be able to measure research outcomes in numerical terms. A null result has a null numerical measure. Null benefit to the world, though? I think not.


The ins and outs of peer review

August 23, 2010

Throughout the whole saga of the UEA email debacle I noticed how little understanding there seemed to be in the media or among the general public of how peer review works and what it can achieve. In that particular case a good understanding of peer review could have led most people to realise that there was at least a good possibility that nothing dodgy was going on at all at UEA, as I have written before. But even within science there is debate about peer review, how it works, whether it works, if it should be changed. So I thought that, as an ex-editor of a peer-reviewed journal, it was about time I put my own personal views forth, as well as hoping to educate those less familiar with the topic.

Firstly I had better define what is meant by a peer in this context. Academics are usually the ones subjected to peer review, so peers are other academics or occasionally industrialists researching related or similar subjects. This fact in itself brings up one of the criticisms of the peer review system – namely, these people are often in competition, so how fair is it to allow them to judge the quality of their competitors’ work?

Peer review is used not only in journal publications but also to decide upon the award of financial grants e.g. by the government funding agencies such as the EPSRC. I’m going to use examples from journal publishing but you can see how the principles apply equally to both.

In brief, here’s how it (should) work: Researcher does research, writes it up accurately and submits it to a journal for peer review. Journal editor sends paper to two or more peers, who provide their opinions (reports) on how good/relevant/useful the research is and on how it can be improved. Journal editor then makes a decision based on these opinions about the suitability of the work for their particular journal, and passes the comments of the peers (referees) along with his/her decision to accept/reject/ask for revisions on to the author.

Depending on the journal and the area of science you are working in, the qualities being looked for will change, as will the ability to appeal any decision made, but broadly speaking any editor will look for the work to be a) original, i.e. not done before, and b) ethically sound (not stolen, plagiarised, harming animals unnecessarily, etc.), as well as c) in keeping with the expectations of the readers of the journal in terms of what kind of research they expect to read there. Of course some would claim that there is also d) will this enhance my journal citation record? to consider, though I would argue that this falls into c), as readers will cite work that they find appealing and relevant to them.

All things being fair and equal, the peer reviewers should provide an unbiased report that explains what is new, why it is important, how this leads on from previous work, etc. The journal editor, not being an expert in everything, is more or less dependent on the reviewer comments. Some editors are researchers in their own right, which means they have expertise in certain fields but can never have it in all fields. Editors develop a feeling for how much they can trust certain individuals and how much emphasis to put on any single report. Editors have the final say, so they must also be incorruptible. This may sound like a hard quality to find, but the point is that an editor who is partisan, or seen to be partisan, will lose the respect of their community and no-one will want to publish in their journal. Being seen to be partisan will therefore destroy the journal as well as the editor’s reputation. Therefore it is on the impartiality of the editors that the fairness or otherwise of the peer review system relies.

It also relies on the reviewers, first and foremost, to give fair and impartial advice. To this end reviewers are often promised anonymity, so that their names are not revealed to the paper authors. This should mean that they can say exactly what they think without fear of recriminations. I have spoken with many, many authors who believe that they are able to identify their reviewers for a particular paper. Nine times out of ten they are incorrect. And I have lost count of the number of times I have been asked “Why didn’t you use the referees I suggested? They know about this area and the referees you picked clearly don’t” (always for a paper I have rejected on referee advice). At least half of the time, I HAD used the suggested referees. This shows you that anonymity does, at least to an extent, offer the protection and fairness it promises.

Misconduct isn’t always detected by this method – as evidenced by high-profile, systematic cases such as Marc Hauser, Woo Suk Hwang, Jan-Hendrik Schoen, etc. But it sometimes is – and those are the many cases that the public, and even most scientists, never see.

However, there are flaws to the system – no-one thinks it is perfect. Peer review is simply (for most people) the lesser of the known evils, and no-one has really come up with a satisfactory method that is better. And so it persists. What I have described above is so-called single-blind peer review (single because only one side of the equation is anonymous – the reviewers know who the author is, and this could affect their judgement), and there are variants on it. Firstly there is double-blind peer review. I don’t actually know of anyone who uses this as a large-scale, functioning system. I am told that it is virtually impossible to achieve, as most authors will give themselves away, if not by their writing style then by their chosen references (most people will tend to self-reference). Even if this could be avoided, because research is often presented at conferences before it is published, identification of the authors would still be possible.

Another system, that of open peer review, was trialled by Nature (their findings can be read here). Here peers are asked to give their opinions without anonymity. While this is a system used by several medical journals, including the renowned BMJ, Nature’s trial did not meet with an overwhelming welcome and the process was abandoned. It seems that scientific researchers were more comfortable with the devil they knew, as Nature found that, in many disciplines, papers were not even submitted to the trial.

So why do people dislike the system so much? In my opinion, a large part of their unhappiness arises simply because an anonymous reviewer has “rejected” their work. The only system that could improve on this would be to publish everything in the place to which it is originally submitted, and hopefully most of you can see the problem with that idea.

But there are other, more valid reasons to dislike peer review as it stands. Decisions are usually based on the opinions of two or three selected people. If a subject is controversial (as in climate change!) then the outcome of the review process will depend to a large extent on the choice of the referees, and thus on the editor’s personal opinion about the work (remembering that editors are supposed to be impartial and may not be experts) and their familiarity with the field. If I get a controversial-sounding paper, I can easily facilitate its publication by selecting reviewers who I know to be in agreement with the views expressed. Or I can prevent it being published by selecting reviewers who I know will disagree with it. Or, if I am not familiar with the field, I might do either of these simply by accident, and not realise what I am responsible for. None of these options is really ideal.

For me, the fairest solution here would be to send the paper to both pro and counter referees and ask them to assess it as fairly as they can, then base my decision on their advice, the way they express it, what I know of those people, and any necessary further consultations with non-experts who have an interest in the case, e.g. a board member for the journal. If the decision is still not clear, an argument (and this is one that I have personally made) would be to publish the paper and let the community decide what to make of it. This might mean a few papers are published that later prove to be wrong, but until that can be clearly decided, the benefit of the doubt might be the fairest option. Such a course of action assumes that the editor is familiar with the community involved, does not have any strong beliefs about the matter him/herself (or is able to suppress them), and is primarily concerned with fair treatment rather than with citations. (Publication of a controversial paper might result in increased citations because later authors cite it to agree/disagree with it, but then again it might not.) All of these are qualities that are probably quite rare in editors these days, but in my opinion they are the most important ones to look for when choosing your journal.

I also had a longstanding discussion with one particular author who claimed that he would not publish with a certain publisher because they did not treat his work fairly. Instead he preferred another publisher who apparently did treat him fairly. Now think about this carefully; it goes back to my initial comment about why people dislike peer review. I happened to have knowledge of both publishers and the treatment that author received at both of them, and I can state categorically that the publisher that actually treated him more fairly was in fact the one he would not publish with, because they dared to reject his work when advice said that it was not up to scratch; in other words, they treated him just like everyone else and took each paper on its relative merits. The other publisher presumably took the view that keeping him happy was more important than publishing a couple of dud papers, as he would produce many more that were not duds and would prefer to publish there in future. They therefore gave him preferential treatment that a young researcher starting out would not receive, simply because of who he was.

Is this fair? No, but it is what happens in life, so perhaps it is not surprising that publishing should be no different; we are all judged by our past mistakes and triumphs. I would expect an employer to prefer to employ a candidate with a lot of positive experience on their CV rather than a complete beginner or someone with only a history of failures. And yet peer review, in its finest and fairest form, asks us to take each paper on its merits and without considering the history or future of the author. Should we be using a system that does not reflect real life to make decisions that will affect lives?

Ultimately peer review revolves around the editor and/or publisher of the work and it is these people/places that should be under scrutiny if the system is to be improved, as well as the authors and reviewers.

I expect that anyone who has managed to get to the end of this little lot will have something to say about it. I hope you will comment, and I welcome the discussion.

Wet weather wear and nanosprings going for gold

August 17, 2010

Just an update on a couple of stories I wrote recently for Chemistry World. Here are links to a piece about a nice JACS paper on gold nanowires that are wound into ordered coils by means of encapsulation in a polymer micelle, and a lovely J. Mater. Chem. (excellent journal!) paper about directional water transport within a porous fabric. These were two of my favourite papers that I have seen in a while, science-wise (I was saddened that the quality of copy-editing in the latter paper was somewhat sub-standard, as I didn’t feel it did either the author or the journal justice).

Chem World – Methane all lined up

August 5, 2010

Here is another piece I wrote for Chemistry World on the vibrations of methane and how these can affect reactivity towards a surface. The original work appeared in Science.

Open all areas – thoughts on OA publishing

August 4, 2010

I’ve had quite a few conversations about open access publishing recently, which is odd because I would have thought that I would have this conversation less now that I am not an editor. Funny how things work out.

Perhaps the reason why this topic has come up more recently is because I was working in chemistry publishing, which apparently has the lowest take-up rate of open access (OA) of any discipline (sorry, not sure where I read this so can’t provide any verification of it – anyone who can is welcome to do so – my experience is certainly that it is less popular in chemistry than in other related sciences). Now that I’m trying to take a broader overview of science in general, it becomes more of an issue.

So what is OA publishing? Basically it means that an article, when published, is free for everyone and anyone who has internet access to read and maybe download. There are various models for doing this. One is that advertisers pay to appear on the site, and no-one else has to pay to publish anything on there. OA sites can be peer-reviewed or non-peer-reviewed; for ease of discussion I am going to stick to peer-reviewed sites, as anything else is effectively not much more credible than, frankly, a blog (yes, that includes mine), i.e. it is unsubstantiated, unchecked opinion. You can discuss this if you like, but for me that’s another blog post, another time. Maybe I’ll get around to it soon, if anyone is interested.

An extension of this type of model is the one whereby a government funds the publisher directly so that no-one using it has to pay. PLoS is an example of this model. Note that a source of money is still needed or the publisher would not be able to operate the site.

Another model of OA is that those who wish to publish pay for the paper to be made accessible to everyone (as opposed to just subscribers) when the article is accepted for publication. Many reputable publishers, including those in chemistry such as the RSC and ACS, use this model alongside the subscription-only model in a so-called hybrid system; that is, authors are offered the choice, at acceptance of their article, to pay to make it open access. If they don’t choose to pay then the article is accessible to subscribers of that journal only. The cost of funding this would then come out of an author’s research grant, rather than out of library funds, which is usually where traditional subscription payments come from.

Depending on your exact definition of OA, you could also include things like pre-print servers within open access; these are repositories for articles, and discussions about them, before the article is published, but the article often remains on the pre-print server in some form even after publication. A popular pre-print server I have come across a lot in my work is arXiv, which is run by Cornell University Library for specific communities such as physics, maths, etc. I’m not sure how this is funded, but presumably it doesn’t require as much funding, since it is so author-driven and no value (formatting, peer review, enhanced HTML content, etc.) is added to the content before it appears.

So far so good then, hopefully you understand how the different models can work. And that if value is to be added, a source of finance is required. Even if no value is added, most researchers want to have their work, once published, held on a secure server that they know will be there “forever”. Even those who use pre-print servers generally still strive to get their work published eventually. And then there is the question of the mark of quality provided by peer review and publication in a respected journal.

There is a movement out there that says that as research is funded by taxpayers, then taxpayers, aka the general public, should have access to it. Again I think this is part of a larger discussion about who should have access to data – everyone who wants it, or just those qualified to understand it and use it correctly? But I can certainly see the argument for greater availability of information. Strictly speaking, of course, this means that only research funded by the UK government would be available to UK residents. They would still have to pay to access US-funded research. Not quite what this group are after, probably, as the majority of the world’s research is carried out outside the UK, though arguably the UK may lead the field in some areas.

Then there are some companies and organisations that are quite keen on OA. The Wellcome Trust is a notable example of this. They actually fund research as well, and they do make extra funds available to researchers to allow them to publish their work on an OA basis. But what about those more commercially oriented companies? They may or may not fund some academic research. They certainly would not allow any of their company secrets to be published in the open literature, pay-to-access or not. But it would be of great benefit for them if other people’s research in fields linked to their own were made freely accessible – they would save a small fortune in journal subscriptions, and would not have to pay any publication fees as they do not publish their own research anyway. Perhaps you can see why they would be keen on OA?

My opinion is that if funders want work to be made publicly available then they need to provide the money to allow this to happen, directly to the researchers who can tie the money to the work in question. Otherwise they can’t really object when charges are made for access to the information.

Partly this can be achieved through government research funders allocating a publication budget to sit alongside the research one. This should not be given directly to any single publisher (this would create a monopoly whereby people funded by that government could only publish with a certain publisher or small group of publishers) but should be made available to the researchers or universities themselves.

Another approach to combat the exploitation of OA information by big corporations would be for a government to impose some sort of research tax on any company within its jurisdiction admitting to performing research and development (R&D). This money could then be used to pay the publishers and authors whose work the companies are using. This would not be a popular choice, and I doubt it is one that most governments would hurry to introduce. But where else is the money going to come from?