Archive for the ‘Publishing’ category

If you can’t beat ’em, join ’em?

November 8, 2010

Someone sent me a news story a few days back, which was interesting but in some ways really depressing. You can read it for yourselves. What was upsetting was the idea that so many scientists still see the media as something that must be “beaten”. Likewise, I am sure that the journalists out there get annoyed when scientists don’t give them the headlines they need.

I heard a very interesting conversation on the radio (the programme was on Radio 4) between Dorothy Bishop (an academic so disgusted with how science is reported by the media that she has started a prize for it) and a journalist (sorry, I cannot locate the journalist’s name, but she was head of some journalistic association or similar). The journalist seemed equally disgusted with the way that the scientists (academics) she had spoken to could not justify in one sentence why taxpayers’ money was being spent on their research and what taxpayers would get from it.

I spend my time writing about scientific breakthroughs and why they are important, usually in about 300–400 words. Even then I write at a level at which the general public would not have a hope of understanding some of what I say; it is aimed at people already literate in science. To justify a piece of research in one sentence would, I think, be pretty tricky. You could say what the research is ultimately leading towards, of course, but the listener/reader needs to understand that this may not happen any time soon, and not as a direct result of your particular piece of work today. You would need to understand not only the research inside out (which the researchers do) but also how to communicate it to people who fundamentally don’t care about science and don’t understand technical terms. Even with media training, which most academics don’t have much of (though this is on the increase thanks to places like the Science Media Centre), getting your message across would require some careful thought.

Some research has a direct appeal to the public – a good example out in the press at the moment is the “invisibility cloak” stuff. That appeals because it links up with childhood dreams, Harry Potter, etc., and you can see a very direct use for it. However, the current research will not enable you to hide underneath an invisibility cloak like Master Potter did. This story has been reported numerous times on the BBC (see Vicky Gill’s stories this year and last year, and another reporter’s four years ago when the theory was first put forward). Each time an advance has been made, but we are still a long way from holding the cloak in our hands. But that is how science works.

Journalists have a job to do: to get their story in on time (with very short deadlines) and to sell newspapers, an ever more difficult job in an internet age. Incremental advances do not headlines make. So they may sex the story up, ignoring the caveats and anyone who does not respond immediately to questioning. I’m not accusing anyone in particular here – in fact I think that this is simply the kind of behaviour the journalistic/editorial system encourages. Journalists are asked to provide entertainment; scientists are out to provide information. Information can be entertaining, but it is not always the fairy story with the happy ending we might want it to be.

So you can see why the attitude arises that the one must beat the other, and why it prevails even though scientists do know, by and large, that they should spend more time and effort communicating their research to the public and the media. Changing the way that journalists work would be tricky. It has been suggested that better marketing of science is the answer – personally I think this is part of the solution but not the whole of it. I don’t know what the whole answer is, though. Answers on a postcard (or post below). Meanwhile I shall keep on doing my bit in my own little corner of the world.

Results null but not void

August 31, 2010

There’s been a bit of discussion recently in various forums, perhaps most visibly on Ben Goldacre’s Bad Science blog, about the usefulness or otherwise of null results. I thought I would add my twopennorth to the discussion.

So first of all, what is a null result? Mostly when people use this term they are referring to a result that does not tell them anything they want to know – perhaps even a negative result. I would argue that these two are quite distinct, and that a null result is simply one that does not move the research forward one way or the other.

In medicine and pharmaceutical trials the null or negative result is clearly of great importance, and you can see why companies producing the drugs would be motivated to hide or cover up such results – why would you want to tell the world that the very expensive drug you are hoping to sell to cure cancer is less effective than current treatments? That is basically saying “don’t buy this drug”, and if you are a business that is not a good advert for your products.

This argument can be extended to anyone with a commercial or financial interest in the outcome of the research (I’ve touched on this in my review of Fiona Godlee’s Sense about Science talk). A way round it might be to make the testers independent of the developers. This is a system that might work in really big industries like pharma, but it would not really be feasible for smaller or niche industries.

I’m thinking of materials development, as a subject close to my heart. I think it is fair to say that if you are an aerospace company buying materials to make your plane with, you are not going to take very seriously the claims of the makers of any materials you buy in, and will want to do your own tests. A null result means you won’t buy the product, end of story. If you are the company that makes the materials, and your own tests show the material to be worse than previous ones, you will put that project to one side and start again. Time wasted, yes, but probably things learned along the way. Most companies doing research build this into their research budget. Even pharma companies recognise that only about one in several thousand lead compounds will ever make it to market.

But the arena where null and negative results are still the subject of great debate is academia. As an academic researcher, you get a few grants per year and are expected to produce several publications in reputable journals so that you can continue to receive such grants in the future. The publications show that the grant award was worthwhile. What is wrong with this?

Well, for a start, if your idea doesn’t work as well as you thought it would, you have a problem. You might get a null or even a negative result. What do you do then? All that work and you can’t get a decent publication out of it, which means you can’t get another grant later on. At the very least, you would be encouraged to gloss over the results, carry on to the next thing, change tack. This happened to me during my own PhD. If you were less honest, you might become embroiled in some kind of data-fabrication saga, claiming things that were untrue. How else can you salvage anything from those years of work?

The key point here is the phrase “decent publication in a reputable journal”, which you need to keep the grants coming. Most researchers and research agencies judge this by the impact factor of the journal and/or citations to the paper itself. Within this framework there is no room for null or negative results. Why would you spend the time and effort writing up an experiment that did not work out how you hoped, and does not really further understanding? Apart from, of course, to prevent others following the same doomed course of action. This may earn you some brownie points with peers, but no money. So most researchers don’t bother to write up null results. Even if they do bother, publishing them is not straightforward.

In the main journal I used to be Editor of, we looked for novelty or advance in the chemistry or properties of materials. These criteria ruled out an awful lot of “low-impact” work that was basically “just another” synthesis of a nanomaterial or the like. We knew that these papers would not receive large numbers of citations, so we did not want to publish them in our journal. Most editors would be the same. Null results were obviously not wanted. Negative results would occasionally make it in if they were unexpected or in disagreement with previous work – then there is more chance that people will cite them.

So is it the journals and their editors who are to blame? Well, you could be forgiven for thinking that. But look deeper. Why do the journals want to publish high-impact papers, and exclude those that they believe will not be? Simple, really. If my journal does not have a good impact factor and a reasonable amount of content, who will buy it? University and industrial library budgets are being cut all the time, and librarians are actively looking for “unnecessary” journals to drop. How do you decide what is unnecessary? Normally it is considered to be low-impact work: small niche journals, articles that will not be read or cited by their peers. Hmm.

So are the librarians to blame? Not really; they are only applying the standards laid out by the academics themselves – standards imposed by the funding agencies and their ongoing need to measure research outcomes in numerical terms. A null result has a null numerical measure. Null benefit to the world, though? I think not.