A Wild Night Out – Uncaged Monkeys in Cambridge

May 17, 2011

I can’t remember where I first heard about Uncaged Monkeys and their visit to Cambridge. I do remember that I had never heard of the Radio 4 comedy The Infinite Monkey Cage, on which this live show is based. And a good job too: generally I don’t like R4 comedy, so this might have put me off. I just saw the announcement of the headline line-up: Ben Goldacre, Simon Singh, and Brian Cox, and thought that this must be a fantastic opportunity to see three top speakers locally and in a great venue (I love the Cambridge Corn Exchange and its odd-shaped roof, which evokes so much history). My husband had even less idea than I did what I was dragging him along to, not having bothered to read past “Science show”. I can safely say that we were both pleasantly surprised by what we saw.

Robin Ince was the host and warm-up act for the evening. He is also the main presenter of the radio programme, but this meant nothing to me at the time. His comedy focussed largely on his small child’s understanding of the world, and on other, less scientific jokes. As a warm-up act he worked well, though I would not go and see a show that sold itself purely on his humour.

Professor Brian Cox was pretty good. I have to confess here to having a personal and irrational dislike of him, though I do think that a lot of what he does is good science communication. And he didn’t flick at his hair once during this show, so perhaps someone has told him how annoying that habit is! But he was funny and covered some really hard-core science, which almost had me believing that it is worthwhile spending money on the LHC (just not such a big proportion of the science budget) to allow us to find out more, not only about how the universe started, but about how this might apply to and affect our everyday lives now.

Dr Ben Goldacre I’ve heard speak before, and read his Bad Science book. While he’d obviously modified his talk for this event, it did not contain a lot that was new to me. I did learn, however, that he started life as an epidemiologist, which perhaps explains his love and deep understanding of statistics. I am a fan and regularly read his column in the Guardian/blog etc. However I do wonder if it is really so well named. At the risk of being pilloried for quoting Bon Jovi songs, Bad Medicine might be a better name for his work as most of it is medical or health-related. Which is fine, he is a doctor and should talk about what he knows. But it would be nice to see someone else covering the rest of science – maybe he could bring in some guest writers.

Simon Singh probably needs no introduction either to anyone reading this. For me personally, his talk was probably the highlight of the show, though I did enjoy all of it. He concentrated on codes and codebreaking, as per an early book of his. However he clearly knows a lot about a lot of things, as seen in the Q&A session. If you’d read his book I suspect you wouldn’t have gained too much from what he said, but I had not. He started his show by busting the myth about music played backwards having secret messages in it, in this case “Stairway to Heaven”, and continued by pointing out that people who find hidden messages in e.g. the Bible are just finding what they already know and want to believe. All good points, and fairly obvious really when you think about it. I am not sure that he would have convinced the message-finders though, as they will believe what they want anyway.
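Singh’s point about hidden messages is easy to demonstrate in code. The usual “Bible code” approach searches for equidistant letter sequences (ELS): the letters of a word appearing at some fixed skip through the text. With enough skips to try, short “hidden” words turn up in almost any text, which is exactly why finding them proves nothing. A minimal sketch (the function name and parameters are my own invention, not anything from the talk):

```python
def find_els(text, word, max_skip=50):
    """Find `word` as an equidistant letter sequence in `text`:
    its letters appearing at a fixed interval (`skip`) apart.
    Returns (start_index, skip) pairs over the letters-only text."""
    # Strip everything but letters and lower-case, as ELS searches do.
    s = "".join(c.lower() for c in text if c.isalpha())
    word = word.lower()
    hits = []
    for skip in range(1, max_skip + 1):
        for start in range(len(s)):
            end = start + skip * (len(word) - 1)
            if end >= len(s):
                break  # later starts only run further past the end
            if all(s[start + i * skip] == word[i] for i in range(len(word))):
                hits.append((start, skip))
    return hits

# Even a single mundane word can contain a "hidden message":
print(find_els("concatenate", "cat"))  # → [(3, 1)]
```

Scale this up to a whole book and thousands of skip values, and finding meaningful-looking messages is practically guaranteed; you find whatever you go looking for.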

Helen Arney is a self-styled geek rocker, the musical interlude of the evening. Her songs mostly contained jokes for scientists rather than science. I don’t know how much she will be able to sell as I felt that once I’d heard and laughed at the songs once, I wouldn’t need to do so again. But as a slot in this show she was a welcome change of pace and fitted in well with the mood of the evening.

Adam Rutherford is apparently also a TV presenter, amongst other things. I’d never heard of him and was amazed at how young he appeared. His show was entertaining and educational and covered genetics but it seemed unnecessary to poke fun at people from Norfolk in order to get laughs.

Professor Steve Jones was the “guest” speaker of the evening. As Rutherford admitted that Jones was his ex-supervisor, I was not clear why both of them were invited to talk. Jones seemed to be the epitome of the slightly absent-minded scientist, although he’s clearly spoken in many prestigious quarters and now appears to have taken up broadcasting too. His delivery was more scientific and less comedic than Rutherford’s, which for me personally provided some balance to the evening.

The Q&A session part way through was probably unnecessary. It was in any case only applicable to those on Twitter, as this was how questions were posed, and given the mixture of people in the audience I’d say that fewer than half of them were Twitter-enabled. The questions were mixed, and although the session provided some humour, most of it wasn’t of a very high level. Cox was the only one of the panellists to take a very simple question and inject some deep fundamentals, as well as clear explanation, into his answer.

I may be biased, though I felt that the range of science covered could have been more comprehensive. We had a lot of biology (Jones, Rutherford, Arney), physics (Cox and Ince), maths (Singh) and medicine (Goldacre). But what about chemistry? Perhaps there are no chemical comedians…

Overall I would highly recommend this tour to anyone who has not yet been. Beg, borrow, or steal tickets wherever you can. Individually the speakers would also be worth seeing, but the chance to catch all of them at once should not be missed. I will also be tuning in to The Infinite Monkey Cage in the future, at least to see if it is better than traditional R4 comedy.


Chickens and eggs

March 4, 2011

Which comes first, the love of science or the understanding of it? In my opinion this is a bit of a chicken and egg situation. If you do not appreciate and understand how amazing science is, why would you want to learn more about it? And if you do not learn about it, how can you appreciate its true wonders?

I recently had a discussion in which I defended the importance of scientists going into teaching. I’ve also been doing quite a bit more in the way of public engagement type activity, as some of you may have noticed from my last post about the British Science Association event. I started this because I think it is important to ensure scientific literacy and appreciation in everyone, kids and adults alike. My experience and observations doing it have been great, and they have been thought-provoking.

Many of the comments we’ve received during the science-busking events we’ve done have gone along the lines of “Better than science in school”, “I’m really into science and this is great”, and “Why don’t we learn about this in school?” Many adults assumed that the event was aimed at their kids and not at them, because “science is for kids, it’s something you learn in school”. That is a shame, because what we are trying to show is how vital science is for everything that you do: it surrounds us and is integral to our lives. Now you can argue that busking is not the best way to reach the adults, and this may be right, but the attitude still persists and must, I feel, be addressed.

Science teaching is really important because it carries the message about the significance of science in everything. But the way that science is taught in many schools (especially due to over-zealous health and safety concerns) means that the wonder and amazement are difficult to convey. I’m not an expert in education and I don’t have direct answers to this, but I am sure that it would help if impressionable minds could be impressed with the amazing things that science can do. It would also help if parents believed that science was important and conveyed this attitude to their children. It might help if science was not just one lesson in a list of English, maths, French, science, PE, etc. Science comes into all of these subjects and could be demonstrated in all of those lessons rather than taught as a rather abstract thing to be learned by rote. There have been various attempts to do this and I am sure that some of them are successful. As I said, I’m not an expert. But you can tell from the state of society today, and the media in particular, that many people have no interest in or understanding of science. So what’s going wrong?

Instead of targeting the kids, we could start with the people in charge, those with the power to make a difference. Ask them to promote the importance of science, and the importance of having a future generation as well as a current generation of scientifically literate people. They don’t all need to be scientists, but understanding the scientific method will help in so many ways in different parts of society. But if science continues to be taught in a way that makes it difficult to be enthusiastic about it, none of this will have much effect. We will just be imposing dull lessons on our kids, who will probably then hate science forever.

And how do you convince the policymakers etc. of the importance of something that they have hated since childhood and that only a minority wants to study?

As ever I would love to hear what you think – do we aim to start with the chicken or the egg? Or something entirely different? And what is the most effective way for one person to make a difference in this regard – should I write to my MP (who seems most uninterested in scientific questions in general) or go into teaching? (This is not a serious career-change question, as I would make a terrible teacher, but as a means of making a difference it certainly has potential.)

A quick update of what I have been up to

February 25, 2011

First of all, I’d like to apologise to my readers for the ridiculously long absence. I have been busy – honest. A lot of what I have been doing has involved helping to organise things for the British Science Association National Science and Engineering Week – the event that we are running in the Grafton Centre, Cambridge, should be very exciting. We have lots of new (to us) experiments on the (loose) theme of communication. Do stop by and see what we are up to if you are in the area.

It being the start of spring, I’ve also been busy in the garden. Sort of science, sort of muddy fun….

And finally, I have written and published several nice stories on various topics (but mainly graphene):

How to spot monolayers – Question: how can you tell if your boron nitride layer is a monolayer or two or three layers overlaid? Answer: with this easy Raman technique developed by the group of graphene kings Andre Geim and Kostya Novoselov.

Gold nanoparticle network growth – metal nanoparticle systems are being used extensively and increasingly in biological, chemical and physical studies, so understanding what makes them tick is really important.

Bionanoelectronics – no Frankenstein – a neat summary of how bioelectronics is being used today and how it will/could/should revolutionise the world tomorrow.

Graphene tracks for aluminum trains – I thought this was cool. Basically a graphene surface was modified with electronic contacts such that a cluster of Al can be moved along it and even made to turn corners on it. (And an apology for the spelling of aluminium here – Wiley require US spellings so that is what I used – but I particularly dislike this one!)

Speed dating for pharmaceuticals – a computational study that could be really handy for those med chemists trying to find the best way to deliver drugs (or ways to repatent old ones) – it calculates the strength of all hydrogen bonds in the crystal of an active component and a co-crystal partner and comes up with what would make the most likely partners.

Hope you like these, enjoy!

Taking sides – review of debate on science journalism taking sides

September 24, 2010

This was a not-very-well-advertised debate about science journalism, organised by the Times‘ Science Editor Mark Henderson (sorry Mark, due to The Times’ policy of charging for content you don’t get a proper link) at the RI. It was chaired by Fiona Fox of the Science Media Centre, and the speakers were Henderson, Ceri Thomas (Editor of the BBC Radio 4 Today programme), Prof. Steve Rayner (Oxford, Science and Civilisation) and Ed Yong (Information Manager at CRUK and famed science writer/blogger).

Enough with the links….down to the content…..

The RI debate format seems pretty constant, as it was the same at the last one I attended – each person gets five minutes to state their position, Fox asks some questions, then the debate is opened up to questions from the floor. Personally I found her method of taking four questions from the floor and then asking for answers quite irritating, as it means most people have forgotten the first question by the time it comes to be answered. Anyhow, here are my summaries of the positions of the four speakers:

Henderson: The traditional view of impartiality and fairness is impossible to achieve. Correctness/accuracy and transparency are more important than being balanced. It is the journalist’s duty to evaluate competing claims and provide evidence, uncertainties must be acknowledged.

Thomas: Science is no different from any other subject and shouldn’t be treated differently (NB Thomas was the only speaker who was not a specialist science editor/reporter, which may have something to do with his position!). Reporters shouldn’t take sides, and it’s OK to put people who are “wrong” on air. Though reporters should take the side of reason and evidence, it is important to remember that these are not the only important things; most people make decisions based on emotions and irrational thoughts, so these need to be acknowledged too. It’s important to represent views not “liked” by scientists where these exist, to show they are out there and so they can be held to account in public.

Rayner: Mostly thought that the debate was about science policy rather than science. Scientists shouldn’t have the last say in policy debates because the debate is not about the science itself. When scientists are called on to make judgements about subjects outside of their expertise they are no better than any other layperson. Dislikes polarisation such as the portrayal of “climate deniers” as he feels this stifles debate about the real problems and the ability to reach an inbetween position.

Yong: Many reporters are lazy and don’t investigate enough. The phrase “scientists have claimed…” is a get-out that allows lack of investigation and lack of endorsement. Reporters must provide context and analysis, because if they don’t, in the internet age, someone else will. Shifting the necessity to make the decision onto the reader is tricky, because the reader has fewer resources (and less will) to make this decision than the reporter. All choices are subjective, including what we choose to write about at all and how it is written. Overuse of quotes, and getting others to tell the story for you, is a problem. Reporters shouldn’t take sides with a specific scientist, theory, or science, but should always take the side of truth. Journalistic practices are not always compatible with this.

This was an interesting debate, as ostensibly all of the speakers were on the same side (i.e. take the side of truth) but all had quite different approaches to it. I did disagree with a few of the comments that were thrown out there. For example, Henderson said at one point that if something seems too good to be true it usually is, and it is up to journalists to get to the bottom of things and find this out. Which is all well and good, but if the work has been peer reviewed (as in the example of the Woo Suk Hwang fraud he used), what makes journalists qualified to discover this when several peer reviewers and trained editors cannot? Because they are not experts in anything, journalists are only as good as their sources. And someone else also made the point that we all know scientists who will say certain things on certain topics, so you can pretty much always find someone who will say what you want to hear.

I also disagreed with Thomas’s position that science should not be treated differently from art or politics. In those fields, opinion and point of view actually shape the outcome: if enough people think something, this will inform a policy or a perception of quality. But in science there are specific rules and ways of working, and they are not subject to opinion; they just are. Obviously interpretation of results is variable, but even that interpretation is based on context within science, models, and so on, that non-scientists cannot hope to understand. If you let unscientific minds try to interpret results, they won’t know where to start, and you end up with the kind of statement that says you don’t need evidence for something, like God or ghosts or the Holocaust, because you (want to) believe it, so it must be true.

Rayner’s insistence through most of the debate that discussions about science in the news are mostly about science policy, not science itself, was interesting, but I think ultimately wrong-headed. Yes, I agree that this does happen, and there is no point taking only a scientist’s viewpoint on whether stem-cell research should continue or what kind of drugs should be legal, because scientists are not equipped to pronounce on the societal concerns and consequences of the science. But they do need to be a part of the debate. If you don’t have a scientist to tell you about their research, then how can you hope to anticipate the societal consequences?

There was a long-running point introduced by Fox about what “her Mum” (read: non-scientist member of the public) would understand on reading news pieces. Not knowing Fox’s mum, I imagined my equally scientifically illiterate and uninterested Grandma in this position; you can insert your own beloved relative or neighbour for ease of imagination. This person is not interested in investigating further something they are told. They want to be told, in words of few syllables, only what the news is and why it is important, not to be expected to make up their own mind. They need a clear message, not a balanced piece, as there is a danger they will only read half of a story before getting bored of it, thus missing the other half of the argument. If they don’t like what they read, they will go somewhere else where they can get what they do like. When faced with this sort of person, “Joe Public”, how realistic is it to publish a balanced piece in which the scientific viewpoint challenges general beliefs, and then expect that the reader can really make up their own mind in an informed way? Discuss.

Another much-discussed point was the differences between different media. People may expect opinions in blogs, and that’s where they go to get opinions; this is where you get communities building up that agree with each other. There may be a place (and I personally think there is) for straight reporting that gives the bald facts and doesn’t try to dress them up too much: information rather than propaganda. There may also be a place for more opinionated commentary on those facts. But it should always be made clear which is which.

For the avoidance of doubt, this blog is a representation of my opinions which I am justifying with facts where possible – if you want pure scientific facts read a journal paper, discard all the interpretation, and hope that the data is not fabricated. Pure facts are pretty hard to come by these days.

Results null but not void

August 31, 2010

There’s been a bit of discussion recently in various forums, perhaps most visibly on Ben Goldacre’s Bad Science blog, about the usefulness or otherwise of null results. I thought I would add my twopennorth to the discussion.

So first of all, what is a null result? Mostly when people use this term they are referring to a result that does not tell them anything that they want to know; perhaps it is even a negative result. I would argue that these two are quite distinct, and that a null result is simply one that does not further the research one way or the other.

In medicine and pharmaceutical trials the null or negative result is clearly of great importance and you can see why companies producing the drugs would be motivated to hide or cover up such results – why would you want to tell the world if the very expensive drug you are hoping to sell to cure cancer is less effective than current treatments? That’s basically saying “don’t buy this drug” and if you are a business that’s not a good advert for your products.

This argument can be extended to anyone with a commercial/financial interest in the outcome of the research (I’ve touched on this in my review of Fiona Godlee’s Sense about Science talk). A way round it might be to make the testers independent of the developers. This is a system that might work in really big industries like pharma, but would not really be feasible for smaller, or niche industries.

I’m thinking of materials development, as a subject close to my heart. I think it is fair to say that if you are an aerospace company buying materials to build your planes with, you are not going to take very seriously the claims of the makers of any materials you buy in, and will want to do your own tests. A null result means you won’t buy the product, end of story. If you are the company that makes the materials, and your own tests show the material to be worse than previous ones, you will put that project to one side and start again. Time wasted, yes, but probably things learned along the way. Most companies doing research build this into their research budgets. Even pharma companies recognise that only about one in several thousand lead compounds will ever make it to market.

But the arena where null and negative results are still hotly debated is academia. As an academic researcher, you get a few grants per year and are expected to produce several publications in reputable journals so that you can continue to receive such grants in the future. The publications show that the grant award has been worthwhile. What is wrong with this?

Well, for a start, if your idea doesn’t work as well as you thought it would, you have a problem. You might get a null or even a negative result. What do you do then? All that work, and you can’t get a decent publication out of it, which means you can’t get another grant later on. At the very least, you would be encouraged to gloss over the results, carry on to the next thing, change tack. This happened to me during my own PhD. If you were less honest, you might become embroiled in some kind of data-fabrication saga, claiming things that were untrue. How else can you salvage anything from those years of work?

The key point here is the phrase “decent publication in a reputable journal”, which you need to sustain and continue grants. Most researchers and research agencies judge this by the impact factor of the journal, and/or citations to the paper itself. Within this framework there is no room for null or negative results. Why would you spend the time and effort to write up an experiment that did not work out how you hoped, and does not really further understanding? Apart, of course, from preventing others following the same doomed course of action. This may earn you some brownie points with peers, but no money. So most researchers don’t bother to write up null results. Even if they do bother, publishing them is not straightforward.

In the main journal I used to be Editor of, we used to look for novelty or advance in chemistry or properties of materials. These criteria were used to rule out an awful lot of “low impact” work that was basically “just another” synthesis of a nanomaterial or the like. We knew that these papers would not receive large numbers of citations and so we did not want to publish them in our journal. Most editors would be the same. Null results were obviously not wanted. Negative results would occasionally make it in if they were unexpected or in disagreement with previous work – then there is more chance that people will cite these works.

So is it the journals and their editors who are to blame? Well, you could be forgiven for thinking that. But look deeper. Why do the journals want to publish high-impact papers, and exclude those that they believe will not be? Simple really. If my journal does not have a good impact factor and a reasonable amount of content, who will buy it? University and industrial library budgets are being cut all the time, and librarians are actively looking for “unnecessary” journals. How do you decide what is unnecessary? Well, normally this is considered to be low-impact work, small niche journals, articles that will not be read or cited by their peers. Hmm.

So are librarians to blame? Not really: they are only applying the standards laid out by the academics themselves, standards imposed by the funding agencies and their ongoing need to measure research outcomes in numerical terms. A null result has a null numerical measure. Null benefit to the world, though? I think not.

Science and the Public Conference – 3rd July

July 17, 2010

This interesting conference, organised mainly by Alice Bell from the Imperial College Science Communication department, made me realise that until quite recently most “science communication” has been done by social scientists rather than by physical scientists. You could discuss the pros and cons of this situation for quite a while. As I see it, the pros are that they can be more objective, as they are not so close to the science, and that they generally have better communication skills and are more encouraged to use them; the cons are that they may not understand the science properly, can’t cut through jargon and hype so easily, don’t relate so well to people in general and, not being scientists themselves, are not used to scientific methodology (more on this later). But, apart from a few star scientist communicators, that is the way things have been. It was interesting that most of the attendees seemed to be scientists, whereas most of the speakers (certainly more than 50%) seemed to be social scientists. The point that I am trying to make here is that you would expect, at a science communication conference, that scientists and social-scientist science communicators would be able to find a middle ground where they could actually discuss things on terms that everyone can understand. For the most part this did succeed, though I felt that there were one or two notable exceptions.

But first the highlights:

I saw a highly entertaining talk by a chap from Carbon Visuals who is trying to get people to understand amounts and relative sizes without giving numbers. At first I thought this was going to be akin to New Scientist’s ongoing “this is equivalent to x blue whales” commentary, but I was pleasantly surprised. Basically he was trying to put amounts into a context that people can relate to: for Imperial College students he used a campus map and related CO2 emissions to the size of the buildings, for an office near Kings Cross he used the new St Pancras station building, for a presentation to Londoners he used Trafalgar Square, and so on. Not only did he have some really cool graphics, but getting people to relate something unknown to something they know intimately was a great idea.

There were some interesting presentations about science education too, including the effect of inaccuracies in cartoons on how and what children learn (it depends on their age, apparently, but kids do learn from cartoons, and they don’t always realise that they are learning). One example: when Tom from Tom and Jerry bangs his head and forgets who he is, another bang on the head brings all his memories back; sadly, knocking an amnesiac on the head is unlikely to have such positive effects in real life. There were also a couple of good presentations about how people learn and about how to get at the public’s general attitudes to science (pretty hard, because most people don’t even know what is meant when you ask about “science”, and they also confuse the concept of science with the concept of learning about science). I’m not going to go into great detail on any of these, but I did find most of the day really interesting and informative, and it gave me a lot to think about.

Now, without wanting to hurt any feelings, to the part I personally found less useful. There was a keynote lecture by a clearly eminent social scientist who had for some years been studying how social scientists study the communication of science (I think I mean the theory of science communication and the methods used). His first less-than-groundbreaking conclusion was that when you ask questions and get answers that don’t fit your expected categories of answer, you should not just junk those answers, as they may tell you something important about the survey you are doing. To a scientist this is like saying don’t junk the outliers as they may tell you something about the system you are studying, i.e., blindingly obvious. To me the idea that social scientists may have been ignoring their outliers all this time is quite a scary one.

He then went on to discuss a specific type of what he called science communication, which is basically those gadgety toy/model/arty things that are based on science, like mousetrap tables and personalised rings grown from a culture of your own bone (I was fascinated by the latter idea, though I can’t think who would want to do this – it’s a bit like Brangelina allegedly exchanging vials of each other’s blood). He then compared these specific models to the general precepts used by social scientists to engage the public in science, and concluded that the “design” model is a rather passive one which does not really seek to engage or to inform in any real way. I do take issue with the whole of scientists’ efforts at science communication/engagement being lumped in with these rather esoteric toy systems, and while I agree that the systems he discussed are passive and don’t actively go out to engage or educate people, they are not designed to do that. They are designed to intrigue. Perhaps he should look at some systems that are designed to go out and educate people, and see how those bear up against his fundamental ideas. Oh yes, and he persistently mispronounced microbial as micro-bile. Clearly he hasn’t checked his ideas with scientists, as they would easily have been able to correct him on this point.

So in conclusion: Yes there is a lot to be gained from scientists and social scientists interacting and trying to work out together what is the best way to communicate science and scientific workings to “the public” (whoever they are). As to whether that is really happening yet… well in some circumstances it is, and in others it is not. Both scientists and social scientists have important skills to bring to the table in this; the scientists bring their understanding, their rigour, their enthusiasm, and the social scientists bring their understanding of how people react, their education and communication skills. This can be a meeting of equals, but without both sides coming together and being prepared to listen to and work with a system that at first will seem quite alien to them, it will simply be a look through a frosted window at someone else’s house, and the details will still be misunderstood.

Sense about Science Annual Lecture – Dr Fiona Godlee

July 13, 2010

Firstly I want to apologise for being offline for so long – we moved house and it took a while to get the internet hooked up again. Then there was all the catching-up with work to do… Anyway, here I am again, finally.

So the Sense about Science Lecture. SaS were kind enough to send me an invite to this when I asked. As an ex-Editor myself I am very interested in discussions about peer-review and its various failings.

Fiona Godlee is the current Editor of the British Medical Journal (BMJ); she has been in the job since 2005. In that time she has clearly had to grapple with numerous cases of ethical misconduct – in fact, going on what she told us, it would be fair to say that the medical and pharmaceutical literature is rife with it.

She discussed the problem of reporting bias – not unethical in itself, but still something that skews perceptions and misleads the reader. Basically, people don’t like to report negative results. Drugs that don’t work very well will not make any money, and an experiment that shows nothing or, worse, shows a problem, will in many cases jeopardise funding or even the job of that particular researcher. This is particularly true in medical research, where the funding usually comes from large pharmaceutical companies, but there is an element of it in academic research too, even where funding comes purely from the government. All university researchers have limited time and resources, and these are being squeezed at every corner. Papers that can be published in high-impact-factor journals will advance a researcher’s career and help secure further funding. But papers that report negative results are rarely accepted for publication in high-impact or indeed reputable journals, as the editors know they will not be cited and would bring down their impact factor. This propagates reporting bias, and it also means that the experiments that did not work are doomed to be repeated again and again in different labs, as no-one knows that someone else has already tried them. This is a waste of the precious resources that are being so tightly squeezed already.

But Godlee’s lecture concentrated mainly on two perceived threats to science – the external threat of those who refuse to accept the evidence-based approach to science, and the internal threat of undisclosed conflicts of interest, which lead to data suppression and ghost- and guest-writing. In my background of materials, physics and chemistry, these are not such big issues, but nevertheless it was interesting to consider.

Where researchers receive a large amount of their funding from industry, as is the case in medical research, these sorts of conflicts can always arise. This is because the researchers have a vested interest in the outcome. Of course, you could argue that researchers always have a vested interest in the outcome of their research, as breakthroughs will advance reputation and facilitate additional funding (this could be another whole blog posting in itself). However, the peer-review system generally does a reasonable job of picking up most of those who attempt to play the system. But when the vested interest is a large financial one, the pressures become greater. Government-funded researchers will generally only lose that one grant if a project does not work out. The decision about who to award the grant to next time is made again using peer review and is regulated by committees and procedures which, however much researchers rail against them, are there for a good reason. If all your money, on the other hand, comes from one or two companies who are focussed on being able to sell their particular drugs to the public in a few years’ time, there is a much greater pressure on you to produce and publish only positive results. If the project yields a negative, your chances of receiving future funding from that source are very much reduced, being decided upon by a couple of top executives at that company who will tend to scratch the backs of those who scratch theirs.

Godlee had a few proposals to make to deal with this problem.

1) She wants article-level impact metrics, rather than journal-level ones, to become the norm. I think this is very sensible; it could be applied easily and quickly, and would help a lot. Of course, it would not solve the problem on its own, but it would give a better guideline as to which are the good papers. I don’t think it would prevent journals from publishing dross, though, as some of the dross can still be highly cited.

2) She wants to stop big pharma from directly evaluating its own products (the current committee to approve drugs in the UK is made up of industry representatives). Again, this is sensible in theory, though in practice I wonder whether you would find that most of those with the necessary expertise to make the decisions are involved in some way with the pharmaceutical industry, and there is no-one suitable who is not.

3) She wants to see central industry funding for independent advice and independent drug trials. If you can get the companies to agree to it, this is probably a good idea. I’m not sure why any company would sign up to this, though, unless it were made law. This system is apparently in use in other countries, so I guess they must have laws about it. It would be interesting to see how these came about.

4) She wants to see publication of entire data sets to create better transparency. I’ve touched on my opinions about this before. This really only creates transparency if you are in a position to use the data. It might solve some of the issues, but it would create a load more when those who do not properly understand the data get hold of it and start drawing conclusions from it.

5) She would also like to see more investigative journalism. This is always a good thing, but how to promote it? Journalists are busy people, with ever shorter deadlines and ever more stories to write. To produce a good investigation they need to be given time and resources, which means that they will be less productive in terms of sheer numbers of stories. Editors will not, in most cases, be prepared to countenance this unless it is strongly called for by readers and subscribers. It is up to us!

The final point Godlee made was simple: the public must question everything. This is not a new idea – the motto of the Royal Society, Nullius in verba, translates as “take nobody’s word for it”. But it is sadly a point that, in today’s society of quick headlines and instant gratification, probably does need making.