
Who do you trust? And why?

March 21, 2011

“In God we trust”, or so they say. Nice, if you truly believe that there is some kind of all-knowing, all-powerful being out there who has your best interests at heart, and who can send you signs telling you what is going on. If you don’t believe any of this (and I don’t), then who can you trust? And where can you go to get information that you know is accurate?

Not this blog, that’s for sure.

Not that I would intentionally lie to you – I want to make that clear. But this blog gives my opinions, and those of others who comment on it. Some of this will be backed up by fact. But if you want real, true, hard facts, a blog is the wrong place to go looking for them.

Ok, so you don’t go to a blog. But maybe you look for the answer on the internet? Use a famous search engine or ask a question in a forum. Maybe you look up the answer in a book, or ask a friend. You could ask a teacher, a doctor, a librarian, a scientist or other expert, depending on what your question is about. You could watch a TV programme about it. Or stop someone on the street and ask them.

What I am trying to get at is that obviously there are very many places you could go to get answers to your question. Some of these are probably more trustworthy than others – would you take the answer given to you by a passer-by over that of a teacher? Probably not. But most people would tend to believe their friend over a random stranger. Even if, unbeknownst to them, the random stranger might actually be a top expert in that very subject, whereas their drinking pal is not. It’s probably human nature: we spend time with people we like, so we trust them more. We surround ourselves with people who we like and who have similar backgrounds, training and beliefs to our own. And we trust these people over strangers. Which in the end means we trust our own judgement over that of others.

I’ve had countless conversations about various “non-scientific” groups such as climate change deniers and religious groups. Some members of these groups claim that belief is more important than what you can tell from the evidence. This is an inherently unscientific claim as all good scientists know that your hypothesis must be testable and you arrive at it by making educated guesses (inferences) based on your observations. If there is no evidence there is no inference and no hypothesis to test. But it is not necessarily wrong – there may be things happening in the world for which we (currently) have no reliable evidence. Personally though, I would tend to regard ideas for which there is evidence as being more likely to be correct than beliefs for which there is no evidence.

Another important side of this is whether you can or should believe someone when they tell you something. Governments tell their people something and expect them to believe and obey – like the Japanese government right now telling the people it is all safe and not to panic. Advertisers tell you that their product is best. Scientists tell you not to believe in ghosts. Your doctor tells you to cut down on fat. Your friend tells you it’s all a big conspiracy and not to believe any of them. So who do you believe, and why?

Some people or agencies have earned our respect previously, by doing good work or by making pronouncements that have subsequently been proven to be truthful (particular charities). Some have a more mixed history (governments). Some we just want to believe because the alternative is too awful (climate change deniers).

As a scientist I want to engender trust in science and other scientists. I believe that, in general, what is peer-reviewed and published represents the best of our knowledge about a subject. (I say in general because, as many of us know, peer review and publication do not guarantee trustworthiness – think cold fusion, etc). I put my trust in the community-based checking mechanisms that ensure that only the truth is put out there. If I see a news story about science, I ask “where has this come from, where are the papers published?” and until I know the answer, I reserve judgement. Well mainly, anyway. As a human, if I see a story I want to believe, and think is likely, I will probably believe it anyway, especially if it comes from a source I trust. Except the BBC on April Fool’s Day.


A quick update of what I have been up to

February 25, 2011

First of all, I’d like to apologise to my readers for the ridiculously long absence. I have been busy – honest. A lot of what I have been doing involved helping to organise things for the British Science Association National Science and Engineering Week – the event we are running in the Grafton Centre, Cambridge, should be very exciting. We have lots of new (to us) experiments on the (loose) theme of communication. Do stop by and see what we are up to if you are in the area.

It being the start of spring, I’ve also been busy in the garden. Sort of science, sort of muddy fun….

And finally, I have written and published several nice stories on various topics (but mainly graphene):

How to spot monolayers – Question: how can you tell if your boron nitride layer is a monolayer, or two or three layers overlaid? Answer: with this easy Raman technique developed by the group of graphene kings Andre Geim and Kostya Novoselov.

Gold nanoparticle network growth – metal nanoparticle systems are being used extensively and increasingly in biological, chemical and physical studies, so understanding what makes them tick is really important.

Bionanoelectronics – no Frankenstein – a neat summary of how bioelectronics is being used today and how it will/could/should revolutionise the world tomorrow.

Graphene tracks for aluminum trains – I thought this was cool. Basically a graphene surface was modified with electronic contacts such that a cluster of Al can be moved along it and even made to turn corners. (And an apology for the “aluminum” spelling here – Wiley require US spellings so that is what I used – but I particularly dislike this one!)

Speed dating for pharmaceuticals – a computational study that could be really handy for those med chemists trying to find the best way to deliver drugs (or ways to repatent old ones) – it calculates the strength of all the hydrogen bonds in the crystal formed by an active component and a co-crystal partner, and from those works out which partners are the most likely matches (I’ve sketched the rough idea just below).
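
For anyone curious what that kind of calculation boils down to, here is a rough, purely illustrative sketch in Python of ranking candidate co-crystal partners by a summed hydrogen-bond score. The functional groups, partner molecules and “strength” numbers are invented placeholders for illustration, not the method or values from the paper itself.

# Illustrative only: rank hypothetical co-crystal partners for an active
# ingredient by the total strength of the hydrogen bonds they could form.
# The donor/acceptor groups and strengths below are made-up placeholders.
from itertools import product

HBOND_STRENGTH = {  # (donor, acceptor) -> strength in arbitrary units
    ("carboxyl_OH", "pyridine_N"): 8.0,
    ("amide_NH", "carbonyl_O"): 7.0,
    ("hydroxyl_OH", "carbonyl_O"): 5.5,
}

def score(api, partner):
    # Sum the strengths of every donor-acceptor pairing between the two molecules.
    pairs = list(product(api["donors"], partner["acceptors"])) + \
            list(product(partner["donors"], api["acceptors"]))
    return sum(HBOND_STRENGTH.get(pair, 0.0) for pair in pairs)

api = {"donors": ["carboxyl_OH"], "acceptors": ["carbonyl_O"]}
partners = {
    "nicotinamide": {"donors": ["amide_NH"], "acceptors": ["pyridine_N"]},
    "catechol": {"donors": ["hydroxyl_OH"], "acceptors": []},
}

# Print the candidates from best to worst total hydrogen-bond score.
for name in sorted(partners, key=lambda n: score(api, partners[n]), reverse=True):
    print(name, score(api, partners[name]))

The real study, of course, computes actual hydrogen-bond energies in the crystal rather than looking numbers up in a table, but the ranking idea is much the same.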

Hope you like these, enjoy!

Lucky dip

November 25, 2010

Two new stories of mine, on hybrid electrolytes and a new way to dye biodegradable polymers, have appeared on the Chemistry World website, and a couple more on Materials Views, covering nanoplasmonics, nanomemory and super strong nanocontacts. Quite a mix!

Logic gates take the strain and various surface modifications

September 14, 2010

Some more of my work has appeared on Materials Views….

A story on a paper by ZL Wang and co-workers on smart logic gates that can be operated simply by bending the substrate. This story made it to the MaterialsViews newsletter as the headline – it’s a nice piece of work, but I would like to think that my write-up also had something to do with this!

A piece on single layers of quantum dots arranged on a surface, and another on stripy patterns made on a surface using combined top-down and bottom-up approaches.

To be honest I wrote these so long ago I can’t remember much about them but I think they were pretty good papers. The piezotronic switching logic gate I remember slightly better and this was pretty cool. Enjoy.

Small things on surfaces.

September 3, 2010

A couple more of my pieces are now live on the Nano Channel of the MaterialsViews website. Here you can read about work to exploit all three dimensions of a patterned photoresist, from the ubiquitous Whitesides group, and also a method for focussed nanopatterning. Both, basically, discuss how to make small things on surfaces.

Results null but not void

August 31, 2010

There’s been a bit of discussion recently in various forums, perhaps most visibly on Ben Goldacre’s Bad Science blog, about the usefulness or otherwise of null results. I thought I would add my twopennorth to the discussion.

So first of all, what is a null result? Mostly when people use this term they are referring to a result that does not tell them anything that they want to know; sometimes they even mean a negative result. I would argue that the two are quite distinct: a negative result goes against your hypothesis, whereas a null result is simply one that does not further the research one way or the other.

In medicine and pharmaceutical trials the null or negative result is clearly of great importance and you can see why companies producing the drugs would be motivated to hide or cover up such results – why would you want to tell the world if the very expensive drug you are hoping to sell to cure cancer is less effective than current treatments? That’s basically saying “don’t buy this drug” and if you are a business that’s not a good advert for your products.

This argument can be extended to anyone with a commercial/financial interest in the outcome of the research (I’ve touched on this in my review of Fiona Godlee’s Sense about Science talk). A way round it might be to make the testers independent of the developers. This is a system that might work in really big industries like pharma, but would not really be feasible for smaller, or niche industries.

I’m thinking of materials development, as a subject close to my heart. I think it is fair to say that if you are an aerospace company buying materials to make your plane with, you are not going to take the claims of the makers of any materials you buy in very seriously, and will want to do your own tests. A null result means you won’t buy the product, end of story. If you are the company that makes the materials, and your own tests show the material to be worse than previous ones, you will put that project to one side and start again. Time wasted, yes, but probably things learned along the way. Most companies doing research build this into their research budget. Even pharma companies recognise that only about one in several thousand lead compounds will ever make it to market.

But the place where null and negative results are still the subject of great debate is academia. As an academic researcher, you get a few grants per year and are expected to produce several publications in reputable journals so that you can continue to receive such grants in the future. The publications show that the grant award was worthwhile. What is wrong with this?

Well for a start, if your idea doesn’t work as well as you thought it would, you have a problem. You might get a null or even negative result. What do you do then? All that work and you can’t get a decent publication out of it, which means you can’t get another grant later on. At the very least, you would be encouraged to gloss over the results, carry on to the next thing, change tack. This happened to me during my own PhD. If you were less honest, you might become embroiled in some kind of data fabrication saga, claiming things that were untrue. How else can you salvage anything from those years of work?

The key point here is the phrase “decent publication in a reputable journal”, which you need in order to keep the grants coming. Most researchers and research agencies judge this by the impact factor of the journal, and/or citations to the paper itself. Within this framework there is no room for null or negative results. Why would you spend the time and effort to write up an experiment that did not work out how you hoped, and does not really further understanding? Apart from, of course, to prevent others following the same doomed course of action. This may earn you some brownie points with peers, but no money. So most researchers don’t bother to write up null results. Even if they do bother, publishing them is not straightforward.

In the main journal I used to be Editor of, we looked for novelty or an advance in the chemistry or properties of materials. These criteria were used to rule out an awful lot of “low impact” work that was basically “just another” synthesis of a nanomaterial or the like. We knew that these papers would not receive large numbers of citations and so we did not want to publish them in our journal. Most editors would be the same. Null results were obviously not wanted. Negative results would occasionally make it in if they were unexpected or in disagreement with previous work – then there is more chance that people will cite such work.

So it is the journals and their editors who are to blame? Well, you could be forgiven for thinking that. But look deeper. Why do the journals want to publish high impact papers, and exclude those that they believe will not be high impact? Simple really. If my journal does not have a good impact factor and a reasonable amount of content, who will buy it? University and industrial library budgets are being cut all the time and librarians are actively looking for “unnecessary” journals. How do you decide what is unnecessary? Well, normally this is considered to be low impact work, small niche journals, articles that will not be read or cited by their peers. Hmm.

So librarians are to blame? Not really, they are only applying the standards laid out by the academics themselves, and imposed by the funding agencies with their ongoing need to measure research outcomes in numerical terms. A null result has a null numerical measure. Null benefit to the world, though? I think not.

Wet weather wear and nanosprings going for gold

August 17, 2010

Just an update on a couple of stories I wrote recently for Chemistry World. Here are links to a piece about a nice JACS paper on gold nanowires that are wound into ordered coils by means of encapsulation in a polymer micelle, and a lovely J. Mater. Chem. (excellent journal!) paper about directional water transport within a porous fabric. These were two of my favourite papers that I have seen in a while, science-wise (I was saddened that the quality of copy-editing in the latter paper was somewhat sub-standard, as I didn’t feel it did either the author or the journal justice).