
SCAN 2010 – Synthesis and Characterization in Nanomaterials Workshop and School

October 19, 2010

I’ve been on holiday, hence the long break from posting. Actually I was in Turkey, one of my favourite places, and the first part of the trip was not a holiday but work, of sorts.

An old friend invited me to speak at a conference she was organising with some colleagues. They wanted someone to give a view on publishing in nanoscience, which I was happy to do. Copies of my talk are available on request from blog@sciencecarol.com.

However, I’m sure you’re not that interested in what I had to say, but rather in the meeting itself. It was SCAN 2010 and covered all kinds of nanomaterials: their synthesis, characterisation, properties, and applications. The scope ranged from catalysis using transition metal nanoparticles to bioinspired functional surfaces, in-depth STM (by which I mean detailed, not deep – haha, surface scientist joke 😉) and electron microscopy studies, and the self-assembly of polymers and polymeric 2D structures. I’ve rarely been to such a broad-ranging meeting, but I have to say that this was also one of the most enjoyable I have attended (and I am not just saying that). A coming-together of disciplines (right across chemistry, spectroscopy, and physics) is appropriate for a topic like nanomaterials, which do cross borders, but to do it in a small and friendly way is unusual and delightful.

Personal science highlights (please don’t be offended if I didn’t pick you – I enjoyed every talk but the nature of highlights is to select only a few for discussion):

  • From the very first session and Prof. Saim Özkar, I learned what I was doing in my PhD when I was adding “palladium zero” catalyst to my Suzuki cross-coupling reaction (as an aside, I was pleased to see that Prof. Akira Suzuki won a share in a Nobel Prize for this important reaction).
  • Dr Marleen Kamperman’s eloquent explanation of the biomimicry of geckos’ feet, or how to become Spiderman.
  • The drive to study real catalysts under real conditions, as demonstrated by, amongst others, Dr Emrah Ozensoy and Dr Alex Goguet.
  • Prof. Kimoon Kim’s superb final plenary lecture summarising a lifetime’s worth of work on 2D polymers from cucurbituril, making advances in materials that are both fundamental and applicable.

I also enjoyed countless interesting conversations with delegates about topics as varied as open-access publishing, impact factors, the predominance of women in Turkish chemistry departments and, of course, mutual friends.

Bilkent University’s chemistry department is, I believe, regarded as the best in Turkey, and the Times Higher Education Supplement ranked the university as a whole as one of the top 200 in the world. Seeing the quality and variety of science coming out of it, this is easy to believe. What is really impressive is that this achievement comes on the back of far fewer resources than most western universities have at their disposal. Recent investments have, for example, enabled the chemistry department to buy its own NMR spectrometer, but until a couple of years ago this service, which most European scientists take for granted for routine characterisation, was not readily available at Bilkent. This makes the level and amount of science being pursued at Bilkent all the more impressive to me.

The workshop and school were also a chance for the three organisers (Dönüş, Emrah, and Erman) to demonstrate the famous Turkish hospitality, and they excelled. The food was fantastic and plentiful, the students attentive, and the atmosphere throughout the workshop inclusive and inquisitive, as befits a meeting of scientific minds.

I’m interested to know what other delegates’ personal highlights were and welcome, as always, any thoughts on what I have said here.

Results null but not void

August 31, 2010

There’s been a bit of discussion recently in various forums, perhaps most visibly on Ben Goldacre’s Bad Science blog, about the usefulness or otherwise of null results. I thought I would add my twopennorth to the discussion.

So first of all, what is a null result? Mostly when people use this term they are referring to a result that does not tell them anything they want to know; perhaps it is even a negative result. I would argue that these two are quite distinct, and that a null result is simply one that does not further the research one way or the other.

In medicine and pharmaceutical trials the null or negative result is clearly of great importance, and you can see why companies producing the drugs would be motivated to hide or cover up such results – why would you want to tell the world that the very expensive drug you are hoping to sell to cure cancer is less effective than current treatments? That’s basically saying “don’t buy this drug”, and if you are a business that’s not a good advert for your products.

This argument can be extended to anyone with a commercial/financial interest in the outcome of the research (I’ve touched on this in my review of Fiona Godlee’s Sense about Science talk). A way round it might be to make the testers independent of the developers. This is a system that might work in really big industries like pharma, but would not really be feasible for smaller or niche industries.

I’m thinking of materials development, as a subject close to my heart. I think it is fair to say that if you are an aerospace company buying materials to make your plane with, you are not going to take very seriously the claims of the makers of any materials you buy in, and will want to do your own tests. A null result means you won’t buy the product, end of story. If you are the company that makes the materials, and your own tests show the material to be worse than previous ones, you will put that project to one side and start again. Time wasted, yes, but probably things learned along the way. Most companies doing research build this into their research budget. Even pharma companies recognise that only about one in several thousand lead compounds will ever make it to market.

But one place where null and negative results are still the subject of great debate is academia. As an academic researcher, you get a few grants per year and are expected to produce several publications in reputable journals so that you can continue to receive such grants in the future. The publications show that the grant award was worthwhile. What is wrong with this?

Well, for a start, if your idea doesn’t work as well as you thought it would, you have a problem. You might get a null or even negative result. What do you do then? All that work and you can’t get a decent publication out of it, which means you can’t get another grant later on. At the very least, you would be encouraged to gloss over the results, carry on to the next thing, change tack. This happened to me during my own PhD. If you were less honest, you might become embroiled in some kind of data-fabrication saga, claiming things that were untrue. How else can you salvage anything from those years of work?

The key point here is the phrase “decent publication in a reputable journal”, which you need in order to sustain and win future grants. Most researchers and research agencies judge this by the impact factor of the journal, and/or the citations to the paper itself. Within this framework there is no room for null or negative results. Why would you spend the time and effort to write up an experiment that did not work out how you hoped, and does not really further understanding? Apart from, of course, to prevent others following the same doomed course of action. That may earn you some brownie points with peers, but no money. So most researchers don’t bother to write up null results. Even if they do bother, publishing them is not straightforward.
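As an aside for readers who have not met the metric: the impact factor usually quoted is a two-year citation average. This is the standard Journal Citation Reports-style definition, sketched here for illustration; nothing about it is specific to the journals or agencies discussed in this post.

```latex
% Two-year impact factor of a journal for year Y (standard JCR-style definition)
\[
\mathrm{IF}_{Y} =
  \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
       {\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]
```

Every paper that attracts no citations (and a written-up null result is a prime candidate) adds to the denominator but not the numerator, dragging the average down, which is exactly the incentive problem described above.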

In the main journal I used to be Editor of, we used to look for novelty or advance in chemistry or properties of materials. These criteria were used to rule out an awful lot of “low impact” work that was basically “just another” synthesis of a nanomaterial or the like. We knew that these papers would not receive large numbers of citations and so we did not want to publish them in our journal. Most editors would be the same. Null results were obviously not wanted. Negative results would occasionally make it in if they were unexpected or in disagreement with previous work – then there is more chance that people will cite these works.

So it is the journals and their editors who are to blame? Well, you could be forgiven for thinking that. But look deeper. Why do journals want to publish high-impact papers, and exclude those that they believe will not be high impact? Simple, really. If my journal does not have a good impact factor and a reasonable amount of content, who will buy it? University and industrial library budgets are being cut all the time, and librarians are actively looking for “unnecessary” journals to drop. How do you decide what is unnecessary? Normally this is considered to be low-impact work, small niche journals, articles that will not be read or cited by their peers. Hmm.

So librarians are to blame? Not really; they are only applying the standards laid out by the academics themselves, standards imposed by the funding agencies and their ongoing need to measure research outcomes in numerical terms. A null result has a null numerical measure. Null benefit to the world, though? I think not.

Open all areas – thoughts on OA publishing

August 4, 2010

I’ve had quite a few conversations about open access publishing recently, which is odd because I would have thought that I would have this conversation less now that I am not an editor. Funny how things work out.

Perhaps the reason why this topic has come up more recently is that I was working in chemistry publishing, which apparently has the lowest take-up rate of open access (OA) of any discipline (sorry, I’m not sure where I read this so can’t verify it – if anyone can, that would be welcome – my experience is certainly that OA is less popular in chemistry than in other related sciences). Now that I’m trying to take a broader overview of science in general, it becomes more of an issue.

So what is OA publishing? Basically it means that an article, when published, is free for everyone and anyone with internet access to read and perhaps download. There are various models for doing this. One is that advertisers pay to appear on the site, and no-one else has to pay to publish anything there. OA sites can be peer-reviewed or non-peer-reviewed; for ease of discussion I am going to stick to peer-reviewed sites, as anything else is effectively not much more credible than, frankly, a blog (yes, that includes mine), i.e. it is unsubstantiated, unchecked opinion. You can discuss this if you like, but for me that’s another blog post, another time. Maybe I’ll get around to it soon, if anyone is interested.

An extension of this type of model is the one whereby a government funds the publisher directly so that no-one using it has to pay. PLoS is an example of this model. Note that a source of money is still needed or the publisher would not be able to operate the site.

Another model of OA is that those who wish to publish pay for the paper to be made accessible to everyone (as opposed to just subscribers) when the article is accepted for publication. Many reputable publishers, including those in chemistry such as the RSC and ACS, use this model alongside the subscription-only model in a so-called hybrid system; that is, authors are offered the choice, at acceptance of their article, to pay to make it open access. If they don’t choose to pay, then the article is accessible only to subscribers of that journal. The cost of this would then come out of an author’s research grant, rather than out of library funds, which is where traditional subscriptions usually come from.

Depending on your exact definition of OA, you could also include things like pre-print servers within open access; these are repositories for articles, and discussions about them, before the article is published, though the article often remains on the pre-print server in some form even after publication. A popular pre-print server I have come across a lot in my work is arXiv, which is run by Cornell University Library for specific communities such as physics, maths, etc. I’m not sure how this is funded, but presumably it doesn’t require as much funding because it is so author-driven and no value (formatting, peer review, enhanced HTML content, etc.) is added to the content before it appears.

So far so good then, hopefully you understand how the different models can work. And that if value is to be added, a source of finance is required. Even if no value is added, most researchers want to have their work, once published, held on a secure server that they know will be there “forever”. Even those who use pre-print servers generally still strive to get their work published eventually. And then there is the question of the mark of quality provided by peer review and publication in a respected journal.

There is a movement out there that says that, as research is funded by taxpayers, then taxpayers, aka the general public, should have access to it. Again, I think this is part of a larger discussion about who should have access to data – everyone, or just those qualified to understand it and use it correctly? But I can certainly see the argument for greater availability of information. Strictly speaking, of course, this argument means that only research funded by the UK government would be available to UK residents; they would still have to pay to access US-funded research. Not quite what this group is after, probably, as the majority of the world’s research is carried out outside the UK, though arguably the UK may lead the field in some areas.

Then there are some companies and organisations that are quite keen on OA. The Wellcome Trust is a notable example of this. They actually fund research as well, and they do make extra funds available to researchers to allow them to publish their work on an OA basis. But what about those more commercially oriented companies? They may or may not fund some academic research. They certainly would not allow any of their company secrets to be published in the open literature, pay-to-access or not. But it would be of great benefit for them if other people’s research in fields linked to their own were made freely accessible – they would save a small fortune in journal subscriptions, and would not have to pay any publication fees as they do not publish their own research anyway. Perhaps you can see why they would be keen on OA?

My opinion is that if funders want work to be made publicly available then they need to provide the money to allow this to happen, directly to the researchers who can tie the money to the work in question. Otherwise they can’t really object when charges are made for access to the information.

Partly this can be achieved through government research funders allocating a publication budget to sit alongside the research one. This should not be given directly to any single publisher (this will create a monopoly whereby people funded by that government can only publish with a certain publisher or small group of publishers) but should be made available to the researchers or universities themselves.

Another approach to combat the exploitation of OA information by big corporations would be for a government to impose some sort of research tax on any company within its jurisdiction admitting to performing research and development (R&D). This money could then be used to pay the publishers and authors whose work the companies are using. This would not be a popular choice, and I doubt it is one that most governments would hurry to introduce. But where else is the money going to come from?

The Climate Files

June 15, 2010

Now, I have been itching to write something on this subject since the whole “Climategate” UEA email scandal happened. I had my own views on the whole thing, and actually they have turned out to be pretty much correct. I thought that there really was no scandal and that no-one at UEA had behaved wrongly. The leaked comments about stopping papers from being published and “re-defining what peer-review literature is” seemed to me to be simply researchers who disagree with one point of view, and think it to be wrong, trying to keep that view from propagating. The comments about hitting one of the opponents, though unfortunate, were meant in jest and were not intended for the public eye. In fact, none of the emails were intended for consumption by anyone other than those people to whom they were addressed.

Of course, what has come out is not quite as simple as that, but attending an event at the Royal Institution last night (15th June) has finally given me the chance to find out more about this so-called scandal, and to write about it. The event was called The Climate Files, and was evidently a vehicle for one of the main journalists involved in the saga, The Guardian’s Fred Pearce, to promote a new book he had written. Whether it was successful in this is a question I may answer later.

The debate was split into several sections. First, Pearce had the chance to describe his experiences and ideas on the subject. This half hour or so was followed by two further speakers, each with their own differing experiences and responses to Pearce’s comments: Dr Myles Allen from the University of Oxford, who works on climate change modelling and on how scientific research informs policy, and Dr Adam Corner from Cardiff University, a psychologist looking at the communication of climate change. Pearce then gave a brief response to their arguments, before the debate was opened up to questions from the floor.

The views of the individuals involved in the formal debate can be summed up as follows:

Pearce – Data, in particular the data at the centre of this debate, the instrument record curated by Phil Jones at UEA and now held by the Met Office, should be publicly available. In general, the public should have access to information that affects them and that has been (if only partially) funded by public money. Jones’ main error was his liberal interpretation of the Freedom of Information Act, and his encouragement of colleagues to do the same by deleting emails, etc. Questions are raised about the peer review system.

Allen – The whole scandal didn’t really affect the data, and journalists generally don’t understand the science. Even those who did understand it deliberately misquoted the academics involved to grab headlines. The peer review system works fine. Those who were going to do something useful with the data already had access to it, and it was only the people anticipated to be troublemakers who were not “allowed” to see it. This situation is acceptable.

Corner – Scientists understand the way science works but non-scientists do not; therefore access to scientific data should be limited to those who have qualifications in that area. The whole Climategate affair has not adversely affected public opinion about the realities of climate change in any long-term way.

The audience itself was of some interest. Apart from the people I already knew (my husband and some of his colleagues), much of the rest of the audience was made up of RI members (distinguishable by their advancing age and smart suit-style clothing) and journalists, many of whom introduced themselves by asking questions. For an institution supposedly aiming at enlightening the public, there was a surprisingly low number of members of the public present. The RI might want to consider how and to whom it is marketing such events, and what it is really trying to achieve.

My own views have come out of the debate largely unchanged. In my opinion, Pearce and his journalist colleagues do not understand how the peer review system works, and they have not been successful in conveying a true impression of it to the public. I do not find the system perfect; however, it must be recognised that it is not possible for a single academic, or even a group of academics, to prevent the publication of every paper with which they disagree. The decision to publish is made by the journal editor based on his/her knowledge of the field and using advice from various referees and, when the work is controversial, probably also from journal board members. Papers that go against popular opinion are more difficult to publish (I disagree with Allen here) but, provided they are justified with a good level of evidence, journal editors will publish them to promote debate and, more cynically, to encourage citations to their journal.

The whole Climategate affair was at root a problem of how science and scientific methods, including methods of publishing it, are communicated to the public. To engender trust, it is important that the public are given more information: not only (and possibly not even including) raw data, but also more detail about what is being targeted, and how. Journalists do not appear, in general, to have the time or the will to do this. Pearce spent three months researching his book but has still apparently failed to understand some of the most fundamental principles involved in the quality control of the science; how, then, can we expect journalists who are writing ten stories a day, which may or may not make it out into the world, to invest enough time to understand the complexities of each subject? This motivates my personal belief that there is a place for, and indeed a need for, science communication by scientists rather than, or in addition to, journalism by journalists.

So how did Pearce and his book come out of this debate? Yes, the debate raised the profile of a book I would probably not otherwise have heard of. Yes, I believe that the book probably contains an interesting account of Pearce’s experiences and interviews with the academics involved. No, I did not buy the book and I do not intend to. I would rather give my money to someone who is more open to the idea of improving communication between scientists and the public, than to someone who appears to wish to allow the scientists to be cut out of the debate by taking complex data to people who are not equipped with the understanding to be able to interpret it correctly. Journalists are not experts in anything except journalism, and they need to accept this. Phil Jones’ account of the events would make much more interesting reading, if he ever decides to go public with this.