Thursday, November 13, 2014

one experiment "too many"

In the process of writing up a paper I often come to the point where my data is not sufficient to prove or disprove the theory I have in mind. There are two options. Either speculate on the basis of the data at hand and maybe the concepts available in the literature. Or go back to the lab and do an experiment that clarifies the topic. The latter is certainly the better option from a scientific point of view. However, there is the risk that the experiment does not show what I expect. This might mean that I have to adjust my theory a bit and then the paper is good to go. But it can just as well mean that my work turns out to have a bigger flaw and I have to chuck the paper, chuck the data and probably several weeks or months of work. From a scientific point of view it is certainly better to find out whether the approach/methodology/sample is flawed before anything goes into publication. From the publish-or-perish point of view it can break your neck! Especially for early career researchers who don't have ten students working on different topics, it is crucial that the publication stream is steady and continuously growing. So from the science perspective there can't be enough experiments done. From the surviving-in-academia perspective it often seems necessary to publish first and maybe do the additional experiment later. Or to just move on to the next topic.
I have had quite a few good discussions about the view that one can't publish negative data. Yet negative data would certainly help others to avoid mistakes and save them a lot of time, money and nerves. But science is all about breakthroughs and world-changing discoveries. At least it has to sound like that. In reality breakthroughs are rare and scientific progress is mostly made in small steps. Digging into a topic until you own it takes time and mental space. The pressure to publish both at high quality and in high quantity is counterproductive and often enough leads to going for the speculative paper instead of risking having to chuck the work.
This is a very big flaw in the scientific system: publish positive results or perish!

Saturday, November 1, 2014

patience, patience, patience, breathe!

A common post-doc contract duration in my discipline is somewhere between one and three years.
Developing a solid project that will make an awesome application for independent research group funding, or that can convince a search committee to give away an academic position, takes about 4-6 months.
Writing up said project idea into the awesome application - or better, applications, considering the success rates these days - and handing it in takes 2-6 months depending on the scheme.
Waiting for the outcomes takes at least 6 months, sometimes even a year or longer!
Academic folks, who wonder why so many post-docs drop out and look for other ways to earn their money: do the math! It is not necessarily that they are not good enough, not persistent enough, not resilient enough. Often it is just that the common post-doc contract duration is not long enough to wait for the outcomes of another proposal.
Why am I pointing this out? The waiting time for one of my proposals was just extended by four months! This does not sound like much to someone on a continuous contract. But for someone who has timed all proposals such that the outcomes should arrive at least half a year before the contract runs out, an extra four months brings me close to gnawing on my fingernails!