
NZAU Open Research conference

I attended most of the three days of the conference. The technology surprised me: attendees wrote on collaborative online notepads, and tweets were sent from one room to the other to find out what people were doing. There was enough activity on Twitter that the hashtag #aunzor was hijacked by some unsavoury spammers. Notable speakers included Aidan Byrne, CEO of the Australian Research Council (via Google Hangout), and Nat Torkington. Mat Todd from Sydney gave a very interesting talk about open lab-book science. I am still thinking about how open research methods would apply in mathematics. The overall standard of discussion was high, and in the end it resulted in a declaration (soon to be finalized) in support of open research. Overall it was a very well organized and inspiring meeting – congratulations to the organizers. It was also my first-ever panel appearance at a conference.

Open access in 2013

There has been much news already this year, some of it disturbing. The Andrew Auernheimer case and the Aaron Swartz case (which ended tragically) show that there can be serious consequences to encouraging openness. Fortunately, the scholarly community can fix the current problems with access by itself, given enough will. Governments seem to be realizing how important the issue is, and the Australian Research Council is the latest funder to enact an OA mandate (albeit a flawed one). It will be interesting to see how long it takes New Zealand to follow suit.

I have been invited to participate in the Open Research conference in Auckland, 6–7 February, and am looking forward to it. It’s hard to quantify, but I have a strong feeling that 2013 will be the year in which open access is finally regarded as a problem that is essentially solved. We can then turn our attention to the more serious problem of filtering the huge amount of freely available information: “traditional” peer review is not working well, and this problem will persist independently of access. A radical rethinking of careerism and a reconnection with the true spirit of scholarship are needed: the demand side of publishing must be addressed.

New on arXiv.org is an excellent article by Bjoern Brembs and Marcus Munafo – Deep Impact: Unintended consequences of journal rank.

Quality of open access outlets

The “gold OA” (pay-to-publish) model of scientific publishing has an obvious downside – in a market driven by producers rather than consumers, some pretty low-quality material can be produced. Many organizations seemingly exist only to part foolish authors from their money, with very little quality control and a variety of unscrupulous practices; they often solicit submissions by email. It is essential for authors to consult Beall’s list of predatory open access publishers (and his criteria for inclusion on that list) before getting involved with any such outlet.

Reinventing Discovery

I highly recommend Reinventing Discovery by Michael Nielsen (published in 2011, but I have only just read it – it’s hard to stay on the cutting edge). He “wrote this book with the goal of lighting an almighty fire under the scientific community”. His overview of Open Science, of which Open Access to publications is just one component, is very compelling and optimistic, without losing sight of the difficulties.

Plagiarism: more, or just easier to detect?

In the last year I have seen a flood of stories about plagiarism, and academic misconduct more generally. In the world of journalism there have been some high-profile cases: Johann Hari, Jonah Lehrer and Fareed Zakaria. In politics, several German politicians have recently been affected, including the Defence Minister.

In Zakaria’s case, it seems at least plausible that the offence was committed by an assistant, and that he signed his name to something without reading it carefully. As Richard Bradley points out,

people who seem like they’re doing much more than most of us could do in the same amount of time…probably aren’t really doing it.

There have been a few other recent cases of very “productive” and rather famous academics getting into trouble by overextending themselves, Marc Hauser being one example. There has even been a case at my own university. Some of these cases involve plagiarism, others the falsification of data; the common thread is an attempt to cut corners and reap the rewards without the hard work. Retraction Watch is often a good source of information on such cases.

In the academic sphere there have been some amazing recent developments in cheating: Hyung-In Moon’s attempt to influence the peer review process is the latest one I know of. It seems that the rewards for cheating are overpowering the penalties for being caught. Perhaps we need to work harder on ostracism, and explain to these people that it’s OK not to appear superhuman, or better than you really are.

Division of labour in prepublication peer review

It seems to me a good idea to separate out the traditional refereeing (pre-publication review) functions. In mathematics at least, a paper should be “true, new and interesting”. The first is often easier to check than the other two, especially for less experienced researchers. It therefore makes sense to ask more experienced researchers for a quick opinion on how interesting and new a paper is, while more junior ones check correctness.

This has some other advantages. If the practice becomes widespread, authors will have an incentive to write in a way that is understandable to new PhDs or even PhD students, which will probably improve exposition quality overall. It would reduce the load on senior researchers (I received an email yesterday from a colleague who said he had just received his 40th refereeing request for the year!). And doing a good job as a junior researcher could become a good CV item, so there would be an incentive to participate. Some sort of rating of reviewers will probably need to be undertaken: just as with papers that pass “peer review”, post-publication feedback from the whole community will be involved.

Peer review

I intend to present ideas (mostly not my own) about how to improve the current peer review system. This is a background post.

What is the purpose of peer review of scholarly publications?

  • Certification of correctness of the work
  • Filtering out work of low interest to the research community, to allocate attention more efficiently
  • Improving the quality of the work

Michael Eisen (among others) has argued that the current system is broken. Michael Nielsen debunks three myths about scientific peer review. Daniel Lemire has several interesting posts, including “the perils of filter-then-publish” and “peer review is an honor-based system”.

Certification is still important, and very discipline-specific. In (parts of?) physics the standard seems fairly low: not obviously wrong. The journal PLoS ONE seems to check more rigorously for correctness, but is very relaxed about significance (see here). The mathematics journals I have experience with seem to be more finicky, and traditional journals with high reputations are much tougher in assessing significance, often rejecting a paper without looking at the technical details.

It seems clear to me that improvements to the current system are sorely needed. Excessive attention to whether work is “interesting” risks reducing science to a popularity contest, yet there are too many correct but boring papers for anyone to read them all. And who has time to help others improve their work, when refereeing is anonymous and there is so much pressure to publish oneself?