
Submission to the Electoral Commission Review of MMP

I missed the first deadline for submissions to the review, but the release of the Proposals Paper has since focused attention on a smaller number of issues. With Michael Fowlie (current COMPSCI 380 project student) I have made a submission based on simulations of what we hope are “realistic” elections. We find that the party vote threshold should be lower than the 4% recommended by the commission. I have been told by the EC that our submission will be an appendix to their report due out on 31 October. It will be interesting to see (a) their recommendations and (b) whether they lead to any actual change.

Addendum: our submission appears as Appendix D in the commission’s final report to Parliament. They went with the 4% recommendation in the end.

Open access update

There is a lot of new material out there, and some older stuff I hadn’t yet seen. These may be useful.

Krakow trip

Last week I was in Krakow, Poland for COMSOC 2012. The meeting itself was intense (perhaps too many talks) and useful – I need time to digest it. The day before, I walked around the central city as a tourist. My expectations were initially low, then built up by a little reading of Wikipedia. I found the Wawel Cathedral to be a good introduction to Polish history, which seems to have been rather difficult, full of invasion and struggles for nationhood. The Schindler factory museum of wartime Krakow was overwhelming, and emotionally draining, but very worthwhile. I didn’t manage to go / couldn’t face going to Auschwitz / Oswiecim, one of the main tourist attractions nearby. However, I now have a better appreciation for the day-to-day brutality of Nazi occupation and more sympathy for the concept of Israel.

Krakow seems to have a large tourist infrastructure and overall it seemed to be doing well economically. English is widely spoken, which was just as well because my efforts to learn Polish were fairly ineffective. It is much harder than I had guessed. I recommend a visit to the city if you are planning to travel in Central/Eastern Europe.

Plagiarism: more, or just easier to detect?

In the last year I have seen a flood of stories on plagiarism, and academic misconduct more generally. In the world of journalism, there have been some high-profile cases: Johann Hari, Jonah Lehrer and Fareed Zakaria. In politics, several German politicians have recently been affected, including the Defence Minister.

In Zakaria’s case, it seems at least plausible that the offence was committed by an assistant, and he signed his name to something without reading it carefully. As pointed out by Richard Bradley,

people who seem like they’re doing much more than most of us could do in the same amount of time…probably aren’t really doing it.

There have been a few other recent cases of very “productive” and rather famous academics getting into trouble by overextending themselves, for example Marc Hauser. There has even been a case at my own university. Some of these cases involve plagiarism, and others the falsification of data. The common thread is an attempt to cut corners and have the rewards without the hard work. Retraction Watch is often a good source of information on such cases.

In the academic sphere there have been some amazing developments in cheating lately: Hyung-In Moon’s attempt at influencing the peer review process is the latest one I know. It seems that the rewards for cheating are overpowering the penalties for being caught. Perhaps we need to work harder on ostracism, and explain to these people that it’s OK not to appear to be superhuman, or in fact better than you really are.

Division of labour in prepublication peer review

It seems to me to be a good idea to separate out the traditional refereeing (pre-publication review) functions. In mathematics at least, a paper should be “true, new and interesting”. It is often easier to check the first rather than the second, especially for less experienced researchers. It makes sense for more experienced researchers to be asked for a quick opinion on how interesting and new a paper is, while more junior ones check correctness. This has some other advantages: if it becomes widespread, authors will have an incentive to write in a way that is understandable to new PhDs or even PhD students, which will probably improve exposition quality overall. It would also reduce the load on senior researchers (I received an email yesterday from a colleague who said he had just received his 40th refereeing request for the year!). Doing a good job as a junior researcher could lead to a good CV item, so there would be an incentive to participate. Some sort of rating of reviewers will probably need to be undertaken: just as with papers that pass “peer review”, post-publication feedback from the whole community will be involved.

Binary search rules!

This afternoon I visited a branch library with my sons and checked out a large number of books for them, using the self-checkout. On leaving, we set off the alarm, because at least one book had not been correctly scanned. A librarian seized the pile of books, and proceeded to determine the offending book by binary search using the alarm (his algorithm may not have worked completely if more than one book was unscanned, so he performed a final check on the original pile, after scanning the one he found). His colleague was amazed and said that everyone else uses (in effect) sequential search of the receipt, checking it with the books in hand. I was amazed that someone this clever was working as a librarian, and that I had finally found a real-life application of an algorithm – I try to give many such examples when teaching algorithms courses, but they always have a slightly contrived feel to them.
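The librarian's strategy can be sketched in a few lines of code. This is my own minimal sketch, not anything from the actual incident: it treats the security gate as a yes/no oracle (`alarm_gate`, a hypothetical function standing in for walking a pile of books through the gate) and assumes exactly one book triggers it, which is why the librarian's final re-check of the whole pile was a sensible safeguard.

```python
def find_unscanned(books, alarm_gate):
    """Return one unscanned book from `books`, assuming exactly one
    triggers the gate. Uses O(log n) gate checks rather than the
    O(n) sequential search the other librarian described."""
    lo, hi = 0, len(books)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if alarm_gate(books[lo:mid]):
            # Alarm rang: the offending book is in the left half.
            hi = mid
        else:
            # Left half is clean, so the culprit is in the right half.
            lo = mid
    return books[lo]
```

With one unscanned book among sixteen, this finds it in four trips through the gate instead of up to sixteen; if two or more books were unscanned, it still returns one culprit per run, which is why a final check of the full pile is needed.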

Peer review

I intend to present ideas (mostly not my own) about how to improve the current peer review system. This is a background post.

What is the purpose of peer review of scholarly publications?

  • Certification of correctness of the work
  • Filtering out work of low interest to the research community, to allocate attention more efficiently
  • Improving the quality of the work

Michael Eisen (among others) has argued that the current system is broken. Michael Nielsen debunks three myths about scientific peer review. Daniel Lemire has several interesting posts, including: the perils of filter-then-publish, peer review is an honor-based system.

Certification is still important, and very discipline-specific. In (parts of?) physics it seems to be a fairly low standard: not obviously wrong. The journal PLoS ONE seems to check more rigorously for correctness, but is very relaxed on significance (see here). Mathematics journals I have experience with seem to be more finicky, and traditional journals with a high reputation are much tougher in assessing significance, often rejecting without looking at the technical details.

It seems clear to me that improvements in the current system are sorely needed. Excessive attention to whether work is “interesting” risks reducing science to a popularity contest, and there are too many boring but correct papers to read. Who has time to help others improve their work, if refereeing is anonymous and there is so much pressure to publish yourself?


Better citation indices

Daniel Lemire and colleagues are aiming to find a better algorithm to measure the importance of research articles by incorporating the context in which the citation is made (for example, distinguishing between “courtesy citations” inserted to placate referees and real pointers to important work). They need some data, and it looks like a low burden for each researcher to provide it. Check out this site for more.

I think we have passed the point of no return with bibliometrics in evaluating researchers and articles. They will be used, so it is to our benefit to ensure that less bad ones are used.