
The Complexity of Safe Manipulation under Scoring Rules

This paper will appear in Proceedings of IJCAI ’11 and be presented at the conference in Barcelona in July.

Abstract: Slinko and White have recently introduced a new model of coalitional manipulation of voting rules under limited communication, which they call \emph{safe strategic voting}. The computational aspects of this model were first studied by Hazon and Elkind, who provide polynomial-time algorithms for finding a safe strategic vote under $k$-approval and the Bucklin rule. In this paper, we answer an open question of Hazon and Elkind by presenting a polynomial-time algorithm for finding a safe strategic vote under the Borda rule. Our results for Borda generalize to several interesting classes of scoring rules.
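For readers unfamiliar with the setting, here is a minimal sketch of how positional scoring rules such as Borda and $k$-approval assign points. This is my own illustration in Python, not the safe-manipulation algorithm of the paper, and all names in it are mine:

```python
# Illustration only: positional scoring rules, not the paper's algorithm.
def scores(profile, weights):
    """Total score of each candidate under a positional scoring rule.

    profile: list of ballots, each a ranking of the candidates, best first.
    weights: score vector; Borda for m candidates is (m-1, m-2, ..., 0),
             k-approval is k ones followed by m-k zeros.
    """
    totals = {}
    for ballot in profile:
        for position, candidate in enumerate(ballot):
            totals[candidate] = totals.get(candidate, 0) + weights[position]
    return totals

# Three voters ranking candidates a, b, c.
profile = [("a", "b", "c"), ("b", "c", "a"), ("b", "a", "c")]
print(scores(profile, (2, 1, 0)))  # Borda: {'a': 3, 'b': 5, 'c': 1}
print(scores(profile, (1, 0, 0)))  # 1-approval: {'a': 1, 'b': 2, 'c': 0}
```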

The lecturer

This may not be the final form, but I will post it now so as not to forget.

To the tune of “Killing me softly”

I heard he gave a good talk
I heard he had some style
And so I went to see him
To listen for a while

And there he was this (youngish) man
No stranger to my eyes

Numbing my brain with his figures
Straining my eyes with his slides
Driving me crazy with errors
Killing me slowly, with his talk, …

I couldn’t reach the aisle
Surrounded by the crowd
Seemed like he’d brought his paper
And read each word out loud

I prayed that he would finish
But he just kept right on

[chorus]

I tried to get attention
I tried an icy stare
But he just looked right through me
As if I wasn’t there

And he just kept on talking
Barely audibly

[chorus]

Asymptotics of coefficients of multivariate generating functions: improvements for multiple points

This has been uploaded to arXiv and submitted to the Online Journal of Combinatorics. Interestingly, it was rejected by the Electronic Journal of Combinatorics without refereeing, as being out of scope. It improves on previous work mainly by giving explicit formulae. The main work associated with this paper is the SAGE implementation by Alex Raichev. Computation of higher-order asymptotic expansions by hand is essentially impossible, hence the paucity of numerical or explicit results in published papers.

Download author copy
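To give a flavour of what these expansions look like in the simplest case, here is a toy first-order check in plain Python. This is not the Sage package mentioned above, whose interface I won't guess at; it only verifies the classical fact that the diagonal coefficients of $F(x,y) = 1/(1-x-y)$ are $\binom{2n}{n}$, with first-order asymptotic $4^n/\sqrt{\pi n}$:

```python
# Toy check of a classical first-order asymptotic, not the Sage package:
# for F(x,y) = 1/(1-x-y), the diagonal coefficient [x^n y^n] F = binomial(2n, n)
# satisfies binomial(2n, n) ~ 4^n / sqrt(pi*n).
from math import comb, pi, sqrt

for n in (10, 100, 500):
    exact = comb(2 * n, n)
    approx = 4 ** n / sqrt(pi * n)
    print(n, approx / exact)  # ratio tends to 1; the error is O(1/n)
```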

Wikileaks

There is too much to say, but no time to do it. Wikileaks, whatever the motivation and personal foibles of the people involved, has done the world a service and committed no crime of which I am aware. The reaction of politicians, especially in the USA, has been very revealing, and the withdrawal of support by companies such as Visa, Paypal, Mastercard and Amazon, presumably under political pressure, has been extremely disappointing. Too few people seem to be drawing the correct conclusion: if you don’t want people to find out that you have done bad things, don’t do them! I am still trying to work out how best to involve myself in this issue. For now, here are some links:

WLCentral information clearinghouse

List of Wikileaks mirror sites

Background piece on Assange and Wikileaks  by Raffi Khatchadourian

Julian Assange has made us all safer by Johann Hari

Blog of Glenn Greenwald at Salon.com

The three faces of Uncle Sam by Michael Brenner

The decline and fall of the American empire by Alfred W. McCoy

State Department Humour (if only it weren’t unintentional)

University rankings

For lack of a suitable alternative venue, I am putting this opinion piece, destined for a University of Auckland audience, here. Some interesting references related to this topic:
http://en.wikipedia.org/wiki/College_and_university_rankings
http://www.insidehighered.com/blogs/globalhighered/

Recently, a vast increase in attempts to rank universities worldwide according to various criteria has provoked debate. Despite their deficiencies, such rankings seem unlikely to disappear, given their usefulness to outsiders (such as international students) in making basic quality judgments (although some universities have apparently boycotted some rankings). Among the many issues involved are: what are they measuring? How accurate and unbiased is the data? How much is factual and how much opinion? How is the data aggregated? Can the rankings be manipulated? Given space limitations, I want to focus here on a few of these, and to exhort us all to apply our scholarly expertise to help improve the calculation and interpretation of such rankings.

The most famous rankings are probably those conducted by Times Higher (from 2010 with Thomson Reuters, using a different methodology; formerly with Quacquarelli Symonds), QS (continuing from 2010 with the same methodology), and Shanghai Jiao Tong. The first two aim to measure aspects other than research. Other, research-only, rankings seen as reasonably credible are conducted by Taiwan’s HEEACT, the University of Leiden, and, very recently, a group from the University of Western Australia. Most rank at the level of the whole university, and also (for research) at the faculty/discipline level. To give an idea of the variability, in 2010 QS ranked UoA 68th, THE 145th, HEEACT 287th, Leiden 272nd, and UWA 172nd.

The methodology of these rankings is similar. Several numerical measures are computed (either objectively or from opinion surveys), normalized in some way, and aggregated according to various weightings, in order to arrive at a single number which can be used for ranking. These final numbers are affected by the quality of the data, institutional game-playing, and the chosen weights, in addition to the actual metrics used.
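As a concrete (and entirely hypothetical) illustration of this pipeline, the sketch below min-max normalizes two raw metrics that live on incompatible scales and then combines them with a weighted sum. The data, weights, and names are all invented:

```python
# A minimal sketch of the generic normalize-then-aggregate pipeline
# (my own illustration, not any ranking's published calculation).

def min_max_normalize(values):
    """Rescale raw metric values linearly onto a 0-100 scale."""
    lo, hi = min(values), max(values)
    return [100 * (v - lo) / (hi - lo) for v in values]

def overall(normalized_scores, weights):
    """Weighted sum of one institution's normalized metric scores;
    weights should sum to 1."""
    return sum(w * s for w, s in zip(weights, normalized_scores))

# Hypothetical data: three institutions, two metrics on incompatible scales
# (citations per staff member, and a 0-10 survey score).
citations = [4.2, 9.1, 6.5]
survey = [6.0, 4.5, 8.0]
norm = list(zip(min_max_normalize(citations), min_max_normalize(survey)))
weights = (0.6, 0.4)
for name, inst_scores in zip(("A", "B", "C"), norm):
    print(name, round(overall(inst_scores, weights), 1))
```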

A wide range of objective metrics is used, many of which can be manipulated by institutions, and have been, often notoriously, in recent years. They all measure (sometimes subtly) different things: for example, citation data can be used to measure overall citation impact (favouring large institutions), impact per staff member, or relative impact (which is field-dependent), and/or summarized via statistics such as the $h$-index, the $g$-index, etc. The citation measures used by the various rankings placed UoA between 99th and around 300th in 2010. Measuring teaching performance by any objective criterion is considered extremely difficult, and very crude proxies (such as student-staff ratios) are used.
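For concreteness, here are the two citation statistics mentioned above, implemented from their standard definitions (my own illustration; no ranking publishes its code):

```python
# Standard definitions of the h-index and g-index for a publication list.

def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cs = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cs) if c >= i + 1)

def g_index(citations):
    """Largest g such that the top g papers have >= g**2 citations in total."""
    cs = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cs):
        total += c
        if total >= (i + 1) ** 2:
            g = i + 1
    return g

papers = [10, 8, 5, 4, 3, 0]
print(h_index(papers))  # 4: four papers each have at least 4 citations
print(g_index(papers))  # 5: the top 5 papers have 30 >= 25 citations in total
```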

Subjective measures have their own problems. THE and QS use opinion surveys to measure reputation. The alleged positive aspect of this is that a university’s reputation is difficult for the institution itself (or a competitor) to change by strategic action (manipulating the ranking). The negative aspect is that an institution’s real improvements may be very slow to be reflected in such surveys, and reputation may depend on factors other than real quality. THE gives 19.5% weight to research reputation and 15% to teaching reputation, while QS gives 40% to overall reputation and 10% to reputation among employers. My personal experience, as one of the nearly 14,000 academics surveyed by THE, was that I had very little confidence in the accuracy of my own opinions on teaching in North American universities.

The importance of the weights is shown by the 2010 THE performance of UoA. In the five categories listed, with respective weights 30, 2.5, 30, 32.5, and 5, and respective normalized scores 34.8, 94.3, 39.2, 71.8, and 61.1, UoA achieved an overall score of 56.1. Different choices of the weights could lead to any score between 34.8 and 94.3, with the consequent ranking changing from well below the top 200 to inside the top 20.
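Here is a quick sketch of that weight sensitivity, using the five normalized scores quoted above. A plain weighted mean of these pillar scores does not reproduce the published 56.1 (presumably THE’s methodology involves further normalization steps I won’t attempt to replicate), but it does show how extreme weightings sweep the overall score across the full range from 34.8 to 94.3:

```python
# Weight sensitivity of the aggregate, using the quoted 2010 THE pillar scores
# for UoA. Not THE's actual calculation; a plain weighted mean for illustration.
pillars = (34.8, 94.3, 39.2, 71.8, 61.1)

def weighted_mean(scores, weights):
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

print(weighted_mean(pillars, (30, 2.5, 30, 32.5, 5)))  # THE's stated weights: ~50.9
print(weighted_mean(pillars, (0, 1, 0, 0, 0)))         # all weight on the best: 94.3
print(weighted_mean(pillars, (1, 0, 0, 0, 0)))         # all weight on the worst: 34.8
```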

It is clear that substantial interpretation of the rankings is required in order to make any use of them, and the media’s focus on simplistic analysis using a single final rank is not helpful to universities or to any of their stakeholders. We should, as a university and as individual researchers:

* Ensure that we understand that different rankings measure different things and that aggregating rankings is highly problematic, and communicate this accurately to the media and to stakeholders.
* Be honest. Some of these rankings have been singled out for comment by the Vice-Chancellor and Deputy VC (Research), while others have been publicly ignored. We should (at least internally) look at the unfavourable ones too. UoA has slowly declined in most rankings over the last few years. Field-specific information reveals that some of our faculties score much higher than others. Let’s use the rankings not only for advertising, but for reflection on our own performance, especially as regards research.
* Get involved, to ensure that the ranking methodology is as accurate as possible. Social scientists have a role to play here, in elucidating just what each measure is supposed to measure, what its axiomatic properties are, and how the measures should be aggregated. THE and QS claim to have discussed methodology with hundreds of people; I doubt that included anyone from UoA.
* Demand transparency of methodology and timely provision of unaggregated data to the public, to enable analysis and reproduction of results. I have not discussed the Shanghai rankings in detail because their results are apparently not reproducible.
* Demand that ranking organizations justify the costs to the university of providing data. As with many journals, universities supply the data for free, and private companies then control it and sell it back to us. Why should we tolerate this?

Knowing your own place in the world (of science)

I just found some very interesting graphics that show how relatively (in)active various areas of science are, in terms of citations and volume of publication. I always knew there were a lot of biologists and medical researchers, but just how many was surprising. Now I have a better understanding of how universities work.

http://well-formed.eigenfactor.org/treemap.html

http://well-formed.eigenfactor.org/radial.html

Competitive NZ?

Shaun Hendy’s blog has some more comments on the theme of science, technology and innovation, prompted by our slide down the World Economic Forum rankings. His idea that scale is the main problem seems consistent with what I see in Europe: it is so easy to confer with the right people here (low spatial transaction costs!). Government policies have a huge role to play. The comparison made in the comments with Singapore is interesting, although it is very easy to run a country when you don’t have to be democratic about it. I have been talking to yet another successful Israeli; the amount of money and hard work put into science and technology there is staggering.

I think it is likely that the real problem is simply culture. We don’t value investment in these things enough, owing partly to ignorance and partly to different values (sport and leisure being considered more important than intellectual pursuits). I hope things change soon; I don’t really want to leave, but the pull of the rest of the world is growing for people like me.