Category Archives: Social Choice

A new agent symmetry criterion for social choice

It is time to write something longer than a tweet about this research topic, which, in my opinion (trying to be as objective as possible), is very promising and conceptually important. I have worked on it with collaborators Jacky Lo, Geoff Pritchard and especially Rupert Freeman.

Let’s start with the model where we first noticed this new concept, namely one-to-one matching under ordinal preferences (“housing allocation”). Each of N agents has strict preferences over a set of N objects, and we aim to find an assignment, namely a bijection between agents and objects. The famous mechanism called Serial Dictatorship (SD) fixes an order on the agents and lets them choose, in turn, their most preferred object from those remaining. Top Trading Cycles (TTC), on the other hand, starts from a fixed initial allocation and gets each agent to point to their most preferred object. Following the pointers from agents to objects to their current holders, there must be at least one cycle, and we redistribute objects along each cycle so that everyone in the cycle receives the object they pointed to. After removing these agents and their objects, the remaining agents point to their most preferred remaining objects, and so on, until we reach a final assignment. A famous result of Abdulkadiroglu and Sonmez says that if the choosing order for SD is drawn uniformly at random (giving Random Serial Dictatorship, RSD), as is the initial allocation for TTC (which we call TTC-RE), then these different-looking mechanisms induce the same probability distribution over assignments.
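To make the two mechanisms concrete, here is a minimal Python sketch of both (my own illustration, not code from any of the papers mentioned); prefs[i] lists agent i's objects from most to least preferred.

```python
# Minimal sketches of Serial Dictatorship and Top Trading Cycles for
# house allocation (illustration only).

def serial_dictatorship(prefs, order):
    """prefs[i] ranks the objects for agent i, best first;
    order is the fixed choosing order over agents."""
    assigned, taken = {}, set()
    for agent in order:
        # the agent takes their most preferred object still available
        obj = next(o for o in prefs[agent] if o not in taken)
        assigned[agent] = obj
        taken.add(obj)
    return assigned


def top_trading_cycles(prefs, endowment):
    """endowment[i] is the object initially held by agent i."""
    holds = dict(endowment)   # remaining agent -> object currently held
    final = {}
    while holds:
        # each remaining agent points to the holder of their most
        # preferred remaining object
        owner_of = {obj: ag for ag, obj in holds.items()}
        points_to = {ag: owner_of[next(o for o in prefs[ag] if o in owner_of)]
                     for ag in holds}
        # walk the pointers from any agent until an agent repeats: a cycle
        ag, seen = next(iter(holds)), []
        while ag not in seen:
            seen.append(ag)
            ag = points_to[ag]
        cycle = seen[seen.index(ag):]
        # trade around the cycle, then remove those agents and objects
        for a in cycle:
            final[a] = holds[points_to[a]]
        for a in cycle:
            del holds[a]
    return final
```

Running serial_dictatorship with a uniformly random order gives RSD, and top_trading_cycles with a uniformly random endowment gives TTC-RE; the result of Abdulkadiroglu and Sonmez mentioned above says these induce the same distribution over assignments.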

However, imagine yourself as an agent when RSD is used. Whatever your prospects before the choosing order is determined, if you turn out to be at the beginning you know you will always get your first-choice object, and if you turn out to be at the end you know you will have to take whatever is left. There is a clear bias toward the front of the choosing order. On the other hand, it seems clear that no agent in TTC-RE is as systematically advantaged merely by the role the randomization assigns to them (here, the initial endowment).

A standard social choice concept is anonymity, meaning that a mechanism treats agents symmetrically, and the outcomes depend only on the profile of agent preferences, not the identities of the agents. However, full anonymity is not possible in this model: on a unanimous profile, a deterministic mechanism must give the agents different objects, so it cannot treat them identically. RSD is anonymous, but once the choosing order is determined, the resulting SD is not anonymous; similarly, TTC-RE is anonymous but TTC is not. Yet TTC seems more symmetric than SD, so there appears to be an elusive criterion involving agent symmetry. The randomization discussed above turns out to be a red herring: we can define the criterion directly for deterministic mechanisms.

Our idea is to define such a criterion using average-case analysis. For a fixed probability distribution P on profiles (e.g. the uniform distribution), we consider the expected rank distribution for each agent (the distribution, induced by P, of the rank of the object that agent receives); if this is the same for all agents, we call the mechanism order symmetric with respect to P. Some restriction on P is needed, because a point mass concentrated on a unanimous profile prevents every mechanism from being order symmetric. We show that, provided P has sufficient symmetry between agents and between objects, TTC is order symmetric but SD is very far from being so. In other words, once the randomness in the mechanism has been realized, but before preferences are elicited, no agent is systematically disadvantaged.
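As a rough empirical illustration of the criterion (my own sketch, not the analysis in the paper: it compares only mean ranks, a coarser summary than the full rank distribution used in the definition, and it assumes a mechanism function that returns a dictionary from agents to objects):

```python
import random

def estimate_expected_ranks(mechanism, n, trials=10_000, rng=random):
    """Monte Carlo estimate, under uniformly random strict preference
    profiles, of each agent's expected rank (1 = top choice) of the
    object they receive from `mechanism`."""
    totals = [0.0] * n
    for _ in range(trials):
        # draw a uniformly random profile: one random permutation per agent
        prefs = [rng.sample(range(n), n) for _ in range(n)]
        for agent, obj in mechanism(prefs).items():
            totals[agent] += prefs[agent].index(obj) + 1
    return [t / trials for t in totals]

# Hypothetical wiring with the sketches above: order symmetry would show up
# as (near-)equal estimates across agents for TTC but not for SD.
# estimate_expected_ranks(lambda p: serial_dictatorship(p, range(4)), 4)
# estimate_expected_ranks(lambda p: top_trading_cycles(p, {i: i for i in range(4)}), 4)
```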

Not only does this definition clarify the difference between SD and TTC, it also suggests tweaks to existing mechanisms. Full order symmetry may not be achievable in some situations, but we can try to equalize the bias across agents. The Boston mechanism clearly treats agents early in the tiebreaking order much better than those later in it. A simple idea is to reverse the tiebreaking order after the first round; simulations show this reduces the bias substantially.
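As a sketch of the tweak, here is one common (non-adaptive, capacity-one) reading of the Boston, or immediate-acceptance, mechanism with the reversed tiebreak; the exact variant used in our simulations may differ in details.

```python
def boston_reversed_tiebreak(prefs, tiebreak):
    """Immediate-acceptance (Boston) mechanism for house allocation, with the
    tiebreaking order used as given in round 1 and reversed afterwards.
    prefs[i] ranks the objects for agent i, best first; tiebreak is a list
    of agents, earlier = higher priority."""
    n = len(prefs)
    assigned, taken = {}, set()
    for rnd in range(n):
        order = tiebreak if rnd == 0 else list(reversed(tiebreak))
        priority = {ag: k for k, ag in enumerate(order)}
        # each unassigned agent applies to their (rnd+1)-th choice;
        # each still-free object keeps its highest-priority applicant
        winners = {}
        for ag in range(n):
            if ag in assigned:
                continue
            obj = prefs[ag][rnd]
            if obj in taken:
                continue   # rejected this round
            if obj not in winners or priority[ag] < priority[winners[obj]]:
                winners[obj] = ag
        for obj, ag in winners.items():
            assigned[ag] = obj
            taken.add(obj)
    return assigned
```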

I feel that this idea has a bright future, with possible extensions to other social choice models and to practical mechanism design. Now we need a journal or conference to see that: we have had surprising difficulty in getting the work published. The two types of randomness can be confusing, and many researchers seem to think about fairness in terms of outcomes rather than in terms of a procedural criterion of this sort. Some may simply not think it is important, but we should first make sure there is no misunderstanding.

Average-case analysis of random assignment algorithms

With summer scholarship student Jacky Lo, I have just submitted this paper to COMSOC 2016. This is the first time I have seriously looked at resource allocation in social choice. It was interesting and I may do more on this topic in future.

Abstract: The problem of one-sided matching without money (also known as house allocation), namely computing a bijection from a finite set of items to a finite set of agents, each of whom has a strict preference order over the items, has been much studied. Symmetry considerations require the use of randomization, yielding the more general notion of random assignment. The two most commonly studied algorithms, Random Serial Dictatorship (also known as Random Priority, RP) and the Probabilistic Serial rule (PS), dominate the literature on random assignments.
One feature of our work is the inclusion of several new algorithms for the problem. We adopt an average-case viewpoint: although these algorithms do not have the axiomatic properties of PS and RP, they are computationally efficient and perform well on random data, at least in the case of sincere preferences. We perform a thorough comparison of the algorithms, using several standard probability distributions on ordinal preferences and measures of fairness, efficiency and social welfare.
We find that there are important differences in performance between the known algorithms. In particular, our lesser-known algorithms yield better overall welfare than PS and RP and better efficiency than RP, with small negative consequences for envy, and are computationally efficient. Thus provided that worst-case and strategic concerns are relatively unimportant, the new algorithms should be seriously considered for use in applications.

Paradoxes of runoff voting

The New Zealand Labour party will soon have an election for leader of its Parliamentary caucus. The voting system is a weighted form of instant runoff voting (IRV), the single-seat version of Hare’s method, also known as the alternative vote. IRV works as follows. Each voter submits a full preference order over the candidates (I am not sure what happens if a voter doesn’t rank all candidates, but presumably the method can still work). In each round, the candidate with the smallest number of first preferences (the plurality loser) is eliminated and removed from the preference orders, keeping the relative order of the other candidates the same; rounds continue until only one candidate remains. If there is a tie for plurality loser in a round, this must be broken somehow.

The NZLP variant differs from the above only in that not all voters have the same weight. In fact, the caucus (34 members) has a total weight of 40%, the party members (tens of thousands, presumably) have total weight 40%, and the 6 affiliated trade unions have total weight 20%, the weight being proportional to their size. It is not completely clear to me how the unions vote, but it seems that most of them will give all their weight to a single preference order, decided by union leaders with some level of consultation with members. Thus in effect there are 34 voters each with weight 20/17, 6 with total weight 20, and the rest of the weight (total 40) is distributed equally among tens of thousands of voters. Note that the total weight of the unions is half the total weight of the caucus, which equals the total weight of the individual members.
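Here is a minimal sketch of weighted IRV as just described (my own illustration; how the NZLP actually breaks ties and handles truncated ballots is not known to me, so the tiebreak argument is an assumption). Setting all weights to 1 recovers ordinary IRV; for the NZLP rules one would use weight 20/17 for each caucus member, and so on.

```python
def weighted_irv(ballots, tiebreak):
    """ballots: list of (preference_tuple, weight) pairs, each tuple ranking
    all candidates from best to worst.  tiebreak: list of candidates; if
    several candidates tie as weighted plurality loser, the one appearing
    latest in this list is eliminated (an assumption, not the NZLP rule)."""
    remaining = set(tiebreak)
    while len(remaining) > 1:
        tally = {c: 0.0 for c in remaining}
        for prefs, w in ballots:
            top = next(c for c in prefs if c in remaining)  # highest remaining preference
            tally[top] += w
        low = min(tally.values())
        loser = max((c for c in remaining if tally[c] == low), key=tiebreak.index)
        remaining.discard(loser)
    return remaining.pop()   # the last remaining candidate wins
```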

IRV is known to be susceptible to several paradoxes. Of course essentially all voting rules are, but the ones particular to IRV include the participation paradoxes, which have always seemed to me to be especially bad. It is possible, for example, for a candidate to win when some of his supporters fail to vote, but to lose when they come out to vote for him, without any change in other voters’ behaviour (the Positive Participation Paradox). This cannot happen with three candidates, which is the situation we are interested in (we denote the candidates C, J, R). But the Negative Participation Paradox can occur: a losing candidate becomes a winner when new voters who rank him last turn out to vote.

The particular election is interesting because there is no clear front-runner and the three groups of voters apparently have quite different opinions. Recent polling suggests that the unions will mostly vote CJR. In the caucus, more than half have R as first choice, and many apparently have C as last choice. Less information is available about the party members, but it seems likely that C has the most first preferences, followed by J and then R.

The following scenario on preference orders is consistent with this data: RCJ 25%, RJC 7%, CRJ 10%, CJR 30%, JRC 20%, JCR 8%. In this case, J is eliminated in the first round and R beats C in the final round by 52% to 48%. Suppose now that, instead of abstaining, enough previously unmotivated voters decide to vote JRC (perhaps because of positive media coverage for J and a deep dislike of C). Here “enough” means “more than 4% of the original turnout, but less than 30%”. Then R is eliminated in the first round, and C wins easily over J. So by trying to support J and signal displeasure with C, these extra voters help to bring about an outcome that is worse, from their point of view, than if they had stayed at home.
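To check the arithmetic, the weighted_irv sketch above can be run on this scenario, using the percentages as weights; the extra 10 is just one choice of “enough” (any value strictly between 4 and 30 works).

```python
# Re-using the weighted_irv sketch above.
profile = [(("R", "C", "J"), 25), (("R", "J", "C"), 7),
           (("C", "R", "J"), 10), (("C", "J", "R"), 30),
           (("J", "R", "C"), 20), (("J", "C", "R"), 8)]
order = ["C", "J", "R"]   # the tiebreak is never actually needed here

print(weighted_irv(profile, order))                            # -> 'R' (wins 52-48)
print(weighted_irv(profile + [(("J", "R", "C"), 10)], order))  # -> 'C'
```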

The result of the election will be announced within a week, and I may perhaps say more then.

Submission to the Electoral Commission Review of MMP

I missed the first deadline for submissions to the review, but now that the Proposals Paper has been released, attention has been focused on a smaller number of issues. With Michael Fowlie (current COMPSCI 380 project student) I have made a submission based on simulations of what we hope are “realistic” elections. We find that the party vote threshold should be lower than the 4% recommended by the commission. I have been told by the EC that our submission will appear as an appendix to their report, due out on 31 October. It will be interesting to see (a) their recommendations and (b) whether they lead to any actual change.

Addendum: our submission appears as Appendix D in the commission’s final report to Parliament. They went with the 4% recommendation in the end.

2011 referendum simulator: experience so far

Several months ago I realized that the 2011 referendum in NZ on the voting system for parliamentary elections was coming soon. Geoff Pritchard and I developed a simulator with the aim of enabling voters to understand the consequences of a change to another system. In order to do this in a way that is useful to the non-expert, some simplifying assumptions must be made. We had substantial media coverage and some criticism.

Initial surprises:

  • How few people bothered to read the detailed FAQ before criticizing.
  • How many people thought that the simulator was trying to “back-cast” historical elections, and were certain that our results were unrealistic, without giving any evidence.
  • How much the criticisms, even unfounded ones, helped to clarify my understanding of what we had actually done, and suggested further research.
  • How short the attention span of internet visitors is.

In particular I want to respond to comments by David Farrar on his well-known site Kiwiblog. The relevant extract:

Now in 2002 National actually won 21 electorate seats out of 69 or 70. So this model is saying if there were 50 extra electorate seats, National would win 11 fewer seats!!

Why? Because they have come up with a formula based on the last 50 years or so of FPP elections, which they applied to the party vote figures for 2002. They ignored the actual electorate vote. It is a classic academic approach.

The more pragmatic approach, which is what others have done, is to say well if National won 21 electorate seats in 2002 out of 70, then if there 120 seats, their estimated number of seats would be 21*120/70, which is 36 seats.

In fact we did not look at any of the historical FPP elections. The “formula” is based entirely on the MMP party vote from the 2008 election (so yes, we did ignore the electorate vote, for what we think are good reasons).

However, this got me thinking about how we might try to validate our assumptions. One way, which I don’t (yet) claim is rigorous but which makes at least as much sense as the above, is to apply the FPP part of the simulator to the historical FPP elections and scale the 120 seats down to the historical house size (80 for many years, then increasing over time). The results surprised me greatly: they are much better than expected, and this cries out for explanation (“further research”, always good for academics). Here they are. Note that these simulator results explicitly do not use any historical data (seat boundaries and parties have changed, etc.).

1969: real Nat 45, Lab 39; simulator (scaled) Nat 46.9, Lab 37.1
1972: real Nat 55, Lab 32; simulator (scaled) Nat 54.4, Lab 31.6
1975: real Nat 55, Lab 32; simulator (scaled) Nat 55.1, Lab 31.9
1978: real Nat 51, Lab 40, SoCred 1; simulator (scaled) Nat 47.5, Lab 44.5
1981: real Nat 47, Lab 43, SoCred 2; simulator (scaled) Nat 48.3, Lab 43.7
1984: real Nat 37, Lab 56, SoCred 2; simulator (scaled) Nat 37, Lab 58
1987: real Lab 57, Nat 40; simulator (scaled) Lab 50.2, Nat 46.8
1990: real Nat 67, Lab 29, NewLab 1; simulator (scaled) Nat 71, Lab 26
1993: real Nat 50, Lab 45, NZF 2, Alliance 2; simulator (scaled) Nat 53.6, Lab 45.4
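The scaling step itself is just proportional. A tiny sketch (the numbers in the example are invented placeholders, not figures from the actual runs):

```python
def scale_to_house(simulated_120, house_size):
    """Rescale simulated seat totals for a 120-seat house to a historical
    house size, preserving each party's share of the seats."""
    return {party: seats * house_size / 120.0
            for party, seats in simulated_120.items()}

# hypothetical illustration: an invented 120-seat simulator output rescaled
# to an 87-seat house
print(scale_to_house({"Nat": 67, "Lab": 53}, 87))
```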

The Complexity of Safe Manipulation under Scoring Rules

This paper will appear in Proceedings of IJCAI ’11 and be presented at the conference in Barcelona in July.

Abstract: Slinko and White have recently introduced a new model of coalitional manipulation of voting rules under limited communication, which they call “safe strategic voting”. The computational aspects of this model were first studied by Hazon and Elkind, who provide polynomial-time algorithms for finding a safe strategic vote under k-approval and the Bucklin rule. In this paper, we answer an open question of Hazon and Elkind by presenting a polynomial-time algorithm for finding a safe strategic vote under the Borda rule. Our results for Borda generalize to several interesting classes of scoring rules.

The probability of safe manipulation

This paper (joint with Reyhaneh Reyhani) will be presented at COMSOC 2010. It deals with the concept of “safe manipulation”, introduced by Slinko and White at COMSOC 2008: a strategic vote is safe if, assuming voters with other preferences vote sincerely, it never leads to a worse outcome for the manipulator, irrespective of how many like-minded voters also cast the same strategic vote.

Abstract: The concept of safe manipulation has recently been introduced by Slinko and White. We show how to compute the asymptotic probability that a safe manipulation exists for a given scoring rule under the uniform distribution on voting situations (the so-called Impartial Anonymous Culture). The technique used is computation of volumes of convex polytopes. We present explicit numerical results in the 3-candidate case.

Download author copy

AAGTSC 2010

The newly formed Centre for Mathematical Social Sciences at the University of Auckland (of which I am a founding member) recently held its first event, a workshop on Algorithmic Aspects of Game Theory and Social Choice. There were some interesting talks, and some good colleagues from overseas attended. I enjoyed it very much. Thanks to all the participants, especially those coming from afar.

New measures of the difficulty of manipulation of voting rules

Reyhaneh Reyhani, Geoffrey Pritchard and I have just submitted this to a special issue of Journal of Autonomous Agents and Multi-Agent Systems concerning computational social choice.

Download author version

Abstract: We introduce new measures of manipulability of anonymous voting rules and argue for their superiority over some commonly used measures. We give a simple common framework that describes these measures and connects them to recent literature. We discuss their computation and present numerical results that allow for comparison of various common voting rules.