Category Archives: Research

Multi-district preference modelling

This paper with longtime coauthor Geoffrey Pritchard is an important step toward systematic design of electoral systems.

Abstract: Generating realistic artificial preference distributions is an important part of any simulation analysis of electoral systems. While this has been discussed in some detail in the context of a single electoral district, many electoral systems of interest are based on districts. Neither treating preferences between districts as independent nor ignoring the district structure yields satisfactory results. We present a model based on an extension of the classic Eggenberger-Pólya urn, in which each district is represented by an urn and there is correlation between urns. We show in detail that this procedure has a small number of tunable parameters, is computationally efficient, and produces “realistic-looking” distributions. We intend to use it in further studies of electoral systems.
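
To give a feel for the construction, here is a minimal Python sketch of coupled Pólya urns. The specific coupling (reinforcing from a shared national urn with probability rho) and all names are illustrative assumptions of mine, not the mechanism from the paper.

    import random

    def correlated_polya(num_districts, parties, steps, reinforcement=1,
                         rho=0.3, seed=0):
        """Coupled Eggenberger-Polya urns, one per district (illustrative).

        Classic Polya step: draw a ball, return it plus `reinforcement`
        extra balls of the same colour. Coupling (an assumption of this
        sketch): with probability rho the colour is drawn from a shared
        national urn rather than the district's own urn.
        """
        rng = random.Random(seed)
        urns = [{p: 1 for p in parties} for _ in range(num_districts)]
        national = {p: 1 for p in parties}

        def draw(urn):
            r = rng.uniform(0, sum(urn.values()))
            for party, weight in urn.items():
                r -= weight
                if r <= 0:
                    return party
            return party  # guard against float rounding

        for _ in range(steps):
            for urn in urns:
                source = national if rng.random() < rho else urn
                party = draw(source)
                urn[party] += reinforcement
                national[party] += reinforcement

        # Normalise counts to vote-share distributions per district.
        return [{p: w / sum(urn.values()) for p, w in urn.items()}
                for urn in urns]

    shares = correlated_polya(5, ["A", "B", "C"], steps=500)
    for district, dist in enumerate(shares):
        print(district, {p: round(v, 3) for p, v in dist.items()})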

New algorithms for matching problems

A preprint with Jacky Lo, just submitted to arXiv and available on my publications page. The paper was ultimately inspired by a Christmas party game, which shows that the line between work and play is hard to draw.

Abstract: The standard two-sided and one-sided matching problems, and the closely related school choice problem, have been widely studied from an axiomatic viewpoint. A small number of algorithms dominate the literature. For two-sided matching, the Gale-Shapley algorithm; for one-sided matching, (random) Serial Dictatorship and the Probabilistic Serial rule; for school choice, Gale-Shapley and the Boston mechanisms. The main reason for the dominance of these algorithms is their good (worst-case) axiomatic behaviour with respect to notions of efficiency and strategyproofness. However, if we shift the focus to fairness, social welfare, tradeoffs between incompatible axioms, and average-case analysis, it is far from clear that these algorithms are optimal. We investigate new algorithms, several of which have not appeared (to our knowledge) in the literature before. We give a unified presentation in which algorithms for two-sided matching yield one-sided matching algorithms in a systematic way. In addition to axiomatic properties, we investigate agent welfare using both theoretical and computational approaches. We find that some of the new algorithms are worthy of consideration for certain applications. In particular, when considering welfare under truthful preferences, some of the new algorithms outperform the classic ones.
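
To make the setting concrete, here is a short sketch of the simplest of the classic algorithms mentioned above, (random) Serial Dictatorship for one-sided matching. The names and data layout are my own choices, not code from the paper.

    import random

    def random_serial_dictatorship(preferences, seed=None):
        """(Random) Serial Dictatorship for one-sided matching.

        preferences: dict mapping each agent to a list of items,
        most-preferred first. Agents pick in a uniformly random
        order, each taking their favourite item still available.
        """
        rng = random.Random(seed)
        agents = list(preferences)
        rng.shuffle(agents)
        remaining = set().union(*preferences.values())
        assignment = {}
        for agent in agents:
            for item in preferences[agent]:
                if item in remaining:
                    assignment[agent] = item
                    remaining.remove(item)
                    break
        return assignment

    prefs = {"ann": ["x", "y", "z"],
             "bob": ["x", "z", "y"],
             "cy":  ["y", "x", "z"]}
    print(random_serial_dictatorship(prefs, seed=42))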

A blockage cleared

After a very long time, some (revised versions of) papers have emerged: two on distance rationalizability of voting rules with Benjamin Hadjibeyli, one on signed networks with Samin Aref, and one on “wisdom of crowds” with Patrick Girard and Valery Pavlov.

There is a long list of things still to do, but this is a big relief. These papers had gone through substantial revisions and years of work in some cases. There is also a recent paper on legislation networks with Neda Sakhaee and Golbon Zakeri.

My papers can always be found on my publications page.

COMSOC 2016

I attended this biennial meeting for the fourth consecutive time. Attendance in Toulouse was substantially larger than at previous editions. The organization, led by Umberto Grandi, was excellent.

Toulouse was busy with tourists, augmented by fans of the Euro 2016 football teams, and evenings downtown were very noisy. The city seems like a pleasant, slightly provincial place.

There were many interesting talks, but I found it hard to maintain concentration. The poster sessions were surprisingly interesting; perhaps in future much more time should be devoted to these, with shorter talks.

Average-case analysis of random assignment algorithms

With summer scholarship student Jacky Lo, I have just submitted this paper to COMSOC 2016. This is the first time I have seriously looked at resource allocation in social choice. It was interesting and I may do more on this topic in future.

Abstract: The problem of one-sided matching without money (also known as house allocation), namely computing a bijection from a finite set of items to a finite set of agents each of whom has a strict preference order over the items, has been much studied. Symmetry considerations require the use of randomization, yielding the more general notion of random assignment. The two most commonly studied algorithms, Random Priority (RP, also known as Random Serial Dictatorship) and the Probabilistic Serial rule (PS), dominate the literature on random assignments.
One feature of our work is the inclusion of several new algorithms for the problem. We adopt an average-case viewpoint: although these algorithms do not have the axiomatic properties of PS and RP, they are computationally efficient and perform well on random data, at least in the case of sincere preferences. We perform a thorough comparison of the algorithms, using several standard probability distributions on ordinal preferences and measures of fairness, efficiency and social welfare.
We find that there are important differences in performance between the known algorithms. In particular, our lesser-known algorithms yield better overall welfare than PS and RP and better efficiency than RP, with small negative consequences for envy, and are computationally efficient. Thus, provided that worst-case and strategic concerns are relatively unimportant, the new algorithms should be seriously considered for use in applications.
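
For contrast with RP (sketched in an earlier post below), here is a sketch of the Probabilistic Serial rule, the simultaneous eating algorithm. It assumes complete strict preferences and equally many agents and items, and is my own illustration rather than code from the paper.

    from fractions import Fraction

    def probabilistic_serial(preferences):
        """Probabilistic Serial rule (simultaneous eating algorithm).

        preferences: dict agent -> list of items, most-preferred first.
        Assumes complete strict preferences and equally many agents and
        items. Returns exact assignment probabilities as Fractions.
        """
        agents = list(preferences)
        items = set().union(*preferences.values())
        supply = {item: Fraction(1) for item in items}
        probs = {(a, i): Fraction(0) for a in agents for i in items}
        clock = Fraction(0)
        while clock < 1:
            # Each agent eats its favourite item that is not used up.
            eating = {}
            for a in agents:
                fav = next(i for i in preferences[a] if supply[i] > 0)
                eating.setdefault(fav, []).append(a)
            # Advance until some item runs out, or total time reaches 1.
            step = min(min(supply[i] / len(e) for i, e in eating.items()),
                       1 - clock)
            for i, eaters in eating.items():
                for a in eaters:
                    probs[(a, i)] += step
                supply[i] -= step * len(eaters)
            clock += step
        return probs

    prefs = {"ann": ["x", "y", "z"],
             "bob": ["x", "z", "y"],
             "cy":  ["y", "x", "z"]}
    for (agent, item), p in sorted(probabilistic_serial(prefs).items()):
        if p:
            print(agent, item, p)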

Predicting the 2015 Canadian election

The Canadian general election will be held on 19 October. The most basic prediction method uses the full district (“riding”) vote information from the last election (in 2011), the current poll estimate for national level support for each party, and a model of changes in district votes. There are two main models used in predicting elections under First Past the Post (single-winner plurality in districts), namely Uniform (additive) Swing and Proportional (multiplicative) Swing.
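
A minimal sketch of the two swing models follows; seat totals are then obtained by awarding each district to its plurality winner. The function name and the example figures are illustrative, not the actual model code.

    def predict_district(prev_share, prev_national, new_national,
                         model="additive"):
        """Project a party's district vote share from national polling.

        additive (Uniform Swing):       v' = v + (P' - P)
        multiplicative (Proportional):  v' = v * P' / P

        Shares are fractions in [0, 1]; in practice the results are
        clipped and renormalised across parties within each district.
        """
        if model == "additive":
            return max(0.0, prev_share + (new_national - prev_national))
        return prev_share * new_national / prev_national

    # Hypothetical figures: a party on 35% in some riding, with its
    # national support having fallen from 30% to 25%.
    print(predict_district(0.35, 0.30, 0.25, "additive"))        # ~0.30
    print(predict_district(0.35, 0.30, 0.25, "multiplicative"))  # ~0.292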

Based on the aggregate poll at signal.thestar.com, these two models predict the following point estimates for the seat distributions (after scaling up to account for the increase in parliament size since 2011):

Multiplicative: CON 133 NDP 71 LIB 125 BQ 7 GRE 1
Additive: CON 145 NDP 85 LIB 101 BQ 6 GRE 1

NDP have lost a lot of support in recent weeks, but it still looks as though no party will have an absolute majority and CON will be the largest party.

UPDATE 19 October (NZ time): using the latest poll estimate the models now give:

Multiplicative: CON 131 NDP 72 LIB 128 BQ 3 GRE 1
Additive: CON 137 NDP 86 LIB 109 BQ 5 GRE 1

ThreeHundredEight.com predicts: CON 120, NDP 71, LIB 141, BQ 5, GRE 1
The Toronto Star predicts: CON 124, NDP 71, LIB 140, BQ 2, GRE 1

Let’s see the results tomorrow.

Measures of partial balance in signed networks

Networks in which the edges can be positive or negative occur in many applications (for example, in international relations, states may be allied or official enemies). A widely used theory of “balance” predicts that networks should become more balanced over time. However, there are no clearly agreed measures of partial balance. Samin Aref and I made a start on improving this situation.

Abstract: Is the enemy of an enemy necessarily a friend, or a friend of a friend a friend? If not, to what extent does this tend to hold? Such questions were formulated in terms of signed (social) networks, and necessary and sufficient conditions for a network to be “balanced” were obtained around 1960. Since then the idea that signed networks tend over time to become more balanced has been widely used in several application areas, such as international relations. However, investigation of this hypothesis has been complicated by the lack of a standard measure of partial balance, since complete balance is almost never achieved in practice.

We formalise the concept of a measure of partial balance, compare several known measures on real-world and synthetic datasets, and investigate their axiomatic properties. We use both well-known datasets from the sociology literature, such as Read’s New Guinean tribes, and much more recent ones involving senate bill co-sponsorship. The synthetic data involves both Erdős-Rényi and Barabási-Albert graphs.

We find that under all our measures, real-world networks are more balanced than random networks. We also show that some measures behave better than others in terms of axioms, computational tractability and ability to differentiate between graphs. We make some recommendations for measures to be used in future work.

Link to preprint: http://arxiv.org/abs/1509.04037
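
One of the simplest candidate measures is the fraction of balanced triangles, i.e. triangles whose edge signs multiply to +1. A toy sketch of that one measure (my illustration, not code from the paper):

    from itertools import combinations

    def triangle_balance(signs):
        """Fraction of balanced triangles in a signed graph.

        signs: dict mapping frozenset({u, v}) -> +1 or -1.
        A triangle is balanced when its edge signs multiply to +1.
        """
        nodes = sorted(set().union(*signs))
        balanced = total = 0
        for u, v, w in combinations(nodes, 3):
            try:
                product = (signs[frozenset((u, v))]
                           * signs[frozenset((v, w))]
                           * signs[frozenset((u, w))])
            except KeyError:
                continue  # these three nodes do not form a triangle
            total += 1
            balanced += product > 0
        return balanced / total if total else float("nan")

    # "The enemy of my enemy is my friend": a balanced triangle.
    signs = {frozenset(("a", "b")): -1,
             frozenset(("b", "c")): -1,
             frozenset(("a", "c")): +1}
    print(triangle_balance(signs))  # 1.0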

Distance-based voting rules

After a long gestation period in which I seemed to be publishing nothing, a few projects have reached maturity. With Benjamin Hadjibeyli, I have a preprint studying so-called distance-rationalizable voting rules, which we recently submitted. These are voting rules in which we specify some notion of consensus winner, set up a distance measure on elections, and then choose the winner(s) by minimizing the distance to a consensus election. This framework has been used by other authors over the years, particularly by Edith Elkind, Piotr Faliszewski and Arkadii Slinko in a series of papers.
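
As a toy illustration of the framework: taking the consensus class to be unanimous elections and the distance to be the total swap (Kendall tau) distance yields the Kemeny rule. A brute-force sketch (my own illustration, feasible only for tiny elections):

    from itertools import combinations, permutations

    def swap_distance(r1, r2):
        """Kendall tau (swap) distance between two strict rankings."""
        pos1 = {c: i for i, c in enumerate(r1)}
        pos2 = {c: i for i, c in enumerate(r2)}
        return sum(1 for a, b in combinations(r1, 2)
                   if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

    def kemeny_winners(profile):
        """Winner(s) of the closest unanimous election to the profile,
        under total swap distance: the distance rationalization of the
        Kemeny rule. Brute force over all rankings.
        """
        candidates = profile[0]
        best, best_rankings = float("inf"), []
        for consensus in permutations(candidates):
            d = sum(swap_distance(consensus, vote) for vote in profile)
            if d < best:
                best, best_rankings = d, [consensus]
            elif d == best:
                best_rankings.append(consensus)
        return {r[0] for r in best_rankings}

    profile = [("a", "b", "c"), ("a", "b", "c"), ("b", "c", "a")]
    print(kemeny_winners(profile))  # {'a'}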

We deal with a lot of foundational issues and give some insight into the way Minkowski geometry relates to decisiveness of such rules. This includes both positive and negative results.

This work was inspired by discussions with Elchanan Mossel and Miklos Racz in Berkeley in 2013, about whether the Dodgson rule is a so-called hyperplane rule. That question is still not answered, but now that the underbrush has been cleared, it makes sense to go back to it.

Experimental research

When I was a PhD student, stretching my horizons meant thinking about commutative ring theory instead of general rings. Over my career I have gradually stretched further, taking in mathematical parts of computer science and social choice theory. However, in recent years the stretching has become much larger. In addition to supervising PhD students in network science, my first (joint) work on experimental social science has been uploaded to the world.

In this work, ultimately inspired by a logical model developed by our colleague Patrick Girard and coauthors to describe belief changes in social networks, Valery Pavlov and I have conducted a laboratory experiment with human participants, designed to measure influence and social learning of factual information. A novelty was the way we allowed and incentivized participants to truthfully report “I don’t know” – this seemingly small change has large effects on the dynamics.

Almost everything about this was new to me, but I now feel confident about taking this work much further. Threshold-type diffusion models, as opposed to the infection-type models so common in network science, seem to be much more relevant to this kind of situation. Our work suggests a different model from the usual threshold model.
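
To make the distinction concrete, here is a generic synchronous step of a linear threshold model (a standard textbook rule, not the specific model suggested by our paper): a node activates once the fraction of its active neighbours reaches its threshold, whereas in infection-type models each active neighbour transmits independently with some probability.

    def threshold_step(network, active, thresholds):
        """One synchronous update of a linear threshold model.

        network: dict node -> set of neighbours. A node activates when
        the fraction of its active neighbours reaches its threshold
        (in an infection-type model, by contrast, each active
        neighbour would transmit independently with some probability).
        """
        new_active = set(active)
        for node, nbrs in network.items():
            if node in active or not nbrs:
                continue
            if len(nbrs & active) / len(nbrs) >= thresholds[node]:
                new_active.add(node)
        return new_active

    net = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"c"}}
    state = threshold_step(net, {"a", "b"}, {n: 0.5 for n in net})
    print(sorted(state))  # ['a', 'b', 'c']: c activates, d does not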

Who knows what the next decade will bring? Perhaps an art installation or a musical performance? There are still faculties of the university I haven’t been much involved with.