
Research news

A few papers are currently working their way through the journal system.

Once all these are finally done, I can get back to some work on generating functions, after several years, which I am very much looking forward to.


Multi-district preference modelling

This paper with longtime coauthor Geoffrey Pritchard is an important step toward systematic design of electoral systems.

Abstract: Generating realistic artificial preference distributions is an important part of any simulation analysis of electoral systems. While this has been discussed in some detail in the context of a single electoral district, many electoral systems of interest are based on districts. Neither treating preferences between districts as independent nor ignoring the district structure yields satisfactory results. We present a model based on an extension of the classic Eggenberger-Pólya urn, in which each district is represented by an urn and there is correlation between urns. We show in detail that this procedure has a small number of tunable parameters, is computationally efficient, and produces “realistic-looking” distributions. We intend to use it in further studies of electoral systems.
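The correlated multi-urn construction is the paper's contribution and I won't try to reproduce it here, but the single-urn building block is easy to sketch. Here is a minimal Python illustration of the classic Eggenberger-Pólya urn over preference orders; the function name and the `reinforcement` parameter are mine, chosen for illustration only.

```python
import random
from itertools import permutations

def polya_urn_sample(candidates, n_voters, reinforcement=1.0, rng=None):
    """Draw n_voters strict preference orders from one Polya urn.

    The urn starts with one 'ball' per possible order; each time an order
    is drawn it is returned with `reinforcement` extra copies, so popular
    orders become more popular. reinforcement=0 gives impartial culture.
    """
    rng = rng or random.Random()
    orders = list(permutations(candidates))
    weights = [1.0] * len(orders)              # initial urn contents
    sample = []
    for _ in range(n_voters):
        i = rng.choices(range(len(orders)), weights=weights)[0]
        sample.append(orders[i])
        weights[i] += reinforcement            # reinforce the drawn order
    return sample

# One district's voters over three candidates.
print(polya_urn_sample("abc", 100)[:5])
```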

New algorithms for matching problems

A preprint with Jacky Lo, just submitted to arXiv and on my publications page. The paper was ultimately inspired by a Christmas party game, which shows that the line between work and play is hard to draw.

Abstract: The standard two-sided and one-sided matching problems, and the closely related school choice problem, have been widely studied from an axiomatic viewpoint. A small number of algorithms dominate the literature. For two-sided matching, the Gale-Shapley algorithm; for one-sided matching, (random) Serial Dictatorship and the Probabilistic Serial rule; for school choice, Gale-Shapley and the Boston mechanisms. The main reason for the dominance of these algorithms is their good (worst-case) axiomatic behaviour with respect to notions of efficiency and strategyproofness. However, if we shift the focus to fairness, social welfare, tradeoffs between incompatible axioms, and average-case analysis, it is far from clear that these algorithms are optimal. We investigate new algorithms, several of which have not appeared (to our knowledge) in the literature before. We give a unified presentation in which algorithms for two-sided matching yield one-sided matching algorithms in a systematic way. In addition to axiomatic properties, we investigate agent welfare using both theoretical and computational approaches. We find that some of the new algorithms are worthy of consideration for certain applications. In particular, when considering welfare under truthful preferences, some of the new algorithms outperform the classic ones.
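For readers who have not met the classic algorithms named above, here is a standard textbook sketch (not code from the paper) of Gale-Shapley deferred acceptance, assuming equal-sized sides and complete strict preference lists.

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Deferred acceptance: proposers propose in preference order and
    reviewers hold the best offer received so far. Returns a stable
    matching as a dict {proposer: reviewer}.
    """
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # next index to propose to
    held = {}                                      # reviewer -> held proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in held:
            held[r] = p
        elif rank[r][p] < rank[r][held[r]]:        # r prefers the newcomer
            free.append(held[r])
            held[r] = p
        else:
            free.append(p)                         # rejected; propose again
    return {p: r for r, p in held.items()}

print(gale_shapley({"a": ["x", "y"], "b": ["x", "y"]},
                   {"x": ["b", "a"], "y": ["a", "b"]}))  # {'b': 'x', 'a': 'y'}
```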

A blockage cleared

After a very long time, some (revised versions of) papers have emerged, two on distance rationalizability of voting rules with Benjamin Hadjibeyli, one on signed networks with Samin Aref, and one on “wisdom of crowds” with Patrick Girard and Valery Pavlov.

There is a long list of things still to do, but this is a big relief. These papers had gone through substantial revisions and years of work in some cases. There is also a recent paper on legislation networks with Neda Sakhaee and Golbon Zakeri.

My papers can always be found on my publications page.

Average-case analysis of random assignment algorithms

With summer scholarship student Jacky Lo, I have just submitted this paper to COMSOC 2016. This is the first time I have seriously looked at resource allocation in social choice. It was interesting and I may do more on this topic in future.

Abstract: The problem of one-sided matching without money (also known as house allocation), namely computing a bijection from a finite set of items to a finite set of agents, each of whom has a strict preference order over the items, has been much studied. Symmetry considerations require the use of randomization, yielding the more general notion of random assignment. The two most commonly studied algorithms, Random Serial Dictatorship (also known as Random Priority, RP) and the Probabilistic Serial rule (PS), dominate the literature on random assignments.
One feature of our work is the inclusion of several new algorithms for the problem. We adopt an average-case viewpoint: although these algorithms do not have the axiomatic properties of PS and RP, they are computationally efficient and perform well on random data, at least in the case of sincere preferences. We perform a thorough comparison of the algorithms, using several standard probability distributions on ordinal preferences and measures of fairness, efficiency and social welfare.
We find that there are important differences in performance between the known algorithms. In particular, our lesser-known algorithms yield better overall welfare than PS and RP and better efficiency than RP, with small negative consequences for envy, and are computationally efficient. Thus provided that worst-case and strategic concerns are relatively unimportant, the new algorithms should be seriously considered for use in applications.
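To give a flavour of one baseline, here is a short sketch (illustrative only, not the paper's code) of the simultaneous-eating procedure behind the Probabilistic Serial rule; it assumes every agent ranks every item.

```python
def probabilistic_serial(prefs):
    """Simultaneous eating: all agents 'eat' their favourite remaining
    item at unit speed; when an item is exhausted they move on.
    prefs[i] is agent i's strict ranking of the items.
    Returns shares[i][item] = probability agent i is assigned the item.
    """
    n = len(prefs)
    supply = {item: 1.0 for item in prefs[0]}
    shares = [{item: 0.0 for item in supply} for _ in range(n)]
    t = 0.0
    while t < 1.0 - 1e-12:
        # Each agent targets their favourite item with supply remaining.
        targets = [next(x for x in prefs[i] if supply[x] > 1e-12)
                   for i in range(n)]
        eaters = {}
        for i, x in enumerate(targets):
            eaters.setdefault(x, []).append(i)
        # Advance time until the first targeted item runs out.
        dt = min(1.0 - t,
                 min(supply[x] / len(es) for x, es in eaters.items()))
        for x, es in eaters.items():
            for i in es:
                shares[i][x] += dt
            supply[x] -= dt * len(es)
        t += dt
    return shares

# Both agents like 'u' best, so they split it and then split 'v'.
print(probabilistic_serial([["u", "v"], ["u", "v"]]))
```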

Measures of partial balance in signed networks

Networks in which the edges can be positive or negative occur in many applications (for example, in international relations, states may be allied or official enemies). A widely used theory of “balance” predicts that networks should become more balanced over time. However, there are no clearly agreed measures of partial balance. Samin Aref and I made a start on improving this situation.

Abstract: Is the enemy of an enemy necessarily a friend, or a friend of a friend a friend? If not, to what extent does this tend to hold? Such questions were formulated in terms of signed (social) networks, and necessary and sufficient conditions for a network to be “balanced” were obtained around 1960. Since then the idea that signed networks tend over time to become more balanced has been widely used in several application areas, such as international relations. However, investigation of this hypothesis has been complicated by the lack of a standard measure of partial balance, since complete balance is almost never achieved in practice.
We formalise the concept of a measure of partial balance, compare several known measures on real-world and synthetic datasets, and investigate their axiomatic properties. We use both well-known datasets from the sociology literature, such as Read’s New Guinean tribes, and much more recent ones involving senate bill co-sponsorship. The synthetic data involves both Erdős-Rényi and Barabási-Albert graphs. We find that under all our measures, real-world networks are more balanced than random networks. We also show that some measures behave better than others in terms of axioms, computational tractability and ability to differentiate between graphs. We make some recommendations for measures to be used in future work.

Link to preprint: http://arxiv.org/abs/1509.04037
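The simplest measure in this family is the fraction of balanced triangles, those with an even number of negative edges. The sketch below is illustrative only, not code from the paper.

```python
from itertools import combinations

def triangle_balance(edges):
    """Fraction of triangles whose edge signs multiply to +1.

    edges: dict mapping an undirected pair (u, v) to +1 or -1.
    Returns None if the graph contains no triangles.
    """
    nodes = sorted({v for e in edges for v in e})
    sign = {frozenset(e): s for e, s in edges.items()}
    balanced = total = 0
    for tri in combinations(nodes, 3):
        sides = [frozenset(p) for p in combinations(tri, 2)]
        if all(s in sign for s in sides):
            total += 1
            prod = sign[sides[0]] * sign[sides[1]] * sign[sides[2]]
            balanced += prod > 0
    return balanced / total if total else None

# "The enemy of my enemy is my friend": balanced, so the measure is 1.0.
print(triangle_balance({(1, 2): -1, (2, 3): -1, (1, 3): +1}))
```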

Distance-based voting rules

After a long gestation period in which I seemed to be publishing nothing, a few projects have reached maturity. With Benjamin Hadjibeyli, I have a preprint studying so-called distance rationalizable voting rules, which we recently submitted. These are voting rules in which we specify some notion of consensus winner, set up a distance measure on elections, and then choose the winner(s) by minimizing the distance to a consensus election. This framework has been used by other authors over the years, particularly by Edith Elkind, Piotr Faliszewski and Arkadii Slinko in a series of papers.

We deal with a number of foundational issues and give some insight into how Minkowski geometry relates to the decisiveness of such rules, with both positive and negative results.
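A textbook example of the framework (not taken from our preprint): take the unanimity consensus class, in which some candidate is ranked first by every voter, together with the swap (Kendall tau) distance. Lifting a candidate to the top of a ballot costs one swap per position, so minimizing total distance to a consensus election recovers the Borda winner.

```python
def dr_winner(profile, candidates):
    """Distance-rationalizable winner for the unanimity consensus class
    under swap (Kendall tau) distance.

    profile: list of strict ballots (sequences, best first). A candidate's
    score is the fewest adjacent swaps needed to put it on top of every
    ballot, i.e. the sum of its positions; minimizing this recovers Borda.
    """
    def score(c):
        return sum(ballot.index(c) for ballot in profile)
    best = min(score(c) for c in candidates)
    return [c for c in candidates if score(c) == best]

profile = [("a", "b", "c"), ("b", "a", "c"), ("a", "c", "b")]
print(dr_winner(profile, "abc"))  # ['a']
```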

This work was inspired by discussions with Elchanan Mossel and Miklos Racz in Berkeley in 2013, about whether the Dodgson rule is a so-called hyperplane rule. That question is still not answered, but now that the underbrush has been cleared, it makes sense to go back to it.