Category Archives: Opinion

Prometheus

I don’t claim to resemble him in every way, but his unfortunate fate came to mind over the last few months as I suffered from recurrent corneal erosion. A short description: imagine that, intermittently, 10-20% of the skin on your leg is torn off and grows back within a few days. Now imagine that instead of the leg it is your cornea, that it only happens at night, and that it is your own eyelid sticking to the cornea and tearing off the top layer. It is certainly the most annoying, painful and debilitating condition I have come across that is nevertheless essentially trivial, in that it won’t cause any permanent damage. Doing anything with the eye for hours or days after an attack is very difficult, and for me going an hour without reading is unusual, let alone several days.

Apparently there is a good chance of simple treatment succeeding, and more complicated options exist.

Trip to Iran

I have just returned home after 5 weeks in Iran, mostly for a family vacation. There were many interesting experiences, a lot of them positive. Skiing at about the height of Mt Cook just a few minutes from the city was a lot of fun, as was a bus trip to the ancient city of Kashan, which has had inhabitants nearby for the last 8000 years. Meeting relatives (by marriage) and some old and new friends, buying bread from the neighbourhood bakery, and eating the very high quality and cheap pastries were other highlights.

Of course the political and economic situation is very bad and getting worse. Taxi drivers (whose services we used constantly – there was no way I was going to try driving, for reasons discussed below, and buses are not that convenient) almost uniformly blamed the government and had no faith in the idea of an Islamic Republic. The rial depreciated sharply against the US dollar and other currencies during our trip. The government appears to be using internet “filtering” (censorship) to prevent discussion of anything important, although as usual it is not very competent at it: the NZ Herald website was unavailable, while the NY Times was easily accessible. Most internet-savvy people I met knew how to bypass the filters. The drawback is that speeds are reduced to the point where some services don’t really work (I couldn’t see anything on YouTube).

Iran seems to be the opposite of Sweden in the area of design. Door handles are loose, and once tightened, the locks no longer work. Light switches routinely dangle from the wall with wiring exposed. Quality control in construction seems to be the exception rather than the rule.

Tehran seems to be one big health and safety violation. Some highlights I noticed: most taxi drivers have disabled the seatbelts in the back, and seem bemused or offended if this is pointed out (my favourite was the driver who promised to drive carefully and slowly on our trip home, although most of the trip was on the motorway); exposed wiring is visible in many public places; construction workers use welding torches with no eye protection. The most obvious health hazard is the foul, stinking air, which at this time of year is at its worst.

Many of the health hazards are traffic related (my guess is that the major contribution to the air pollution comes from vehicles, many of which seem to be of low quality and run on low grade fuel). The lane markings are completely ignored, and traffic proceeds (slowly in most places) by a process of everyone trying to claim the right of way and then negotiating silently with the other drivers, who by now are within a few centimetres of the car, approaching at an arbitrary angle. Apparently Iran has 4 times the death rate from traffic accidents that NZ has. We only saw a couple of accidents while there. The low speeds in Tehran must make fatal accidents less common, so I wonder how bad it must be in other places.

This leads to the idea of the rule of law, and the difference between writing laws and enforcing them. A case in point is local government rules on footpaths: essentially none of the footpaths in the hilly areas are wheelchair accessible, because they have steps every few metres (presumably each household builds its own vehicle crossing and no one checks that these mesh together into a usable path).

It is hard to escape the conclusion that a country with abundant natural resources, great potential for tourism, and a strong and enduring culture with (as yet) strong family units is being let down by an appallingly bad governance system. I don’t subscribe to the view that just because they have never had a decent government in 2500 years they can’t have one now, but it does seem that a culture change is needed. Judging from some academic and industry contacts we made, this change is under way a lot faster than we expected in some sectors, but it will still take considerable time.

Big O(micron) and Big Omega and Big Theta

Teaching an algorithms course recently, I realized that the correct use of these asymptotic notations is apparently rather difficult for students to grasp. In fact, I have some colleagues whose understanding of these concepts is not maximally clear.

In 1976 Donald Knuth published an article explaining these notations in detail and recommending their use. After 35 years and plenty of evidence that the scheme works well, we still have resistance. Perhaps one reason is that the idea of “asymptotically equivalent to”, or “of the exact order of”, is formalized by the symbol Theta, while O is a simpler and more familiar symbol that evokes the word “order”. The most common misuse I have seen is exactly this: using O when Theta is meant. The principle that the most important and useful concepts should have the simplest notation suggests that O would be better for computer science use. However, that would make it inconsistent with mathematical use, where big-O is definitely reserved for upper bounds.
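For reference, here is a sketch of the definitions as they are usually stated in computer science (essentially Knuth’s, for nonnegative functions of $n$):

$f(n) = O(g(n))$ iff there exist constants $C > 0$ and $n_0$ such that $f(n) \le C\, g(n)$ for all $n \ge n_0$ (an upper bound);
$f(n) = \Omega(g(n))$ iff there exist constants $c > 0$ and $n_0$ such that $f(n) \ge c\, g(n)$ for all $n \ge n_0$ (a lower bound);
$f(n) = \Theta(g(n))$ iff both of the above hold, i.e. $c\, g(n) \le f(n) \le C\, g(n)$ for all $n \ge n_0$ (the exact order).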

The other common mistake (also mentioned by Knuth) is, unlike the perhaps excusable one above, not excusable: using O to make a lower-bound claim, as in “Algorithm A is better than Algorithm B because A is known to run in time O(n log n) and B in time O(n^2)”. If O is replaced by Theta this might make sense, but the literature is full of algorithms whose O-bounds are not asymptotically tight – how do we know B doesn’t also run in time O(n)? Again, if you think that O means Theta, this mistake is an easy one to make.
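As a toy illustration (the function here is mine and purely hypothetical): a statement like “B runs in time O(n^2)” can be perfectly true while saying nothing about how fast B actually is, so it cannot support a comparison.

    # Hypothetical "Algorithm B": a single pass over the input, so its
    # actual running time is Theta(n).
    def algorithm_B(xs):
        total = 0
        for x in xs:        # exactly one loop over the n items
            total += x
        return total

    # The claim "algorithm_B runs in time O(n^2)" is also true, since every
    # Theta(n) function is O(n^2) -- but the bound is uselessly loose, and
    # tells us nothing about whether some O(n log n) algorithm A is faster.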

The logic of journalism?

From BBC:

Intelligence tests are as much a measure of motivation as they are of mental ability, says research from the US.

Researchers from Pennsylvania found that a high IQ score required both high intelligence and high motivation but a low IQ score could be the result of a lack of either factor.

It looks like “P implies A and B” BUT “not A or not B implies not P” (where P = “high IQ score”, A = “high intelligence”, B = “high motivation”). This seems a strange way to write a news story, since the second statement is just the contrapositive of the first, and hence logically equivalent to it.
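A minimal brute-force check of that equivalence (a sketch in Python; the variable names are my own shorthand):

    from itertools import product

    def implies(x, y):
        return (not x) or y

    # P = "high IQ score", A = "high intelligence", B = "high motivation"
    for P, A, B in product([False, True], repeat=3):
        first = implies(P, A and B)                    # "P implies A and B"
        second = implies((not A) or (not B), not P)    # "not A or not B implies not P"
        assert first == second    # the two statements agree on all 8 assignments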

Another interpretation is that the event “low IQ score” is not the complement of the event “high IQ score”. This gives the second statement a little more content: the contrapositive only tells us that a lack of either factor implies a “non-high” score, whereas the article claims it implies a genuinely “low” one.

Of course “could” might only mean that “not A or not B is not inconsistent with not P”, but that would be very much less newsworthy. Perhaps we need a standard language for journalists to deal with such issues.

Christopher Hitchens by Martin Amis

I have conflicting feelings about the other side of the “two cultures” divide. Often the relentless verbiage used to disguise the trite opinions irritates me – why can’t they just use evidence, and write systematically so that people can understand, eschewing the verbal pyrotechnics? However, the beauty sometimes does attract me despite such misgivings.

Martin Amis has written a kind of obituary for Christopher Hitchens, who is not yet dead, but expected to be so soon. Not only is it very interesting and attractively written, it is clearly motivated by real feelings of friendship.

Wikileaks

There is too much to say, but no time to do it. Wikileaks, whatever the motivation and personal foibles of the people involved, has done the world a service and committed no crime of which I am aware. The reaction of politicians especially in the USA has been very revealing, and the withdrawal of support by companies such as Visa, Paypal, Mastercard and Amazon, presumably under political pressure, has been extremely disappointing. Too few people seem to be drawing the correct conclusion: if you don’t want people to find out that you have done bad things, don’t do them! I am still trying to work out how best to involve myself in this issue. For now, here are some links:

WLCentral information clearinghouse

List of Wikileaks mirror sites

Background piece on Assange and Wikileaks  by Raffi Khatchadourian

Julian Assange has made us all safer by Johann Hari

Blog of Glenn Greenwald at Salon.com

The three faces of Uncle Sam by Michael Brenner

The decline and fall of the American empire by Alfred W. McCoy

State Department Humour (if only it wasn’t unintentional)

University rankings

For lack of a suitable alternative venue, I am putting this opinion piece, destined for a University of Auckland audience, here. Some interesting references related to this topic:
http://en.wikipedia.org/wiki/College_and_university_rankings
http://www.insidehighered.com/blogs/globalhighered/

Recently, a proliferation of attempts to rank universities worldwide according to various criteria has provoked debate. Despite their deficiencies, such rankings seem unlikely to disappear, given their usefulness to outsiders (such as international students) in making basic quality judgments (although some universities have apparently boycotted some rankings). Among the many issues involved: what are they measuring? How accurate and unbiased is the data? How much is factual and how much opinion? How is the data aggregated? Can the rankings be manipulated? Given space limitations, I want to focus here on a few of these, and to exhort us all to apply our scholarly expertise to help improve the calculation and interpretation of such rankings.

The most famous rankings are probably those conducted by Times Higher (from 2010 with Thomson Reuters, using a different methodology; formerly with Quacquarelli Symonds), QS (continuing from 2010 with the same methodology), and Shanghai Jiao Tong. The first two aim to measure more than just research. Other, research-only, rankings seen as reasonably credible are conducted by Taiwan’s HEEACT, the University of Leiden, and very recently a group from the University of Western Australia. Most rank at the level of the whole university, and also (for research) at the faculty/discipline level. To give an idea of the variability, in 2010 QS ranked UoA 68th, THE 145th, HEEACT 287th, Leiden 272nd, and UWA 172nd.

The methodology of these rankings is similar. Several numerical measures are computed (either objectively or from opinion surveys), normalized in some way, and aggregated according to various weightings, in order to arrive at a single number which can be used for ranking. These final numbers are affected by quality of the data, institutional game-playing, and the chosen weights, in addition to the actual metrics used.

A wide range of objective metrics is used, many of which can be manipulated by institutions and have been, often notoriously, in recent years. They all measure (sometimes subtly) different things: for example, citation data can measure overall citation impact (favouring large institutions), impact per staff member, or relative impact (field-dependent), and/or use statistics such as the $h$-index, $g$-index, etc. The citation measures used by the various rankings placed UoA anywhere between 99th and around 300th in 2010. Measuring teaching performance by any objective criterion is considered extremely difficult, and very crude proxies (such as student-staff ratios) are used.
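For concreteness, the $h$-index of a list of papers is the largest $h$ such that at least $h$ of them have at least $h$ citations each; a minimal sketch (the citation counts below are invented):

    def h_index(citations):
        """Largest h such that at least h papers have at least h citations each."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # prints 4: four papers have at least 4 citations each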

Subjective measures have their own problems. THE and QS use opinion surveys to measure reputation. The claimed advantage is that a university’s reputation is difficult for the institution itself (or a competitor) to change by strategic action (that is, to manipulate the ranking). The drawback is that an institution’s real improvements may be very slow to be reflected in such surveys, and reputation may depend on factors other than real quality. THE gives 19.5% weight to research reputation and 15% to teaching reputation, while QS gives 40% to overall reputation and 10% to reputation among employers. My personal experience, as one of the nearly 14,000 academics surveyed by THE, was that I had very little confidence in the accuracy of my own opinions on teaching in North American universities.

The importance of the weights is shown by UoA’s 2010 THE performance. In the 5 categories listed, with respective weights 30, 2.5, 30, 32.5, 5 and respective normalized scores 34.8, 94.3, 39.2, 71.8, 61.1, UoA achieved an overall score of 56.1. Different choices of weights could lead to any score between 34.8 and 94.3, with the consequent ranking changing from well below the top 200 to inside the top 20.
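As a rough sketch of the mechanism (using the weights and scores quoted above; THE’s own normalization details are more involved, so this simple weighted average need not reproduce the published 56.1 exactly):

    # Weights and normalized scores for the 5 THE categories, as quoted above.
    weights = [30, 2.5, 30, 32.5, 5]
    scores  = [34.8, 94.3, 39.2, 71.8, 61.1]

    def aggregate(weights, scores):
        """Weighted average of the scores (weights need not sum to 100)."""
        return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

    print(round(aggregate(weights, scores), 1))   # aggregate with the published weights

    # Pushing all the weight onto a single category shows the extremes: the
    # aggregate can be placed anywhere between min(scores) and max(scores).
    print(min(scores), max(scores))               # 34.8 and 94.3, as noted above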

It is clear that substantial interpretation is required to make any use of the rankings, and the media’s focus on simplistic analysis of a single final rank is not helpful to universities or to any of their stakeholders. We should, as a university and as individual researchers:

* Ensure that we understand that different rankings measure different things and that aggregating rankings is highly problematic, and communicate this accurately to media and stakeholders.
* Be honest. Some of these rankings have been singled out for comment by the Vice-Chancellor and Deputy VC (Research), while others have been publicly ignored. We should (at least internally) look at unfavourable ones too. UoA has slowly declined in most rankings over the last few years. Field-specific information reveals that some of our faculties score much higher than others. Let’s use the rankings not only for advertising, but for reflection on our own performance, especially as regards research.
* Get involved, to ensure that the ranking methodology is as sound as possible. Social scientists have a role to play here, in elucidating just what each measure is supposed to measure, what its axiomatic properties are, and how the measures should be aggregated. THE and QS claim to have discussed methodology with hundreds of people; I doubt that included anyone from UoA.
* Demand transparency of methodology and timely provision of unaggregated data to the public, to enable analysis and reproduction of results. I have not discussed the Shanghai rankings in detail because their results are apparently unreproducible.
* Demand that ranking organizations justify the costs to the university of data provision. As with many journals, universities supply the data for free and private companies then control it and sell it back to us. Why should we tolerate it?