I went to a talk by Mike Powell (Cambridge University) yesterday. He is a well-known researcher in approximation theory and optimization. His title, “The unimportance of theory in the development of optimization algorithms”, certainly provoked me to attend.
As I understood his thesis, many practically useful algorithms have been developed for optimization problems (we are talking about continuous stuff here, not the discrete algorithms beloved of CS theorists), despite a lack of proper convergence theory. People may have been deterred from publishing good algorithms for lack of such theory. Also, the worst-case bounds that are available do not predict the running time on typical problem instances.
This of course is old news if you know about quicksort, or indeed almost any algorithm. In 2003 Robert Sedgewick told me about experiments he undertook when writing the latest version of his series of textbooks on algorithms. The known upper bounds turned out to be useless for comparing algorithms, since they were so huge compared to the typical running time. This is why the field of analysis of algorithms has existed and developed since it was started by Knuth in 1962.
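In the same spirit, here is a toy experiment one can run today (a sketch of my own, not Sedgewick's actual setup): count the comparisons a textbook quicksort performs on random inputs of size n and set the measured figure against the worst-case bound of n(n−1)/2 comparisons.

```python
import math
import random

def quicksort(a, count):
    """Textbook quicksort, first element as pivot, counting comparisons."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    left, right = [], []
    for x in rest:
        count[0] += 1                      # one comparison against the pivot
        (left if x < pivot else right).append(x)
    return quicksort(left, count) + [pivot] + quicksort(right, count)

n = 10_000
trials = 20
totals = []
for _ in range(trials):
    count = [0]
    quicksort([random.random() for _ in range(n)], count)
    totals.append(count[0])

avg = sum(totals) / trials
print(f"worst-case bound n(n-1)/2        : {n * (n - 1) // 2:>12,}")
print(f"average-case prediction ~2n ln n : {round(2 * n * math.log(n)):>12,}")
print(f"measured average over {trials} runs   : {round(avg):>12,}")
```

For n = 10,000 the worst-case bound is around fifty million comparisons, while the measured count on random inputs sits near the ~2n ln n average-case prediction, roughly three hundred times smaller: the bound is correct but says almost nothing about typical behaviour.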
But there is a big difference, it seems to me. For algorithms like quicksort, correctness is easy to prove and we are only discussing running time. But for optimization algorithms as discussed above, as far as I understand it, correctness can only be established by convergence theory. When I raised this, I was told that the algorithms in question had been validated by checking them on test cases. If I were relying on an algorithm for running a nuclear power station or flying a spacecraft, I would not be reassured by this.
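To make concrete what checking against test cases looks like, here is a minimal sketch of my own, assuming SciPy's Nelder-Mead implementation; the test suite and tolerances are invented for illustration, not anything the speaker described.

```python
import numpy as np
from scipy.optimize import minimize

# A toy "validation by test cases": functions with known minimizers,
# pass/fail decided by distance to the true minimizer.  Passing this says
# the method worked here; it is not a proof of convergence in general.
TEST_CASES = [
    ("sphere",     lambda x: float(np.sum(x**2)),
                   [3.0, -4.0], [0.0, 0.0]),
    ("rosenbrock", lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2,
                   [-1.2, 1.0], [1.0, 1.0]),
]

for name, f, x0, xstar in TEST_CASES:
    res = minimize(f, x0, method="Nelder-Mead",
                   options={"xatol": 1e-9, "fatol": 1e-9, "maxiter": 10_000})
    err = np.linalg.norm(res.x - np.asarray(xstar))
    print(f"{name:10s} ||x - x*|| = {err:.1e}  {'ok' if err < 1e-4 else 'FAIL'}")
```

The method sails through a suite like this, yet McKinnon (SIAM J. Optim., 1998) constructed a strictly convex differentiable function and a starting simplex on which the classic Nelder-Mead method converges to a point that is not even stationary. That is exactly the gap between “validated on test cases” and “correct by convergence theory”.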
The right approach to a poor theory is to find a better theory, not to claim that theories are not useful.