I can’t say I agree with this talk completely, but it gets one thinking. The original formatting is indeed hard to read (the slides were probably plain text), but the original is still worth reading.
The main points, in my paraphrase.
Software code can be divided into two categories:
A small amount of “hot” code, which executes repeatedly and accounts for almost all of a program’s running time.
A much larger amount of “cold” code, which executes rarely (for example, only during startup) and does not affect the overall performance of the software.
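As a toy illustration of this split (mine, not from the talk; the function names and sizes are invented), consider a program where the setup code runs once while a small inner loop dominates the total running time. In practice a profiler, not intuition, is what identifies the hot spot.

```python
def cold_setup():
    # "Cold" code: runs once; its cost is negligible over the whole run.
    return list(range(1000))

def hot_kernel(data):
    # "Hot" code: a small loop executed over and over; in this toy example
    # it performs about a million iterations and dominates the runtime.
    total = 0
    for _ in range(1_000):
        for x in data:
            total += x * x
    return total

data = cold_setup()
result = hot_kernel(data)
```

The point of the hot/cold distinction is exactly that effort spent on `cold_setup` is wasted, while `hot_kernel` is where hand-optimization pays off.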
Nobody should care about the performance of “cold” code. Much code today is either written in interpreted languages (Python) or runs inside a web browser with huge overhead.
“Hot” code is optimized by hand by the algorithm’s developers, because this requires deep knowledge of both the algorithm (roughly speaking, which of its properties can be exploited for performance) and the target hardware platform. Note that the Intel CPU instruction set reference alone spans volumes 2A–2D, about 2,200 pages.
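A classic illustration of this kind of combined knowledge (my example, not from the talk) is testing whether an integer is a power of two: knowing both the mathematical property and the binary representation used by the hardware lets the specialist replace a loop or logarithm with a single bit trick.

```python
def is_power_of_two(n: int) -> bool:
    # A power of two has exactly one set bit in binary, so clearing the
    # lowest set bit (n & (n - 1)) yields zero exactly for powers of two.
    return n > 0 and (n & (n - 1)) == 0
```

The transformation is obvious to someone who knows the binary representation, yet invisible to anyone reasoning only at the level of the original, straightforward formulation.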
Optimizing compilers are very complex programs that may themselves introduce errors into the optimized code. The set of optimization passes affects compilation time and, for example, the convenience of debugging. Yet optimizing the “cold” code is useless, and on the “hot” code the compiler cannot come close to the speedups a specialist can achieve. The specialist’s methods include caching, reordering the computation, and changing the formulae themselves. A compiler can only perform semantics-preserving transformations and certainly cannot restructure a program the way a specialist can.
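A minimal sketch of two such transformations (my illustration, not from the talk): adding a cache trades memory for time, and replacing a summation loop with a closed-form identity changes the formula outright. Some compilers handle trivial cases of the latter, but nothing on the scale of wholesale algorithm replacement; the former requires a purity judgment and a space/time trade-off that only the programmer can make.

```python
from functools import lru_cache

def fib_naive(n):
    # Exponential time: recomputes the same subproblems over and over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    # Same recurrence, but the programmer adds caching: linear time.
    # A compiler will not do this on its own, since it changes the
    # program's memory behavior.
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

def sum_below(n):
    # Changing the formula: an O(n) loop `sum(range(n))` replaced by the
    # closed form n*(n-1)/2, a fact about arithmetic, not about the code.
    return n * (n - 1) // 2
```

Both versions compute the same values; what changed is knowledge the compiler does not have and is not licensed to apply.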
The conclusion: further development of ever more complicated optimizing compilers makes essentially no sense.
(In the final part of the talk the author cites Knuth on the prospects of interactive program-transformation systems as one promising direction of development. But in my opinion, the earlier thesis is the more interesting one.)