By Dr Marcel Beemster, Chief Technical Officer, Solid Sands
Advanced compiler optimisations are not always robust and well-tested. Recent experiments with optimisation testing have uncovered errors in every compiler technology available, leading to the conclusion that advanced optimisation testing is currently an underdeveloped skill of compiler developers, requiring urgent action.
Compiler optimisations have huge economic value. Comparing un-optimised with optimised code can show a fifteen-fold improvement in the execution speed of the generated program. That is a large factor, but not uncommon for advanced loop optimisations such as vectorisation. As for economics, fifteen times greater execution efficiency allows a slower and therefore cheaper target processor, with lower power consumption and heat dissipation, and potentially a smaller system size – benefits valued in almost all embedded applications.
For a typical compiler, more than half of its source code is related to optimisation. With such a significant share of the code base involved, errors do occur.
When writing any test, it is preferable to start from the language specification – not so straightforward a task for optimisations, because from the C/C++ language definition's point of view, optimisations hardly exist. C11 section 5.1.2.3 states: “The semantic descriptions in this International Standard describe the behaviour of an abstract machine in which issues of optimisation are irrelevant.”
The language definition specifies the behaviour of every particular language construct, but it does not specify how or when that behaviour is met. Optimisation is a so-called “non-functional” requirement, making it hard – if not impossible – to verify optimisations against a specification.
To address these challenges, we developed new optimisation tests for our compiler test suite SuperTest. As an example, a text search for tail recursion in the SuperTest suites immediately reports about ten tests – not counting those that exercise tail recursion accidentally and are not documented as doing so. Because a compiler is structured as a pipeline of steps, every test is exposed to all components of that pipeline, including all optimisation stages.
This means that the chance of tests unintentionally hitting optimisations is high, and that is indeed what we see. The weak link is that such accidental coverage is not good enough to meet the formal requirements of functional safety standards – and rightly so. These standards demand a less ‘accidental’ approach: a rigid framework that links tests to the optimisation requirements. Having such a framework helps.