These sorts of things are fun and interesting. Compiler optimizations fall into two categories:
1. organized data flow analysis
2. recognizing a pattern and replacing it with a faster version
The first is very effective over a wide range of programs and styles, and is the bulk of the actual transformations. The second is a never-ending accumulation of patterns, where one reaches diminishing returns fairly quickly.
The example in the linked article is very clever and fun, but not really of much value (I've never written a loop like that in 45 years). As mentioned elsewhere "Everyone knows the Gauss Summation formula for sum of n integers i.e. n(n+1)/2" and since everyone knows it why not just write that instead of the loop!
Of course one could say that for any pattern, like replacing i*2 with i<<1, but those pattern replacements are very valuable because they are generated by high level generic coding.
And you could say I'm just being grumpy about this because my optimizer does not do this particular optimization. Fair enough!
It's not clear to me what optimizations the compiler actually did here. Years ago, I worked on a niche compiler, and was routinely surprised by what the optimizer was able to figure out, despite having personally written most of the optimization transformations myself.
I can't actually speak to the specifics here but usually this is "idiom recognition", that is, it just notices that the pattern is there and transforms it directly.
It might have more value than you think. If you look up SCEV in LLVM you'll see it's primarily used for analysis and it enables other optimizations outside of math loops that, by themselves, probably don't show up very often.
You might be right.
The code that does this is here, if anyone is curious:
https://github.com/llvm/llvm-project/blob/release/21.x/llvm/...
https://github.com/llvm/llvm-project/blob/release/21.x/llvm/...
Almost 16000 lines in a single source code file. I find this both admirable and unsettling.
Does it really matter where the lines are? 16,000 lines is still 16,000 lines.
Even though I do find your indifference refreshing, I must say: it does matter for quite a few people.
If you want to recognize all the common patterns, the code can get very verbose. But it's all still just one analysis or transformation, so it would be artificial to split it into multiple files. I haven't worked much in LLVM, but I'd guess that the external interface to these packages is pretty reasonable and hides a large amount of the complexity that took 16 kloc to implement.
If you don't rely on IDE features or completion plugins in an editor like vim, it can be easier to navigate tightly coupled complexity if it is all in one file. You can't really scan it or jump to the right spot as easily as smaller files, but in vim searching for the exact symbol under the cursor is a single-character shortcut, and that only works if the symbol is in the current buffer. This type of development works best for academic-style code with a small number (usually one or two) of experts who are familiar with the implementation, but in that context it's remarkably effective. Not great for merge conflicts in frequently updated code though.
... yes.
If it was 16K lines of modular "compositional" code, or a DSL that compiles in some provably-correct way, that would make me confident. A single file with 16K lines of -- let's be honest -- unsafe procedural spaghetti makes me much less confident.
Compiler code tends to work "surprisingly well" because it's beaten to death by millions of developers throwing random stuff at it, so bugs tend to be ironed out relatively quickly, unless you go off the beaten path... then it rapidly turns out to be a mess of spiky brambles.
The Rust development team for example found a series of LLVM optimiser bugs related to (no)aliasing, because C/C++ didn't use that attribute much, but Rust can aggressively utilise it.
I would be much more impressed by 16K lines of provably correct transformations with associated Lean proofs (or something), and/or something based on EGG: https://egraphs-good.github.io/
On the other end of the optimizer size spectrum, a surprising place to find a DSL is LuaJIT's "FOLD" stage: https://github.com/LuaJIT/LuaJIT/blob/v2.1/src/lj_opt_fold.c (it's just pattern matching, more or less, that the DSL compiler distills down to a perfect hash).
Part of the issue is that it suggests the code grew in a spaghettified way; that is neither sufficient nor necessary evidence, but absent external constraints (like an entire library developed as a single C header) it suggests that the code organisation is not great.
Hardware is often spaghetti anyway. There are a large number of considerations and conditions that can invalidate the ability to use certain ops, which would change the compilation strategy.
The idea of good abstractions and such falls apart the moment the target environment itself is not a good abstraction.
I find the real question is: are all 16,000 of those lines required to implement the optimization? How much of that is dealing with LLVM's internal representation and the varying complexity of LLVM's other internal structures?
I do too, but I'm pretty sure I've seen worse.
Thank you, bumholes
That one is called scalar evolution, llvm abbreviates it as SCEV. The implementation is relatively complicated.
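Very roughly, and from memory rather than the literal dump format, SCEV models loop values as "add recurrences" written {a0,+,a1,+,a2,...}, meaning the value at iteration n is a0*C(n,0) + a1*C(n,1) + a2*C(n,2) + ...; for the sum-of-integers loop that gives something like:

    i     = {0,+,1}      // value at iteration n: n
    total = {0,+,0,+,1}  // value at iteration n: C(n,2) = n*(n-1)/2

Evaluating the accumulator's recurrence at the loop's trip count is what produces the closed form.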
More similar optimizations: https://matklad.github.io/2025/12/09/do-not-optimize-away.ht...
The beginning of that article is slightly wrong: the compiler should compute N(N-1)/2 (and does), because the original code adds up all the numbers from 0 to N excluding N. The usual formulation in math includes the upper bound: the sum of integers from 1 to N, including N, is N(N+1)/2, so you have to replace N by (N-1) if you want a formula for the sum where the last number is N-1.
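Concretely (a minimal sketch with my own naming, not code from the article):

    // Sums 0 + 1 + ... + (n - 1): the upper bound n is excluded.
    unsigned sum_loop(unsigned n) {
        unsigned total = 0;
        for (unsigned i = 0; i < n; ++i) total += i;
        return total;
    }

    // Matching closed form: n * (n - 1) / 2, not n * (n + 1) / 2
    // (valid as written for n where n * (n - 1) doesn't overflow).
    unsigned sum_closed(unsigned n) {
        return n * (n - 1) / 2;
    }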
Couldn't the compiler optimise this still? Make two versions of the function, one with constant folding and one without. Then at runtime, check the value of the parameter and call the corresponding version.
Yes, a sufficiently smart compiler can always tell you're doing a benchmark and delete it. It's just unlikely.
Compilers can add way more closed forms. Would it be worth it?
https://en.wikipedia.org/wiki/Wilf%E2%80%93Zeilberger_pair
I'm once again surprised at GCC being slower than clang. I would have thought that GCC, which had a 20? year head start would've made faster code. And yet, occasionally I look into the assembly and go "what are you doing?" And the same flags + source into clang is better optimized or uses better instructions or whatever. One time it was bit extraction using shifts. Clang did it in 2 steps: shift left, shift right. GCC did it in 3 I think? I think it maybe shifted right first or maybe did a logical instead of arithmetic and then sign extended. Point is, it was just slower.
I'm not. GCC started out as a work of idealistic licensing purists and was deliberately "obfuscated" to make it hard to extend and embed. That stance has since been softened considerably, but the code generator is still far more complex than it needs to be, and I think that has made it harder to modify for efficiency. Clang is far less ideology-focused and its structure makes implementing optimisations easier.
On the other hand, I find MSVC and especially ICC output to be quite decent, although I have never seen their source code.
Having inspected the output of compilers for several decades, it's rather easy to tell them apart.
GCC and Clang are largely similar when it comes to performance, as each implements passes the other does not. It's always possible to find examples where they optimize a piece of code differently and one comes out ahead of the other.
Compiler know-how, and the resources available during compilation, made very significant progress between the gcc era and the LLVM/clang era.
gcc was and is an incredible achievement, but it is traditionally considered difficult to implement many modern compiler techniques in it. It's at least unpleasant, let's put it this way.
Not sure whether this is generally true. GCC appears to have similar optimizations and I personally find LLVM's code much more intimidating. But it is certainly true that LLVM seems to see more investment. I assume the license may also play a role. For comparison, here is some related code:
https://github.com/gcc-mirror/gcc/blob/master/gcc/tree-chrec... https://github.com/llvm/llvm-project/blob/release/21.x/llvm/...
GCC has almost the same modern compiler techniques implemented.
Did it involve bitfields? GCC is notoriously bad at optimizing them. There are some target-specific optimizations, but pretty much nothing in the middle-end.
It did, yes. On an architecture without bit field extracts.
What's actually way cooler about this is that it's generic. Anybody could pattern match the "sum of a finite integer sequence" but the fact that it's general purpose is really awesome.
It's neat. I wonder if someone attempted detecting a graph coloring problem to replace it with a constant.
Graph coloring is NP-hard so it would be very difficult to replace it with an O(1) algorithm.
If you mean graph coloring restricted to planar graphs, yes it can always be done with at most 4 colors. But it could still be less, so the answer is not always the same.
(I know it was probably not a very serious comment but I just wanted to infodump about graph theory.)
This is really blurring the line between implementation and specification. You may think you're writing the implementation, but it is really a proxy for the specification. In other words, the compiler creates an illusion of an imperative machine.
I will admit I was initially surprised Matt was not already familiar with this behavior given his reputation. I remember discovering it while playing with LLVM intermediate representation 10 years ago in college. I would never have considered myself very knowledgeable about modern compilers, and have never done any serious performance work. In that case it had reduced a recursion to a simple multiplication, which completely surprised me. The fact that Matt did not know this makes me think this pass may only work on relatively trivial problems that he would never have written in the first place, and therefore he would never have witnessed the optimization.
He was: he brought up the very same example in a talk in 2017.
https://www.youtube.com/watch?v=bSkpMdDe4g4&t=2640
Ah that makes much more sense. I guess he means the optimization is surprising when you first discover it, which it certainly was for me!
That's neat.
A hard problem in optimization today is trying to fit code into the things complex SSE-type instructions can do. Someone recently posted an example where they'd coded a loop to count the number of one bits in a word, and the compiler generated a "popcount" instruction. That's impressive.
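If I remember the shape of that example right, the source is roughly this generic idiom (not the exact code from that post); whether you really get a single popcnt depends on the target and flags (e.g. -mpopcnt or an -march that implies it), so it's worth checking in Compiler Explorer:

    // Count set bits one at a time; compilers can recognize this idiom
    // and emit a single population-count instruction when the target has one.
    unsigned count_bits(unsigned long long x) {
        unsigned count = 0;
        while (x != 0) {
            count += x & 1;  // add the lowest bit
            x >>= 1;         // move the next bit down
        }
        return count;
    }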
It may be a different post, but I covered this earlier this month in the same series of blog posts/YouTube videos.
A lot of hardcoding, and making expressions consistent, e.g. transforming a+3 into 3+a for easier pattern matching.
The first thing I had in mind was: the final answer needs to be divided by 2, and keeping the intermediate product from overflowing before that division needs some tedious work.
It's not very tedious. Instead of dividing the product by 2, you can just divide whichever of x or x+1 is even by 2 before multiplying.
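A sketch of that trick (my own code, not lifted from compiler output): exactly one of x and x + 1 is even, so halve that one before multiplying; the division is then exact, and the result wraps the same way the plain unsigned loop would.

    // Sum of 0..x without needing a double-width intermediate product.
    unsigned sum_to(unsigned x) {
        return (x % 2 == 0) ? (x / 2) * (x + 1)   // x even: halve x first
                            : x * ((x + 1) / 2);  // x odd: halve x + 1 first
    }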
Only thing that surprised me was that GCC didn't manage to optimize it. I expected it to be able to do so.
If you now have a function where you call this one with an integer literal, you will end up with a fully inlined integer answer!
Could do that, whether SCEV'd or not, with C++20 consteval, lol.
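Something like this, assuming C++20 (the function name is mine):

    // consteval forces compile-time evaluation, so any call with a literal
    // argument folds to a plain integer constant, regardless of whether the
    // optimizer's SCEV pass would have caught the loop.
    consteval unsigned sum_below(unsigned n) {
        unsigned total = 0;
        for (unsigned i = 0; i < n; ++i) total += i;
        return total;
    }

    static_assert(sum_below(100) == 4950);  // 0 + 1 + ... + 99, done at compile time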
Those are just basic and essential optimizations, nothing too surprising here.
The sum of integers is actually a question I ask developers in interviews (works well from juniors to seniors), with the extra problem of what happens if we were to use floating-point instead of integers.
To those who don't know about compiler optimisation, the replacement with a closed form is rather surprising, I'd say, especially if someone with Matt Godbolt's experience of all people is saying it is surprising.
Also, this series is targeted at more of a beginner audience for compilers, so it's likely to be surprising to that audience, even if not to you.
Gauss supposedly did it when he was 7. The hardest part for the compiler is figuring out that you have a loop that computes that sum and does nothing else important.
Unfortunately I don't have a hiring pipeline filled with Gausses.
It's something we saw in high school; I would expect anyone with a CS degree to recognize this optimization.
I barely know anything about compiler optimization, so I have no clue whether a compiler applying this optimization is surprising or something trivial.
Implementing this in a compiler is nontrivial.
Yes, that was clear to me from the article and the discussion. My point is that to someone who knows about Gauss' formula but doesn't know anything about compilers might not understand what the fuss is about.
Yeah. Pretty basic. Just 14k LOC
https://github.com/llvm/llvm-project/blob/release/21.x/llvm/...
I would've assumed it was hardcoded. Not a generic solution for any loop involving a recurring variable.
https://www.npopov.com/2023/10/03/LLVM-Scalar-evolution.html
"Basic and essential" are interesting ways to describe the field of compiler optimization research.
Are you suggesting that the discovery and implementation of SCEV in LLVM is basic and essential? Or that summing integers in a range is basic and essential?
I spoke in the context of coding those optimizations yourself.
I'm curious what exactly you ask here. I consider myself to be a decent engineer (for practical purposes), but without a CS degree, and I likely would not have passed that question.
I know compilers can do some crazy optimizations, but I wouldn't have guessed they'd transform something from O(n) to O(1). Having said that, I still don't feel this has much relevance to my actual job for the most part. Such performance concerns seem so abstracted away from actual programming by database systems, or by managed offerings like Spark and Snowflake, that unless you intend to work on those systems themselves this knowledge isn't that useful (being aware that these optimizations happen can be, though, for sure).
He thinks it makes him look clever, or more likely he subtly wants people to think "wow, this guy thinks something is obvious when Matt Godbolt found it surprising".
This kind of question is entirely useless in an interview. It's just a random bit of trivia that a potential hire either happens to have come across or happens to remember from math class.
I guess what's surprising here is that compilers are able to perform those optimizations systematically on arbitrary code, not the optimizations themselves, which should be obvious to a human.
Trying to look smart by dissing Matt is not a good idea.
Have you considered that maybe Matt isn't all that surprised by this optimization, but he is excited about how cool it is, and he wants readers of all backgrounds to also be excited about how cool it is, and is just feigning surprise so that he can share a sense of excitement with his audience?
It's writing for effect.
Everybody who has seen any video of Matt knows that.
You can be surprised by things you have known for years.
For example, I am surprised every time I think about JS coalescing, even though I have known about it for decades.
The one that always gets me is what Truffle/JRuby was capable of, ten years ago:
https://x.com/chrisgseaton/status/619885182104043520
https://x.com/chrisgseaton/status/619888649866448896
I dunno he can honestly be quite a jerk sometimes
AKA you get exactly the opposite...
Whether they get the question exactly right and can pinpoint the specific compiler passes or algebraic properties responsible for reductions like this is totally irrelevant and not what you're actually looking for or asking about. It's a very good jumping-off point for a conversation about optimization, and a test of whether they're the type of developer who has ever looked at the assembly produced in their hot path or not.
Anyone who dumbly suggests that loops in source code will always result in loops in assembly doesn't have a clue. Anyone who throws their hands up and says, "I have no idea, but I wonder if there's some loop invariant or algebraic trick that can be used to optimize this, let's think about it out loud for a bit" has taken a compiler class and gets full marks. Anyone who says, "I dunno, let's see what godbolt does and look through the llvm-opt pane" gets an explicit "hire this one" in the feedback to the hiring manager.
It's less about what they know and more about whether they can find out.
So in other words, it isn't "basic and essential optimizations" that you would expect even a junior engineer to know (as your comment implies), but a mechanism to trigger a conversation to see how they think about problems. In fact, it sounds like something you wouldn't expect them to know.
I didn't write the GP comment. I wouldn't call this basic and essential, but I would say that compilers have been doing similar loop simplifications for quite some time. I'd expect any mid to senior developer with C/C++ on their resume to at least consider the possibility that the compiler can entirely optimize away a loop.
> In fact, it sounds like something you wouldn't expect them to know.
I'd go a step further: I don't think anyone, no matter how experienced they are, can confidently claim that optimized assembly will or won't be produced for a given loop. That's why the best answer above is "I dunno". If performance really matters, you have to investigate and confirm that you're getting good code. You can have an intuition for what you think might happen, and that's a useful skill to have on its own, but it's totally useless if you don't also know how to confirm your suspicions.
My question is in the context of doing those optimizations yourself, understanding what can be done to make the code more efficient and how to code it up, not the compiler engineering to make that happen.
Yikes, gross. That's like an option of last resort IMO. I'd rather maintain the clean loop-based code unless I had evidence that the compiler was doing the wrong thing and it was in my critical path.
The compiler is only able to perform optimizations that have no effect on observable behaviour.
For example it can only parallelize code which is inherently parallelizable to begin with, and unless you design your algorithm with that in mind, it's unlikely to be.
My belief is that it's better to be explicit, be it with low-level or high-level abstractions.
My interview aims to assess whether the candidate understands that the dependency of each iteration on the previous one prevents effective utilization of a superscalar processor, knows the ways to overcome that, and whether the compiler is able to optimize that automatically, and if so when it absolutely cannot and why.
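Concretely, the kind of thing I'm fishing for looks roughly like this (a sketch; the exact unroll factor and style vary): break the single dependency chain into independent accumulators so the adds can overlap on a superscalar core. With floats the compiler can't legally reassociate the serial version without something like -ffast-math, so the manual version genuinely changes what's possible.

    #include <cstddef>

    // One accumulator: every add depends on the previous one, so the loop
    // runs at the latency of one floating-point add per element.
    float sum_serial(const float* a, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i) s += a[i];
        return s;
    }

    // Four independent accumulators: the chains overlap, so several adds
    // can be in flight at once. Note this changes the rounding order, so
    // the result may differ slightly from the serial version.
    float sum_unrolled(const float* a, std::size_t n) {
        float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < n; ++i) s0 += a[i];  // leftover elements
        return (s0 + s1) + (s2 + s3);
    }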
I generally focus more on sum of arbitrary data, but I used to also ask about a formulaic sum (linear to constant time) as an example of something a compiler is unlikely to do.
My thinking is that I expect good engineers to be able to do those optimizations themselves rather than rely on compilers.
Since GCC is lacking such an essential optimization, you should consider having one of your junior interviewees contribute this basic optimization to mainline.
For Matt, the creator of compiler explorer, those are surprises.
For you, they are essentials.
You and the juniors you hire must have deeper knowledge than him.
You don't have to be an expert in compiler design to make godbolt in fairness, although he does know a lot.
I spend a lot of time looking at generated assembly and there are some more impressive ones.
As I said, you must have deeper knowledge than him.
It would be great if you shared it with the world like Matt does instead of being smug about it.
What type of positions are you interviewing for? Software development is a big tent and I don't think this would be pertinent in a web dev interview, for example.
To provide the solution to the second part of the question, there is no closed-form solution. Since floating-point math is not associative, there's no O(1) optimization that can be applied that preserves the exact output of the O(n) loop.
Technically there is a closed-form solution as long as the answer is less than 2^24 for a float32 or 2^53 for a float64, since below those caps all integers can be represented exactly by a floating-point number, and integer addition, even with floating-point numbers, is exact if the result stays below those caps. I doubt a compiler would catch that one, but it technically could do the optimisation and get the exact same bit-for-bit answer. If the result were initialised to a non-integer number, this would of course no longer be true.
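A quick way to convince yourself of that (rough sketch): for small n every intermediate value is an integer well below 2^24, so the float loop is exact and matches the closed form bit for bit.

    #include <cassert>

    int main() {
        const int n = 1000;
        float loop_sum = 0.0f;
        for (int i = 0; i < n; ++i) loop_sum += (float)i;  // exact: all values < 2^24
        float closed = (float)(n * (n - 1) / 2);           // 499500, exactly representable
        assert(loop_sum == closed);                        // intentional exact comparison
        return 0;
    }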
A very good point! I didn't think of that.
This is why you have options like -ffast-math, to allow more aggressive but not 100% identical outcome optimizations.
I'm pretty sure making an algorithm that converts loops to closed forms (I'm sure it detects much more than just a summation) is a little bit complicated.
Maybe you have much more experience than Mr Godbolt in compilers.
Nothing is surprising once you know the answer. It takes some mental gymnastics to put yourself in someone else's shoes from before they discovered it, which is what makes it seem a lot less "basic".
Everyone knows the Gauss Summation formula for sum of n integers i.e. n*(n+1)/2 but it is just nice to see it in GCC vs. Clang.
https://xkcd.com/1053/
I'm actually surprised that gcc doesn't do this! If there's one thing compilers do well, it's pattern matching on code and replacing it with a more efficient version; just try pasting things from Hacker's Delight and watch it always canonicalise them to the equivalent, fastest machine code.
This particular case isn't really due to pattern matching -- it's a result of a generic optimization that evaluates the exit value of an add recurrence using binomial coefficients (even if the recurrence is non-affine). This means it will work even if the contents of the loop get more exotic (e.g. if you perform the sum over x * x * x * x * x instead of x).
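For example, a toy loop like this one (mine, in the spirit of the above) also gets evaluated to a closed form, since the accumulator is just a higher-degree add recurrence; worth confirming on Compiler Explorer for the specific compiler version, of course:

    // Sum of i^5 for i in [0, n): SCEV models "total" as a polynomial
    // add recurrence and can compute its exit value directly, so the
    // whole loop can be replaced with straight-line arithmetic.
    unsigned sum_of_fifth_powers(unsigned n) {
        unsigned total = 0;
        for (unsigned i = 0; i < n; ++i)
            total += i * i * i * i * i;
        return total;
    }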
Doing something like that with a pattern is obvious, but also useless, as it will catch very limited cases. For the example presented, it is known there is a closed form (it's believed Gauss even discovered it at 6 years old). I'm sure this optimization will catch many other things, so it is not trivial at all.
> I love that despite working with compilers for more than twenty years, they can still surprise and delight me.
This kind of optimization, complete loop removal and computing the final value for simple math loops, is at least 10 years old.
10 years is not a lot; it's almost "yesterday". Things done in a field even 10 years ago can still surprise experts in that field. With 30+ years of experience I still find things that are relatively new to me, yet are maybe 15 years old.
In topics like compiler optimization, it's not like there are many books that describe this kind of algorithm.
Learning something old can be surprising. Enjoying that learning can be delightful.
Seems like the author is both surprised and delighted with an optimization they learned of today. Surely you've been in the same situation before.
This exact content was posted a few months ago. Is this AI or just a copy paste job?
You're probably thinking of another post (https://xania.org/202512/11-pop-goes-the-weasel-er-count) where an entire loop was optimized to a single instruction
This exact content was only posted today? :)