I'm often skeptical of the desire to create a lot of passes. In the early Vale compiler, and in the Mojo compiler, we were paying a lot of interest on tech debt because features were put in the wrong pass. We often incurred more complexity trying to make a concept work across passes than we would have had in fewer, larger passes. I imagine this also has analogies to microservices in some way. Maybe other compiler people can weigh in here on the correct number/kind of passes.
munificent 4 hours ago [-]
I work on Dart. I don't work directly on the compilers much, but I've gathered from talking to my teammates who do that, yeah, generally fewer passes is better.
From a maintenance perspective, it's really appealing to have a lot of small clearly defined passes so that you can have good separation of concerns. But over time, they often end up needing to interact in complex ways anyway.
For example, you might think you can do lexical identifier resolution and type checking in separate passes. But then the language gets extension methods and now it's possible for a bare identifier inside a class declaration to refer to an extension method that can only be resolved once you have some static type information available.
Or maybe you want to do error reporting separately from resolution and type checking. But in practice, a lot of errors come from resolution failure, so the resolver has to do almost all of the work to report that error, stash that resulting data somewhere the next pass can find it, and then the error reporter looks for that and reports it.
Instead of nice clean separate passes, you sort of end up with one big tangled pass anyway, but architected poorly.
Also, the performance cost of multiple passes is not small. This is especially true if each pass is actually converting to a new representation.
spankalee 2 hours ago [-]
Hey - I used to be on your team long ago :)
> For example, you might think you can do lexical identifier resolution and type checking in separate passes. But then the language gets extension methods and now it's possible for a bare identifier inside a class declaration to refer to an extension method that can only be resolved once you have some static type information available.
Some of this is language design though. If you make it a requirement that scope analysis can be done in isolation (so that it's parallelizable), then you design things like imports and classes so that you never have identifiers depend on types or other libraries.
I've been working on a language lately, and thinking about passes - mostly where they run and what previous state they depend on - has been a big help in designing the language so the compiler can be parallel-, cache-, and incremental-friendly.
eholk 1 hour ago [-]
I used the nanopass framework for my language when I was in grad school and loved it. I forget the exact count but I think we had 40 or 50 passes. Generally each pass either did some analysis that would be consumed by a later pass or it rewrote some higher level concept or feature in terms of more primitive operations.
Nanopass had a DSL for describing what forms you delete from each intermediate language and which forms you add. Each pass was generally pretty small then, usually just replacing one form with some set of other forms.
Because each pass was really small it was pretty easy to reorder them. Sometimes we figured out that if we moved one optimization sooner, then later passes worked better. Or we realized some analysis was useful somewhere else so we rearranged the passes again. The pass ordering made it really clear which analysis results are valid at each point in the compilation process.
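The nanopass framework itself is a Racket/Scheme DSL, but the shape of one such micro-pass can be sketched in Python. This is a toy analogue, not the real DSL: the "language definition" is just which constructors a pass is allowed to emit, and the pass removes exactly one form (here, desugaring `let` into immediate lambda application).

```python
from dataclasses import dataclass

# Source IR contains Let; the target IR removes it, keeping only
# Var/Const/Lambda/App. In the real nanopass DSL you would declare
# the target language as "L0 minus the (let ...) form".

@dataclass(frozen=True)
class Var:    name: str
@dataclass(frozen=True)
class Const:  value: int
@dataclass(frozen=True)
class Lambda: param: str; body: object
@dataclass(frozen=True)
class App:    fn: object; arg: object
@dataclass(frozen=True)
class Let:    name: str; value: object; body: object  # source IR only

def remove_let(e):
    """One tiny pass: (let x v b) => ((lambda x b) v)."""
    if isinstance(e, Let):
        return App(Lambda(e.name, remove_let(e.body)), remove_let(e.value))
    if isinstance(e, Lambda):
        return Lambda(e.param, remove_let(e.body))
    if isinstance(e, App):
        return App(remove_let(e.fn), remove_let(e.arg))
    return e  # Var / Const pass through unchanged

# (let x 1 (f x))  ==>  ((lambda x (f x)) 1)
out = remove_let(Let("x", Const(1), App(Var("f"), Var("x"))))
```

Because each pass names the single form it eliminates, reordering passes is mostly a matter of checking that the form a pass consumes still exists at that point in the pipeline.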
pfdietz 5 hours ago [-]
Yes, and a similar question is the organization of the thing being acted on by the passes. If I understand correctly, this is in Scheme and the things being acted on are trees with pointers. A performance-optimized compiler, on the other hand, will probably use some sort of array-based implementation of trees.
There's also a question of data about the trees (like, a flow graph) being recomputed for each nanopass. Also expensive.
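For illustration, here is one way the array-based alternative can look (the layout choices here are mine, not any particular compiler's): nodes live in parallel arrays, children are integer indices instead of pointers, and a pass becomes a cache-friendly linear loop.

```python
# Flat, array-based expression tree: each node is a row in parallel
# arrays; children are integer indices, not pointers. Nodes are appended
# in post-order, so one forward loop visits children before parents.

kinds, lhs, rhs, vals = [], [], [], []

def add(kind, a=-1, b=-1, v=0):
    kinds.append(kind); lhs.append(a); rhs.append(b); vals.append(v)
    return len(kinds) - 1  # the new node's index

# Build (2 + 3) * 4 in post-order:
n2   = add("const", v=2)
n3   = add("const", v=3)
plus = add("add", n2, n3)
n4   = add("const", v=4)
root = add("mul", plus, n4)

# A "pass" is just a loop over the arrays; results land in one more array.
value = [0] * len(kinds)
for i in range(len(kinds)):      # post-order: operands are already done
    if kinds[i] == "const":
        value[i] = vals[i]
    elif kinds[i] == "add":
        value[i] = value[lhs[i]] + value[rhs[i]]
    elif kinds[i] == "mul":
        value[i] = value[lhs[i]] * value[rhs[i]]

print(value[root])  # 20
```

Side data like a flow graph then fits the same mold: one more array indexed by node id, rather than a separate pointer structure that each nanopass has to rebuild.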
soegaard 4 hours ago [-]
Nanopass uses structures internally to represent the programs.
The nanopass DSL just gives the user a nicer syntax to specify the transformations.
pfdietz 2 hours ago [-]
So, a conventional linked representation of a tree (but not a tree of cons cells).
armchairhacker 3 hours ago [-]
I wrote a small nanopass-style compiler (with many small passes) and had the same experience: lots of redundancy, and hard to debug because it wasn't clear how the passes interacted.
Admittedly, this framework has extensive metaprogramming (it's a Racket DSL), so it probably has much less redundancy, but is probably even harder to debug.
To an extent, it's impossible to implement nanopass in a way that's neither redundant nor confusing without a projectional editor. Because either each pass's AST is fully redefined, which adds redundancy, or most passes' ASTs are defined in many places, which adds confusion. But my wild speculation is that one day projectional editors will gain popularity, and someone will make a compiler where one can observe and debug individual passes.
onlyrealcuzzo 6 hours ago [-]
Do you have an article on lessons learned?
I'm creating a language/compiler now, and I'm quite certain that I did not have enough passes initially, but I hope I'm at a good spot now - but time will tell.
I wonder if there's some implicit wisdom that layering/modularizing incurs some communication cost that can cancel all the benefits.
jasonjmcghee 5 hours ago [-]
This is a question folks are asking about in terms of organization building too.
Bottlenecks are changing and it's pretty interesting.
LegNeato 4 hours ago [-]
Why do passes anymore when we have invented egraphs?
armchairhacker 2 hours ago [-]
Egraphs are for optimization passes, which operate on the same IR, and (without egraphs) may undo and/or prevent each other and are repeated for more optimization.
Nanopass is compiler passes, which each have their own IR, and run once in a fixed sequence.
Egraphs also require the IR to be defined a specific way, which prevents some optimizations. My understanding of https://github.com/bytecodealliance/rfcs/blob/main/accepted/... is that cranelift’s egraph optimizations are pure expression reordering/duplication/deduplication and rewrite rules.
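The core move that distinguishes an e-graph from a destructive rewrite pass can be sketched very small. This toy (mine, not Cranelift's or egg's, and omitting congruence closure/rebuilding) hash-conses e-nodes and keeps a union-find over e-classes, so applying "a+b == b+a" merges the two forms instead of replacing one with the other.

```python
# Toy e-graph core: hash-consed e-nodes plus union-find over e-classes.
# A rewrite MERGES equivalence classes rather than destroying a form,
# which is why rewrites can't undo each other the way passes can.

parent = {}                      # union-find over e-class ids
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x
def union(a, b):
    parent[find(a)] = find(b)

table = {}                       # (op, child-classes) -> e-class id
def add(op, *kids):
    key = (op, tuple(find(k) for k in kids))
    if key not in table:
        new = len(parent)
        parent[new] = new
        table[key] = new
    return table[key]

x, y = add("x"), add("y")
e1 = add("+", x, y)              # x + y
e2 = add("+", y, x)              # y + x: a distinct e-class at first

union(e1, e2)                    # apply the rewrite a+b == b+a
print(find(e1) == find(e2))      # True: both forms share one class now
```

Extraction (choosing the cheapest representative from each class) and saturation loops sit on top of this, and that is roughly the machinery the expression-rewriting style of optimization requires from the IR.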
skybrian 3 hours ago [-]
It's the first I've heard of them. Looks like the research goes back to 1980, but good libraries seem fairly new?
I agree with the notion that having multiple passes makes compilers easier to understand and maintain, but finding the right number of passes is the real challenge here.
The optimal number of passes/IRs depends heavily on what language is being compiled. Some languages naturally warrant this kind of an architecture that would involve a lot of passes.
Compiling Scheme for instance would naturally entail several passes.
It could look something like the following:

Lexer -> Parser -> Macro Expander -> Alpha Renaming -> Core AST (Lowering) -> CPS Transform -> Beta / Eta Reduction -> Closure Conversion -> Codegen
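One middle stage of a Scheme pipeline like that, the CPS transform, can be sketched on a toy expression form. The tuple encoding and names here are hypothetical, not any real compiler's, and it covers only atoms and single-argument calls.

```python
# Toy CPS transform: every intermediate result is handed to an explicit
# continuation, so evaluation order becomes syntactic. Expressions are
# int | str | ("call", f, arg); continuations are terms or names.

import itertools
fresh = (f"k{i}" for i in itertools.count())   # fresh continuation vars

def cps(expr, k):
    if isinstance(expr, (int, str)):           # atoms are already values
        return ("apply-cont", k, expr)
    if expr[0] == "call":
        _, f, arg = expr
        kf, ka = next(fresh), next(fresh)
        # Evaluate f, then arg, then call, threading the outer k through.
        return cps(f, ("cont", kf,
                 cps(arg, ("cont", ka,
                     ("call/k", kf, ka, k)))))
    raise ValueError(expr)

print(cps(7, "halt"))     # ('apply-cont', 'halt', 7)
out = cps(("call", "g", "x"), "halt")
```

After this stage, beta/eta reduction and closure conversion operate on terms where control flow is fully explicit, which is what makes them mechanical.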
Wouldn't this kind of architecture yield a slower compiler, regardless of output quality? Conceptually, trying to implement the fewest passes, with each doing as much work as possible, would make more sense to me.
bjoli 2 hours ago [-]
Optimization level 2 in Chez Scheme does about 100 KLOC/s on my pretty modest machine, while also producing code that is pretty darn fast.
marssaxman 3 hours ago [-]
There is nothing stopping you from building an old-fashioned single-pass compiler, if compile time is your only concern. The code it generates just wouldn't be very good.
norir 50 minutes ago [-]
This highly depends on the language and your skill as a compiler writer. You can write a single-pass assembler that generates great code, but you of course have to write the low-level code yourself (including manual register assignment). To do decent automatic register assignment, I agree you need at least two passes, but not 10 or more.
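The classic reason even an assembler wants a second pass is forward references: a branch target's address isn't known until the later code has been laid out. A toy two-pass sketch for an invented mini-ISA (all of it hypothetical):

```python
# Toy two-pass assembler. Pass 1 only computes the address of every
# label; pass 2 emits code with all labels known, so a forward branch
# like "jmp end" below resolves. A strict single pass couldn't encode it
# without backpatching.

def assemble(lines):
    labels, addr = {}, 0
    for line in lines:                       # pass 1: layout only
        if line.endswith(":"):
            labels[line[:-1]] = addr
        else:
            addr += 1                        # every instruction is 1 word
    code = []
    for line in lines:                       # pass 2: emit
        if line.endswith(":"):
            continue
        op, *args = line.split()
        code.append((op, *(labels.get(a, a) for a in args)))
    return code

prog = ["start:", "jmp end", "nop", "end:", "halt"]
print(assemble(prog))   # [('jmp', 2), ('nop',), ('halt',)]
```

Register assignment is analogous: the first pass observes where values live and die, the second assigns registers with that knowledge in hand.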
ape4 3 hours ago [-]
Can it only make compilers for Lisp-like languages?
https://andykeep.com/pubs/dissertation.pdf
Also see this text:
https://www.cs.umd.edu/class/fall2025/cmsc430/Notes.html
https://blog.sigplan.org/2021/04/06/equality-saturation-with...
[1] The acyclic e-graph: Cranelift's mid-end optimizer https://news.ycombinator.com/item?id=47717192