Fastbelt: The high-speed DSL toolkit for Go
Fastbelt is written entirely in Go, which marks a deliberate shift away from our usual TypeScript-based stack. This decision was not made lightly. In our previous post, we examined where Langium and Xtext start to struggle: large workspaces, heavy models, and the limitations of today’s runtimes with regard to parallelism and memory efficiency.
Fastbelt is our answer to that. It has been built from the ground up with performance, parallelism, and reduced resource usage in mind.
Our initial benchmarks show significant gains across the board, particularly when scaling up. But rather than taking our word for it, take a look at the numbers below and judge for yourself.
Performance advantages
There is little sense in calling Fastbelt a high-speed framework if we don't have the numbers to back it up. We set up a few benchmarks on small (250KB) and large (12MB) workspaces to see how it fares against Langium and Xtext. For now, we are mostly concerned with out-of-the-box performance, i.e. without any language customizations. For this benchmark in particular, we used the well-known State Machine language as an example.
To the point: how much faster than Xtext & Langium are we?
We already knew that Xtext beats Langium on pure throughput. That was a trade-off we accepted willingly in exchange for better developer experience and the general flexibility of the framework and runtime.
It turns out we can do much better than both. With Fastbelt, paying attention to performance clearly pays off: on small workspaces, the workspace build takes roughly 12⨉ less time than Xtext and 21⨉ less time than Langium. On larger workspaces, the gap grows even more pronounced: 26⨉ versus Xtext and 33⨉ versus Langium.
Running larger benchmarks poses problems for both Xtext and Langium due to their limited default heap space. Doubling the last benchmark's workspace to 25MB already pushes Langium past the 4GB memory limit Node.js imposes by default. Increasing the heap is possible, but results in super-linear run times, as the garbage collector of the respective runtime has to churn through more and more memory as the workspace grows. This is visible even on workspaces that easily fit into memory:
Even though Go also employs a garbage collector, benchmarking Fastbelt on even larger workspaces (up to 100MB) has shown none of these issues: per-file build time stayed stable at 0.14ms per 5KB file, which works out to roughly 3 seconds for the full 100MB workspace.
Aside from pure throughput, one of the most important metrics when looking at language server performance is the first response latency. Similar to the First Meaningful Paint in web development, when a user opens a file in their editor, they expect that the language server for that file type becomes responsive to their requests as quickly as possible. We measure this by:
- Starting the language server process
- Sending the initialize and initialized requests/notifications
- Sending a textDocument/didOpen notification for a small, simple file
- Awaiting the first textDocument/publishDiagnostics notification from the server
Diagnostics/Validation markers are usually the first language feature seen by a user aside from syntax highlighting. Only once the initial build has finished can other LSP features, such as jumping to references or definitions, run without requiring any additional heavy computations.
We were already pretty happy with how fast Langium’s startup performance was compared to Xtext. Not too surprisingly, the natively compiled binary obtained with Fastbelt beats out Langium quite handily. Of course, these are idealized benchmarking numbers. When actually employing a language server in IDEs such as VS Code, the overhead of the plugin system will mostly eliminate the difference between Fastbelt and Langium—aside from the fact that an 8ms vs 80ms startup is essentially unnoticeable for most users anyway.
Finally, one less prominent metric that can be brought up here is the final artifact size of a language server produced by each framework:
Note that Langium and Xtext each require a runtime: Node.js or Bun for Langium, a JRE for Xtext. Fastbelt, as a native binary, needs neither. And while Langium was designed to be used in VS Code/Theia and web environments, Fastbelt places no such constraints on where it can run.
The benchmarks can be found in this repo. The Fastbelt benchmark can be found here.
Fastbelt, Langium, Xtext – what to choose?
Langium remains our go-to choice for most language projects, and for good reason. It combines a mature, production-proven architecture with a rapidly growing ecosystem, the latest additions to which are Langium AI and Typir. Langium has been widely adopted, with millions of weekly downloads and counting, and it fits naturally into modern web applications, where many DSL-based tools live today. In practice, this means you’re not just choosing a library, but stepping into an ecosystem that covers everything from language design to rich, browser-based tooling.
Fastbelt comes into play when scale and performance stop being edge cases and become the core challenge. If you’re working with very large files, extensive workspaces, or workloads where latency and memory usage directly affect usability, Fastbelt is the solution. Building on the lessons learned from Langium and Xtext, it reimagines the architecture for high-throughput, highly parallel processing, ensuring that language tooling remains responsive even under heavy load.
We will continue to invest heavily in Langium and we are actively evolving its ecosystem and capabilities. Fastbelt is not a replacement for Langium, but rather a complementary tool that addresses a different class of problems where performance constraints become a primary concern. Together, they provide a broader range of options, so you can choose the most suitable approach for your specific needs.
Our recommendation for Xtext is more specific. It remains a robust and dependable option if your environment is centered around Java, or if you depend on tight integration with EMF-based tools and existing Eclipse ecosystems. In these contexts, Xtext plays to its strengths. However, newer projects will typically benefit from the flexibility of Langium or the performance focus of Fastbelt, depending on their requirements.
In rare cases, it is best to skip a full-fledged language toolkit altogether. We’ve seen this in scenarios where the language relies heavily on preprocessing or other text-level transformations. In such situations, building directly on a parser library—or even implementing a custom parser—can lead to a simpler and more efficient solution.
Features and roadmap
Fastbelt is already capable enough to handle most simple languages. Here is a breakdown of what is available today.
Grammar language. Fastbelt comes with a declarative grammar language that will feel immediately familiar to anyone who has worked with Xtext or Langium. You define your language’s syntax and structure in a single grammar file, from which all further artifacts are derived. The grammar language is completely bootstrapped and powered by Fastbelt itself.
Code generation. From your grammar, Fastbelt’s generator produces the lexer, parser, and cross-reference linking code needed to process files in your language.
Cross-reference resolution. Cross-references between language elements are resolved out of the box, with no manual wiring required. Fastbelt handles the indexing and linking across files in the workspace, which is a prerequisite for most non-trivial language features. Specialized behavior can be achieved by writing custom scoping logic.
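To illustrate what indexing and linking mean here (a simplified model, not Fastbelt's actual data structures), cross-reference resolution can be pictured as two passes over the workspace: first index every named declaration across all files, then resolve each by-name reference against that index.

```go
package main

import "fmt"

// Decl is a named element exported by a file.
type Decl struct {
	Name string
	File string
}

// Ref is a by-name reference that needs linking.
type Ref struct {
	Name string
	File string
}

// link builds a workspace-wide index of declarations, then resolves
// each reference against it. Unresolved names are reported separately.
func link(decls []Decl, refs []Ref) (resolved map[Ref]Decl, unresolved []Ref) {
	index := make(map[string]Decl, len(decls))
	for _, d := range decls {
		index[d.Name] = d // pass 1: index declarations from every file
	}
	resolved = make(map[Ref]Decl)
	for _, r := range refs {
		if d, ok := index[r.Name]; ok {
			resolved[r] = d // pass 2: link references across file boundaries
		} else {
			unresolved = append(unresolved, r)
		}
	}
	return resolved, unresolved
}

func main() {
	decls := []Decl{{Name: "idle", File: "states.sm"}}
	refs := []Ref{{Name: "idle", File: "main.sm"}, {Name: "missing", File: "main.sm"}}
	res, unres := link(decls, refs)
	fmt.Println(len(res), len(unres)) // 1 1
}
```

Custom scoping logic would hook into pass 2, deciding which declarations are visible to a given reference instead of consulting one flat global index.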
LSP integration. Fastbelt includes an initial Language Server Protocol integration, covering a set of foundational requests including Go to Definition and Find References. This is enough to get a functional editor experience up and running, and forms the base on which richer LSP support will be built later.
Up next
Right now, we’re building up the framework—a lot of features are still missing and things might not behave as expected. These are some of the features we will work on in the upcoming weeks and months:
Parser Error Recovery. Since Fastbelt comes with its own parser generator, we really had to start from scratch. Currently, parser errors will stop the parser and report back with an incomplete AST. In the future, we want Fastbelt to be able to recover gracefully from parsing errors with appropriate error messages.
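For context, one common strategy here is panic-mode recovery: when the parser hits an unexpected token, it records a diagnostic and skips ahead to a synchronization token (such as a statement keyword or delimiter) before resuming. A minimal sketch, where the token names and sync set are illustrative rather than Fastbelt internals:

```go
package main

import "fmt"

// syncSet lists tokens at which the parser can safely resume,
// e.g. statement starters and block delimiters.
var syncSet = map[string]bool{";": true, "state": true, "}": true}

// recoverToSync skips tokens until a synchronization token is found,
// returning the resume position and the tokens that were discarded.
func recoverToSync(tokens []string, pos int) (int, []string) {
	var skipped []string
	for pos < len(tokens) && !syncSet[tokens[pos]] {
		skipped = append(skipped, tokens[pos])
		pos++
	}
	return pos, skipped
}

func main() {
	tokens := []string{"state", "idle", "@@garbage", "junk", ";", "state", "busy"}
	// Suppose the parser fails at position 2 ("@@garbage"):
	pos, skipped := recoverToSync(tokens, 2)
	fmt.Println(pos, skipped) // 4 [@@garbage junk]
}
```

The skipped tokens become an error diagnostic, while parsing continues at the sync token, so the rest of the file still contributes to the AST.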
Unbounded Lookahead. Similar to the initial version of Langium, Fastbelt features an LL(k) parser. We plan to implement LL(*) to enable unbounded lookahead scenarios, which increases the range of possible grammars and languages Fastbelt can support.
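The distinction can be pictured with a lookahead buffer: an LL(k) parser may only peek a bounded k tokens ahead when choosing between grammar alternatives, whereas LL(*) may scan arbitrarily far. A toy bounded buffer, purely for illustration:

```go
package main

import "fmt"

// lookahead provides bounded peeking over a token stream,
// as in an LL(k) parser.
type lookahead struct {
	tokens []string
	pos    int
	k      int
}

// peek returns the token i positions ahead, or "" if i exceeds the
// lookahead bound k or the end of input. An LL(*) parser would drop
// the bound and scan as far ahead as needed to pick an alternative.
func (l *lookahead) peek(i int) string {
	if i > l.k || l.pos+i-1 >= len(l.tokens) {
		return ""
	}
	return l.tokens[l.pos+i-1]
}

func main() {
	l := &lookahead{tokens: []string{"state", "idle", "{", "entry"}, k: 2}
	fmt.Println(l.peek(1), l.peek(2)) // peek(3) would yield "", as it exceeds k=2
}
```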
Profiling. To keep the promise of the fastest language tooling around, we want adopters of Fastbelt to be well equipped to find bottlenecks and performance issues. For this, we need proper profiling tools built into the framework.
Language Testing. Building comprehensive testing suites when developing a language is difficult. That’s why we want to provide utilities for testing any language.
Project Scaffolding. For Langium, we used Yeoman to provide simple project scaffolding. We want to do something similar using the Fastbelt CLI.
More LSP features. Fastbelt supports just a few commonly used language server protocol features out-of-the-box. We want to add more soon, such as code completion, outline information and semantic highlighting.
See you at OCX!
If you’d like to go deeper, we’ll be sharing more at Open Community Experience (OCX), the Eclipse Foundation’s flagship conference.
In the session “Learning from 10 Years of Building Programming Languages” on April 22, 2026 (15:00–15:45, Studio 2), Mark will walk through the lessons behind Fastbelt and what they mean for the future of language tooling.
Fastbelt is still in its early days, but the direction is clear. Our goal is to make it production-ready within this year, providing a solid option for performance-critical language projects. Follow its GitHub repository for updates on the development.
Curious where this could take your tooling? Let’s talk.
About the Authors
Mark Sujew
Mark is the driving force behind a lot of TypeFox's open-source engagement. He leads the development of the Eclipse Langium and Theia IDE projects. Away from his day job, he enjoys bartending and music, is an avid Dungeons & Dragons player, and works as a computer science lecturer at a university in Hamburg.
Dr. Miro Spönemann
Miro joined TypeFox as a software engineer right after the company was founded. Five years later he stepped up as a co-leader and is now eager to shape the company's future direction and strategy. Miro earned a PhD (Dr.-Ing.) at the University of Kiel and constantly pursues innovation in engineering tools.


