I'm definitely excited to see 64-bit as a default part of the spec. A lot of web apps have been heavily restricted by this, in particular any online video editors. We see a bunch of restrictions due to the 32-bit cap today here at Figma. One thing I'm curious about, though, is whether mobile devices will keep their addressable per-tab memory cap the same. It's often OS-defined rather than tied to the 32-bit address space.
Unfortunately, Memory64 comes with a significant performance penalty because the wasm runtime has to check bounds (which wasn't necessary on 32-bit as the runtime would simply allocate the full 4GB of address space every time).
But if you really need more than 4GB of memory, then sure, go ahead and use it.
I still don't understand why it's slower to mask to 33 or 34 bit rather than 32. It's all running on 64-bit in the end isn't it? What's so special about 32?
Actually, runtimes often allocate 8GB of address space because WASM has a [base32 + index32] address mode where the effective address could overflow into the 33rd bit.
On x86-64, the start of the linear memory is typically put into one of the two remaining segment registers: GS or FS. Then the code can simply use an address mode such as "GS:[RAX + RCX]" without any additional instructions for addition or bounds-checking.
They might have meant the lack of true 64-bit pointers? IIRC the Chrome wasm runtime used tagged pointers, which come with an access cost of having to mask off the top bits. I always assumed that was the reason for the 32-bit specification in v1.
Seriously though, I’ve been wondering for a while whether I could build a GCC for x86-64 that would have 32-bit (low 4G) pointers (and no REX prefixes) by default and full 64-bit ones with __far or something. (In this episode of Everything Old Is New Again: the Very Large Memory API[1] from Windows NT for Alpha.)
Unfortunately the obvious `__attribute__((mode(...)))` errors out if anything but the standard pointer-size mode (usually SI or DI) is passed.
Or you may be able to do it based on x32, since your far pointers are likely rare enough that you can do them manually. Especially in C++. I'm pretty sure you can just call "foreign" syscalls if you do it carefully.
Especially how you could increase the segment value by one or the offset by 16 and you would address the same memory location. Think of the possibilities!
And if you wanted more than 1MB you could just switch memory banks[1] to get access to a different part of memory. Later there was a newfangled alternative[2] where you called some interrupt to swap things around but it wasn't as cool. Though it did allow access to more memory so there was that.
Then virtual mode came along and it's all been downhill from there.
It looks like memories have to be declared up front, and the memory.copy instruction takes the indices of the memories to copy between as immediates. So I guess you can't use it to allocate dynamic buffers. But maybe you could decide memory 0 = heap and memory 1 = pixel data or something like that?
I know you're in this for the satire, but it's less about the webapps needing the memory and more about the content - that's why I mentioned video editing webapps.
For video editing, 4GiB of completely uncompressed 1080p video in memory is only 86 frames, or about 3-4 seconds of video. You can certainly optimize this, and it's rare to handle fully uncompressed video, but there are situations where you do need to buffer this into memory. It's why most modern video editing machines are sold with 64-128GB of memory.
In the case of Figma, we have files with over a million layers. If each layer takes 4kb of memory, we're suddenly at the limit even if the webapp is infinitely optimal.
Apparently with 24 bytes per pixel instead of bits :)
Although to be fair, there's HDR+ and DV, so probably 4(RGBA/YUVA) floats per pixel, which is pretty close..
Looking at the present, I assume we need to think about running local LLMs in the browser. Just a few days ago I submitted an article about that [1].
> Garbage collection. In addition to expanding the capabilities of raw linear memories, Wasm also adds support for a new (and separate) form of storage that is automatically managed by the Wasm runtime via a garbage collector. Staying true to the spirit of Wasm as a low-level language, Wasm GC is low-level as well: a compiler targeting Wasm can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm. But that’s it.
It's very refreshing and good to see WASM embracing GC in addition to non-GC support. This approach is similar to the D language, where both non-GC and GC are supported with fast compilation and execution.
By the way now you can generate WASM via Dlang compiler LDC [1].
When is WASM finally going to be able to touch the DOM? It feels like that was the whole point of WASM, and instead it's become a monster of its own that barely has anything to do with the web anymore. When can we finally kill JavaScript?
Would be great for high performance web applications and for contexts like browser extensions where the memory usage and performance drain is real when multiplied over n open tabs. I'm not sure how code splitting would work in the wasm world, however.
v8 could be optimized to reduce its memory footprint if it detects that no JavaScript is running - or wasm-only applications could use an engine like wasmer and bypass v8 entirely.
Another factor is that web technologies are used to write desktop applications via Electron/similar. This is probably because desktop APIs are terrible and not portable. First class wasm support in the web would translate to more efficient desktop applications (Slack, VSCode, Discord, etc) and perhaps less hate towards memory heavy electron applications.
You can write a WASM program today that touches the DOM, it just needs to go through the regular JS APIs. While there were some discussions early on about making custom APIs for WASM to access, that has long since been dropped - there are just too many downsides.
But then you need two things instead of one. It should be made possible to build WASM-only SPAs. The north star of browser developers should be to deprecate JS runtimes the same way they did Flash.
You can use a framework that abstracts all the WASM-to-JS communication for DOM access. There are many such frameworks already.
The only issue is that there’s a performance cost. Not sure how significant it is for typical applications, but it definitely exists.
It’d be nice to have direct DOM access, but if the performance is not a significant problem, then I can see the rationale for not putting in the major amount of work it’d take to do this.
That is never going to happen until you create your own browser with a fork of the WASM spec. People have been asking for this for about a decade. The WASM team knows this but WASM wants to focus on its mission of being a universal compile target without distraction of the completely unrelated mission of being a JavaScript replacement.
I agree with the first part, but getting rid of JS entirely means that if you want to augment some HTML with one line of javascript you have to build a WASM binary to do it?
I see good use cases for building entirely in html/JS and also building entirely in WASM.
For starters, the DOM API is huge and expansive. Simply giving WASM the DOM means you are greatly expanding what the sandbox can do. That means lower friction when writing WASM with much much higher security risks.
But further, WASM is more than just a browser thing at this point. You might be running in an environment that has no DOM to speak of (think nodejs). Having this bolted on extension simply for ease of use means you now need to decide how and when you communicate its availability.
And the benefits just aren't there. You can create a DOM exposing library for WASM if you really want to (I believe a few already exist) but you end up with a "what's the point". If you are trying to make some sort of UX framework based on wasm then you probably don't want to actually expose the DOM, you want to expose the framework functions.
I was under the impression that this is very much still on the table, with active work like the component model laying the foundation for the ABI to come.
One of the things that I think makes this tricky is that if you have any DOM references, you now have visibility into a GCable object.
Part of the web Javascript security model is that you cannot see into garbage collection. So if you have some WASM-y pointer to a DOM element, how do you handle that?
I think now that GC is properly in, people might come at this problem again, but with pre-GC WASM this sounds pretty intractable.
I am watching patiently from a distance to get my hands on a well-designed frontend language, but I can't help but wonder... is it really _that_ inefficient to call a JS wrapper to touch the DOM?
Most code already is so horribly inefficient that I can't imagine this making a noticeable difference in most scenarios.
I haven't really been following WASM development in the last year and didn't realize that WASM had moved to a versioned release model. I've been aware of the various features in development[1] and had thought many of the newer features were going to remain optional, but I guess now implementations are expected to support all the features to be able to claim compatibility with e.g. "WASM 3.0"?
It'll be interesting to see what the second non-browser-based WASM runtime to fully support 3.0 will be (I'm guessing wasmtime will be first; I'm not counting Deno since it builds on v8). Garbage collection seems like a pretty tricky feature in particular.
Does anyone know how this 3.0 release fits into the previously announced "evergreen" release model?[2]
> With the advent of 2.0, the Working Group is switching to a so-called “evergreen” model for future releases. That means that the Candidate Recommendation will be updated in place when we create new versions of the language, without ever technically moving it to the final Recommendation state. For all intents and purposes, the latest Candidate Recommendation Draft[3] is considered to be the current standard, representing the consensus of the Community Group and Working Group.
> It'll be interesting to see what the second non-browser-based WASM runtime to fully support 3.0 will be (I'm guessing wasmtime will be first; ...)
Wasmtime already supports every major feature in the Wasm 3.0 release, I believe. Of the big ones: garbage collection was implemented by my colleague Nick Fitzgerald a few years ago; tail calls by Jamey Sharp and Trevor Elliott last year (with full generality, any signature to any signature, no trampolines required!); and I built our exceptions support which merged last month and is about to go out in Wasmtime 37 in 3 days.
The "3.0" release of the Wasm spec is meant to show progress and provide a shorthand for a level of features, I think, but the individual proposals have been in progress for a long time so all the engine maintainers have known about them, given their feedback, and built their implementations for the most part already.
(Obligatory: I'm a core maintainer of Wasmtime and its compiler Cranelift)
Wizard supports all of Wasm 3.0, but as a research tool, it only has an interpreter and baseline compiler tier (no opt compiler), so it doesn't run as fast as, say, V8 or wasmtime.
I suspect the versioning is going to replicate the JavaScript versioning system, where versions are just sets of features that a runtime can support or not. I'm not sure how feature discovery works in wasm, though.
The WebAssembly community should really focus more on the developer experience of using it. I recently completed a project where I wrote a compiler¹ targeting it and found the experience to be rather frustrating.
Given that Wasm is designed with formal semantics in mind, why is the DX of using it as a target so bad? I used binaryen.js to emit Wasm in my compiler and didn't get a feeling that I am targeting a well designed instruction set. Maybe this is a criticism of Binaryen and its poor documentation because I liked writing short snippets of Wasm text very much.
Binaryen has a lot of baggage from Wasm's early days, when it was still a strict AST. Many newer features are difficult to manipulate in its model.
In our compiler (featured in TFA), we chose to define our own data structure for an abstract representation of Wasm. We then wrote two emitters: one to .wasm (the default, for speed), and one to .wat (to debug our compiler when we get it wrong). It was pretty straightforward, so I think the instruction set is quite nice. [1]
For what it's worth, I also tried Binaryen from TypeScript and similarly found it frustrating. I switched to using wasm-tools from Rust instead, and have found that to be a vastly better experience.
Isn't wasm-tools for working with Wasm modules? Maybe I'm missing something. I was using Binaryen to compile an AST to WebAssembly text. Also worth mentioning that Binaryen is the official compiler/toolchain for this purpose which is why I expected more from it.
Currently you use Binaryen to build up a representation of a Wasm module, then call emitText to generate a .wat file from that. With wasm-tools you'd do the same thing via the wasm-encoder crate to generate the bytes corresponding to a .wasm file, and then use the wasmprinter crate to convert from the .wasm format to the .wat format. Alternatively, I believe the walrus crate gives you a somewhat higher-level API to do the same thing, but I haven't used it because it's much heavier-weight.
What were your specific pain points? One thing that can be annoying is validation errors. That's one of the reasons that Wizard has a --trace-validation flag that prints a nicely-formatted depiction of the validation algorithm as it works.
Validation errors were a bit of an issue, especially because Binaryen constructs an internal IR that remains opaque until we emit the Wasm text. I did consider Wizard for my project but settled on Wasmtime because I needed WASI support.
My major pain point was the documentation. The binaryen.js API reference¹ is a list of function signatures. Maybe this makes sense to someone more experienced, but I found it hard to understand initially. There is no explanation of what the parameters mean. For example, the following is the only information the reference provides for compiling an `if` statement:
In contrast, the Wasm instruction reference on MDN² is amazing. WASI again suffers from the same documentation issues. I didn't find any official resource on how to use `fd_write` for example. Thankfully I found this blog post³.
Wasm feels more inaccessible than other projects. The everyday programmer shouldn't be expected to understand PL research topics when they're trying to do something with it. I understand that's not the intention, but this is what it feels like.
I've tried using binaryen, and I've also tried emitting raw wasm by hand, and the latter was far easier. It only took ~200 lines of wasm-specific code.
I found assembly easier to write from scratch (apples and oranges, but still). Most learning materials should exclude these tools, which are mostly Rust-based. We should be able to write Wasm by hand, just like when assembly was taught; compilers and assembly are separate classes. I think it's a bad assumption that only compiler devs care about wasm. It's a compile target, sure, but framing it that way won't broaden knowledge of it.
I'm a simple man who has simple needs. I want a better and faster way to pass Go structs in and out of the runtime that doesn't mean I have to do a sword dance on a parquet floor wearing thick knit wool socks and use some fragile grafted on solution.
If there can be a solution that works for more languages: great. I mostly want this for Go. If it means there will be some _reasonable_ limitations, that's also fine.
This is the truth, and it's not really much better in non-GCed languages either. (In reality my impression is the GCed wasm side runtimes are even worse).
Some of the least fun JavaScript I have ever written involved manually cleaning up pointers that in C++ would be caught by destructors triggering when the variable falls out of scope. It was enough that my recollections of JNI were more tolerable. (Including for go, on Android, curiously).
Then once you get through it you discover there is some serious per-call overhead, so those structs start growing and growing to avoid as many calls as possible.
I too want wasm to be decent, but to date it is just annoying.
You're doing native code, so the solution is the same as in native code: your languages agree on a representation, normally C's, or you serialize and deserialize.
Mixing language runtimes is just not a nice situation to deal with without the languages having first class support for it, and it should be obvious why.
I am not sure what you actually want but it sounds like something where the component model (the backbone of WASI) might help.
It defines a framework that allows modules to communicate with structured data types by letting each module decide how to map them to and from its linear memory (and, in future, the runtime GC heap).
In your case you would be able to define WIT interfaces for your Go types and have your compiler of choice use them to generate all the relevant glue code.
Since it hasn't been mentioned here yet: I wonder if the multiple-memories feature will somehow allow to avoid the extra copy that's currently needed when mapping a WebGPU resource. This mapping is available in a separate ArrayBuffer object which isn't accessible from WASM without calling into JS and then copying from the ArrayBuffer into the WASM heap and back.
Multiple WASM memories and Clang's/LLVM's address space feature sound like they should be able to solve that problem, but I'm not sure if it is as trivial as it sounds...
There has been a discussion (https://github.com/WebAssembly/multi-memory/issues/45) on the toolchain support, but I'm not sure if there have been steps to use multiple address spaces to support Wasm multi-memory in LLVM yet.
I'm just getting horrible segmenting and far-pointer vibes from the whole thing. I've been coding a classic Game Boy game for fun, so fiddling with memory mappings is part of the "fun", but for anything non-constrained I'd hate that.
We buried far pointers with DOS and Win16 for a good reason..
I'm still hype about WASM. This looks like a cool release. I'm running some pretty high traffic WASM plugins on envoy, running some plugins for terminal apps (zellij), and for one of my toy side projects, I'm running a wasm web app (rust leptos).
For 2 of those 3 use cases, I think it's not technically the optimal choice, but I think that future may actually come. Congratulations and nice work to everyone involved!
Does WASM still have 64 KiB pages? I get why for desktops, but there are use-cases for running WASM on microcontrollers where that's either inconvenient or outright impossible.
The one in particular I have in mind would be to put WASM on graphical calculators, in order to have a more secure alternative to the ASM programs (it's possible nowadays to write in higher-level languages, but the term stuck) that could work across manufacturers. Mid-range has RAM on the order of 256 KiB, but a 32-bit core clocked at more than 200 MHz, so there's plenty of CPU throughput but not a lot of memory to work with.
Sadly, the closest thing there is for that is MicroPython. It's good for what it does, but its performance and capabilities are nowhere near native.
https://github.com/WebAssembly/custom-page-sizes is a proposal championed by my colleague to add single byte granularity to Wasm page sizes, motivated by embedded systems and many other use cases where 64kb is excessive. It is implemented in wasmtime, and a Firefox implementation is in progress.
It's not about the whole microcontroller having less than 64kB of memory - it's that each WASM module has a minimum memory size of 64kB, regardless of how much it actually requires. Also, if you need 65kB of memory, you now have to reserve 2 pages, meaning your app now needs 128kB of memory!
We're working on WASM for embedded over at atym.io if you're interested.
For a runtime that accepts Wasm modules using a large fraction of the functionality, there is going to be a RAM requirement in the few KiB to few tens of KiB range. There seems to be a branch or fork of Wasm3 for Arduino (https://github.com/wasm3/wasm3-arduino).
If you are willing to do, e.g. Wasm -> AVR AOT compilation, then the runtime can be quite small. That basically implies that compilation does not happen on device, but at deployment time.
Exam mode, or test mode. It's something that appeared about ten years ago, to ensure that a graphical calculator isn't loaded with cheats or has certain features enabled. The technical reason is that the RESET button no longer clears all of the calculator's memory (think Flash, not RAM) and proctors like to see a flashing LED that tells them everything's fine.
It's a flawed idea and has led to an arms race, where manufacturers lock down their models and jailbreaks break them open. Even NumWorks, who originally had a calculator that was completely unprotected and used to publish all of their source code on GitHub, had to give in and introduce a proprietary kernel and code signing, in order to stop custom firmwares and applications from accessing the LED and stop countries from outlawing their calculators.
Indeed. I got bit by the programming bug writing utility programs in TI-BASIC on my TI-83. I would've had a very different life trajectory had I not been able to do that.
Unless I'm mistaken, it's been on life support for the past 15 years. It's probably more heavyweight and firmware size/Flash usage is a concern. I don't think performance would be on par with WASM and there are use-cases where that really matters (ray tracing rendering for example). I'm also not sure there are many maintained, open-source implementations for it out there. I've also heard stories that it was quite a mess in practice because it was plagued by bugs and quirks specific to phone models, despite the fact that it was supposed to be a standard.
I'd gladly be proven wrong, but I don't think Java ME has a bright future. Unless you were thinking of something else?
> Wasm GC is low-level as well: a compiler targeting Wasm can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm.
There are already a lot of misunderstandings about wasm, and I fear that people will just go "It supports GC, so we can just export Python/Java/C#/Go etc."
This is not a silver bullet. C++ or Rust are probably still going to be the way to go.
Relying on the GC features of WASM will require writing code centered around the abstractions for the compiler that generates WASM.
As I understand it, WASM GC provides a number of low level primitives that are managed by the WASM host runtime, which would theoretically allow languages like Go or Python to slim down how much of their own language runtime needs to be packaged into the WASM module.
But those languages still need to carry around some runtime of their own, and I don't think it's obvious how much a given language will benefit.
>But those languages still need to carry around some runtime of their own
Also, there will just be a special version of those language runtimes, which probably won't be supported in 10 years' time. Just like a lot of languages no longer have up-to-date versions that can run on the Common Language Runtime.
Programming languages with type erasure would have no runtime, just raw program code and the WASM GC. Languages that have runtime types still need a runtime for that functionality.
The Kotlin wasm compiler was basically engineered on top of wasm's GC support. Works fairly OK. As far as I understand it's essentially the same garbage collector that is also used for regular javascript.
> This is not a silver bullet. Cpp, or rust are probably still going to be the way to go.
I don't think that's necessarily true anymore. But as you say, it depends on the compiler you use and how well it utilizes what is there. Jetbrains has big plans with Kotlin and Wasm with e.g. compose multiplatform already supporting it (in addition to IOS native and Android).
Wasm GC is a set of abstractions for compiler writers, enabling GC-dependent languages to run without shipping their own GC inside the already-GC'd browser/Wasm heap, and instead just use the browser's GC directly.
So yes, Java,C#,etc will work better (If you look at the horrible mess the current C# WASM export generates it basically ships with an inner platform containing a GC), and no, it will explicitly not speak with "javascript" objects (you can keep references to JS objects, but you cannot call JS methods directly).
This isn’t true at all in Dart for example which is a WASM-GC language. Literally one of the very main selling points of Dart is you write your code once and it runs anywhere, WASM is just another compile target like x64 or RISC-V or iOS.
Direct DOM access doesn't make any sense as a WASM feature.
It would be at best a web-browser feature which browser vendors need to implement outside of WASM (by defining a standardized C-API which maps to the DOM JS API and exposing that C API directly to WASM via the function import table - but that idea is exactly as horrible in practice as it sounds in theory).
If you need to manipulate the DOM - just do that in JS, calling from WASM into JS is cheap, and JS is surprisingly fast too. Just make sure that the JS code has enough 'meat', e.g. don't call across the WASM/JS boundary for every single DOM method call or property change. While the call itself is fast, the string conversion from the source language specific string representation on the WASM heap into JS strings and back is not free (getting rid of this string marshalling would be the only theoretical advantage of a 'native' WASM DOM API).
WASM is an abbreviation for WebAssembly. If it doesn't have DOM access, WebAssembly is as related to the Web as JavaScript is to Java. A language ecosystem with no I/O capability is as much use as a one-legged man at an arse-kicking party.
Well, arguably the worst thing about WASM is the naming.
It's neither directly related to the web, nor is it an assembly syntax.
It's just another virtual ISA. "Direct DOM access for WASM" makes about as much sense as "direct C++ stdlib access for the x86 instruction set" - none ;)
Oh wow, that really is terrible naming... I always thought WASM was a specification for compiling code into something that runs natively in web browsers—like a web-specific compilation target.. Today I learned.
If you want to compare the situation to x86, direct DOM access for WebAssembly is more akin to the BIOS than C++ stdlib access. If it can't interact with the outside world, it's just a very special toy that you can only use to play a game that isn't any fun, and a good candidate for those 'What's the next COBOL?' discussions that come up every now and then.
...in WASM you also call a function to do IO though? That function is just provided by the host environment via the function import table, but conceptually it's the exact same thing as a Linux syscall, a BIOS int-call or calling into a Windows system DLL.
Isn’t the whole reason people want DOM access so that the JavaScript side doesn’t have any meat to it, and they can write their entire web app in Rust/Go/Swift/etc. compiled to webasm without performance concerns?
The bottleneck is in the DOM operations themselves, not javascript. This is the reason virtual-dom approaches exist: it is faster to operate on an intermediate representation in JS than the DOM itself, where even reading an attribute might be costly.
This isn't true. DOM access is fast. DOM manipulation is also fast. The issue is many DOM manipulations happening all at once constantly that trigger redraws. Redrawing the DOM can also be fast if the particular DOM being redrawn is relatively small.
React was created because Facebook's DOM was enormous. And they wanted to constantly redraw the screen on every single interaction. So manipulating multiple elements simultaneously caused their rendering to be slow. So they basically found a way to package them all into a single redraw, making it seem faster.
WASM isn't going to magically make the DOM go faster. DOM will still be just as slow as it is with Javascript driving it.
WASM is great for heavy-lifting, like implementing FFMPEG in the browser. DOM is still going to be something people (questionably) complain about even if WASM had direct access to it. And WASM isn't only used in the browser, it's also running back-end workloads too where there is no DOM, so a lot of use cases for WASM are already not using DOM at all.
It's not a WASM feature, but would be a web browser feature outside the WASM standard.
E.g. the "DOM peeps" would need to make it happen, not the "WASM peeps".
But that would be a massive undertaking for minimal benefit. There's much lower-hanging fruit in the web-API world to fix (like, for instance, finally building a proper audio streaming API, because WebAudio is a frigging clusterf*ck). And if any web API would benefit from even a minimal reduction of JS <=> WASM marshalling overhead, it would be WebGL2 and WebGPU, not the DOM. But even for WebGL2 and WebGPU, the cost inside the browser implementation of those APIs is much higher than the WASM <=> JS marshalling overhead.
> If you need to manipulate the DOM - just do that in JS, calling from WASM into JS is cheap, and JS is surprisingly fast too.
From the point of view of someone who doesn't do web development at all, and to whom JS seems entirely cryptic: This argument is weird. Why is this specific (seemingly extremely useful!) "web thing" guarded by a specific language? Why would something with the generality and wide scope of WASM relegate that specific useful thing to a particular language? A language that, in the context of what WASM wants to do in making the web "just another platform", is pretty niche (for any non-web-person)?
For me, as a non-web-person, the big allure of WASM is the browser as "just another platform". The one web-specific thing that seems sensible to keep is the DOM. But if manipulating that requires learning web-specific languages, then so be it, I'll just grab a canvas and paint everything myself. I think we give up something if we start going that route.
Think of it as traditional FFI (foreign function interface) situation.
Many important libraries have been written in C and only come with a C API. To use those libraries in non-C languages (such as Java) you need a mechanism to call from Java into C APIs, and most non-C language have that feature (e.g. for Java this was called JNI but has now been replaced by this: https://docs.oracle.com/en/java/javase/21/core/foreign-funct...), e.g. C APIs are a sort of lingua franca of the computing world.
The DOM is the same thing as those C libraries, an important library that's only available with an API for a single language, but this language is Javascript instead of C.
To use such a JS library API from a non-JS language you need an FFI mechanism quite similar to the C FFI that's been implemented in most native programming languages. Being able to call efficiently back and forth between WASM and JS is this FFI feature, but you need some minimal JS glue code for marshalling complex arguments between the WASM and JS side (but you also need to do that in native scenarios, for instance you can't directly pass a Java string into a C API).
Sure, but if someone came at the C-centric ecosystem today and said "let's do the work to make it so that any language can play in this world", then surely "just FFI through C" would be considered rather underwhelming?
In my opinion it's an overengineered boondoggle, since "C APIs ought to be good enough for anything", but maybe something useful will eventually come out of it. So far it looks like it mostly replaces the idea of C APIs as lingua franca with "a random collection of Rust stdlib types" as lingua franca, which at least to me sounds utterly uninteresting.
The practical argument is that, while the DOM API was initially developed to be language agnostic (with more of an eye to Java/C++ than JavaScript), this hasn't been the case for a while now, and many web APIs use JavaScript data types and interfaces (e.g. async iterators) that do not map well to wasm.
The good news is that you can use very minimal glue code with just a few functions to do most JavaScript operations
You don't need to write Javascript to access the DOM. Such bindings still call JS under the hood of course to access the DOM API, but that's an implementation detail which isn't really important for the library user.
While technically possible, the calls to JavaScript slow things down, and you're never going to get the performance of just writing JavaScript in the first place, much less the performance of skipping JavaScript altogether.
The calls to JS are quite cheap; trusting the diagrams in here, it's about 10 clock cycles per call on a 2 GHz CPU (i.e. roughly 200 million calls per second):
The only thing that might be expensive is translating string data from the language-specific string representation on the WASM heap into the JS string objects expected by the DOM API. But this same problem would need to be solved in a language-portable way for any native WASM-DOM-API, because WASM has no concept of a 'string' and languages have different opinions about what a string looks like in memory.
But even then, the DOM is an inherently slow API starting with the string-heavy API design, the bit of overhead in the JS shim won't suddenly turn the DOM into a lightweight and fast rendering system.
E.g. it's a bit absurd to talk about performance and the DOM in the same sentence IMHO ;)
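For a rough feel of the raw call overhead, here's a self-contained snippet you can run yourself — the module bytes are a hand-assembled minimal `add` function, so no toolchain is needed (exact timings will of course vary by engine and hardware):

```javascript
// A minimal hand-assembled wasm module exporting add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,            // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,      // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                                    // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,      // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // code body
]);
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const add = instance.exports.add;

// Crude timing loop: each iteration crosses the JS<->wasm boundary once.
const t0 = performance.now();
let acc = 0;
for (let i = 0; i < 1_000_000; i++) acc = add(acc, 1);
const ms = performance.now() - t0;
console.log(acc, `1e6 boundary crossings in ${ms.toFixed(1)} ms`);
```

Even this naive loop shows the boundary itself is nowhere near the bottleneck compared to actual DOM work.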
Dart also has this and as you can see in the examples in the README the APIs look exactly the same as what you are used to in JavaScript but now are fully typed and your code compiles to WASM.
_Telling the browser how you want the DOM manipulated_ isn't the expensive part. You can do this just fine with Javascript. The browser _actually redrawing after applying the DOM changes_ is the expensive part and won't be any cheaper if the signal originated from WASM.
Wasm doesn't specify any I/O facilities at all. DOM is no different. There's a strict host/guest boundary and anything interacting with the outside world enters Wasm through an import granted by the host. On the web, the host is the JS runtime.
Don't sleep on the Rust toolchain for this! You can have DOM-via-Wasm today, the tools generate all the glue for you and the overhead isn't that bad, either.
Got a rec? The reply to you is talking about a component framework, rather than actual vanilla html/css access. I haven't seen anything, personally, that allows real-time, direct DOM interaction.
Wasm 3.0, with its GC and exception support, contains everything you need. The rest is up to the source language to deal with. For example, in Scala.js [1], which is mentioned in the article, you can use the full extent of JavaScript interop to call DOM methods right from inside your Scala code. The compiler does the rest, transparently bridging what needs to be.
I wish the same, mate. Please, wasm team, I am more than happy waiting 3 years if you can guarantee that you are looking into the best possible way of integrating this feature of DOM manipulation.
I sometimes feel like JS is too magic-y; I want plain boring Go and to write some DOM functions, preferably without using htmx.
Please give us more freedom! This might be the most requested feature, and it was how I came across wasm in the first place (a leptos video from some youtuber I think, sorry if I forgot).
I was even trying to be charitable and read the feature list for elements that would thin down a third party DOM access layer, but other than the string changes I’m just not seeing it. That’s not enough forward progress.
WASM is just an extremely expensive toy for browsers until it supports DOM access.
It's a chicken-and-egg situation. The people already using WASM either don't care about the DOM or realized long ago that going through a JS shim works just as well; the rest complain time and again that WASM has no DOM access whenever there's an HN thread about WASM, but usually don't use WASM for anything.
If there were major momentum behind people writing their web applications in wasm, there would be a reason to eventually take on the massive undertaking of creating the ABI for that. Then all those applications could just recompile to use this new, hypothetically faster API. The bigger issue is that it just doesn't make sense to write frontend code in Rust or Go or whatever in the first place.
The whole js ecosystem evolved to become a damn good environment to write UIs with, people don't know the massive complexity this environment evolved to solve over decades.
My old team shipped a web port of our 3D modeling software back in 2017. The entire engine is the same as the desktop app, written in C++, and compiled to wasm.
Wasm is not now and will never be a magic "press here to replace JS with a new language" button. But it works really well for bringing systems software into a web environment.
It's explicitly excluded from the Wasm-GC spec, too; there's too damn much security-issue surface, which keeps all of the browser makers solidly in the "do not want to touch" camp.
Is there a technical reason for the web limit to be 16 GB specifically? Or is it just a round number picked so that the limit could be standardized? Also, has the limit on JS heap size (and ArrayBuffer size) also been relaxed to 16 GB or is it still lower?
There's comments in there about waiting for a polyfill, but GC support is widespread enough that they should probably just drop support for non-GC runtimes in a major version.
I'm not familiar with all the implementation details of objects in C#, but the list of issues mixes runtime implementation details (object layouts) that should be fairly low effort to work around with actual language/runtime features (references, finalization).
In general though, most regular C# code written today _doesn't directly_ use many of the features mentioned, apart from references. Libraries and bindings, however, do so a lot, since f.ex. P/Invoke isn't half as braindead as JNI was; but targeting the web should really not bring all these libraries along anyhow.
So an MSIL runtime that handles most common C# code would map pretty much 1:1 onto Wasm-GC; some features like refs might need extra shims to emulate the behaviour (or compiler specializations to avoid too-severe performance penalties from extra object creation).
Regardless of what penalties etc. go in, the generated code should end up far smaller and far less costly than the situation today, since they won't have to ship their own GC and implement everything around it.
Part of the problem is you would need to fork the base class libraries and many popular nuget packages to remove any uses of ref/in/out, along with any interior references, spans, etc. The .NET type system has allowed 'interior references' (references into the inside of a GC object) for a long time and it's difficult to emulate those on top of WasmGC, especially if your goal is to do it at low cost.
It's definitely true that you could compile some subset of C# applications to WasmGC but the mismatch with the language as it's existed for a long time is painful.
No, the component model proposal is not part of the Wasm 3.0 release. Proposals only make it into a Wasm point release once they reach stage 5, and the component model is still under development and so is not trying to move through the phases yet.
Unlike any of the proposals which became part of Wasm 3.0, the component model does not make any changes to the core Wasm module encoding or its semantics. Instead, it’s designed as a new encoding container which contains core Wasm modules, and adds extra information alongside each module describing its interface types and how to instantiate and link those modules. By keeping all of these additions outside of core Wasm, we can build implementations out of any plain old Wasm engine, plus extra code that instantiates and links those modules, and converts between the core wasm ABI and higher-level interface types. The Jco project https://github.com/bytecodealliance/jco does exactly that using the common JS interface used by every web engine’s Wasm implementation. So, we can ship the component model on the web without web engines putting in any work of their own, which isn’t possible with proposals which add or change core wasm.
The tail call instructions (return_call and friends) were crucial for compiling Scheme. Safari had a bug in their validator for these instructions but the fix shipped in their most recent release so now you can use Wasm tail calls to their fullest in all major browsers.
Has anyone benchmarked 64bit memory on the current implementations? There's the potential for performance regressions there because they could exploit the larger address space of 64bit hosts to completely elide bounds checks when running 32bit WASM code, but that doesn't work if the WASM address space is also 64bit.
> WebAssembly apps tend to run slower in 64-bit mode than they do in 32-bit mode. This performance penalty depends on the workload, but it can range from just 10% to over 100%—a 2x slowdown just from changing your pointer size.
> This is not simply due to a lack of optimization. Instead, the performance of Memory64 is restricted by hardware, operating systems, and the design of WebAssembly itself.
Oof, that's unfortunate. I'm sure there's good reasons why WASM works like it does but the requirement for OOB to immediately abort the program seems rough for performance, as opposed to letting implementations handle it silently without branching (e.g. by masking the high bits of pointers so OOB wraps around).
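The masking idea from that parenthetical, sketched in JS over a plain array — an illustration of the concept only, not how engines actually implement wasm loads:

```javascript
// "Mask instead of trap": clamp every address into a power-of-two memory
// size, so out-of-bounds accesses silently wrap around instead of aborting.
const MEM_SIZE = 2 ** 16;          // 64 KiB, must be a power of two
const mask = MEM_SIZE - 1;
const mem = new Uint8Array(MEM_SIZE);

function load(addr) {
  return mem[addr & mask];         // one AND, no branch, never traps
}

mem[5] = 42;
console.log(load(5));              // 42
console.log(load(MEM_SIZE + 5));   // 42 as well: the OOB access wrapped
```

The trade-off is exactly the one the spec rejected: a buggy program keeps running with silently corrupted reads instead of failing fast at the faulting access.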
Is this WASM specific though? Some apps suffer in performance when they move to 64-bit in general, because the larger pointers cost memory bandwidth and cache space without the app taking sufficient advantage of (or needing) 64-bit data types (one of the reasons many people like a 32-bit address space with a 64-bit data model).
The blog post explains that it's more than that. Bounds checking, in particular, costs more for reasons having to do with browser implementations, for example, rather than for architectural reasons.
One bright point here is that the WASM changes may force v8 to improve its IPC by having a feature that Bun gets from JSC, which is passing strings across isolate boundaries.
IPC overhead is so bad in NodeJS that most people don’t talk about it because the workarounds are just impossibly high maintenance. We reach straight for RPC instead, and downplay the stupidity of the entire situation. Kind of reminiscent of the Ruby community, which is perhaps not surprising given the pedigree of so many important node modules (written by ex Rails devs).
Wasm 3.0 looks like a significant step forward. The addition of 64-bit address space and improved reference typing really expands the platform’s capabilities. Integration with WASI makes access to system resources more predictable, but asynchronous operations and JS interop remain key pain points. Overall, this release makes Wasm more viable not just in the browser, but also for server-side and embedded use cases.
Wasm only gets additive changes - the binary format can't change in a way that breaks any previously existing programs, because that would break the Web. So, you just have to add more opcodes to your implementation.
It introduces new types (structs and arrays), a new section for tags, and several dozen instructions (first-class functions, GC, tail calls, and exception handling). It generalizes to multiple memories and tables, as well as adding 64-bit memories. The binary format changes aren't too bad, but it's a fairly big semantic addition.
The whole magic about CL's condition system is to keep on executing code in the context of a given condition instead of immediately unwinding the stack, and this can be done if you control code generation.
Everything else necessary, including dynamic variables, can be implemented on top of a sane enough language with dynamic memory management - see https://github.com/phoe/cafe-latte for a whole condition system implemented in Java. You could probably reimplement a lot of this in WASM, which now has a unwind-to-this-location primitive.
Not including GC would have been a mistake. Having to carry a complete garbage collector with every program, especially on platforms like browsers where excellent ones already exist, would have been a waste.
It's also important because sometimes you want a WebAssembly instance to hold a reference to a GC object from Javascript, such as a DOM object, or be able to return a similar GC object back to Javascript or to another separate WebAssembly instance. Doing the first part alone is easy to do with a little bit of JS code (make the JS code hold a reference to the GC object, give the Wasm an id that corresponds to it, and let the Wasm import some custom JS functions that can operate on that id), but it's not composable in a way that lets the rest of those tasks work in a general way.
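The "little bit of JS code" pattern described above might look like this — all names here are invented for illustration, and a plain object stands in for a DOM node since there's no DOM outside the browser:

```javascript
// Id-table pattern: JS keeps the real references, wasm only sees integers.
const handles = new Map();
let nextId = 1;

// Imports the wasm module would receive (it calls these with integer ids).
const env = {
  acquire(obj) { const id = nextId++; handles.set(id, obj); return id; },
  setText(id, text) { handles.get(id).textContent = text; },
  release(id) { handles.delete(id); },
};

// Simulated use: a stand-in for a DOM element.
const fakeElement = { textContent: "" };
const id = env.acquire(fakeElement);
env.setText(id, "hello");  // what wasm would do through an import
env.release(id);           // wasm must remember to free the handle
```

Note the manual `release`: the JS GC can never collect anything the table still holds, which is precisely the composability problem that first-class GC references solve.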
Yes, every wasm program that uses linear memory (which includes all those created by llvm toolchains) must ship with its own allocator. You only get to use the wasm GC provided allocator if your program is using the gc types, which can’t be stored in a linear memory.
Yes, but Emscripten comes with a minimal allocator that's good enough for most C code (e.g. code with low alloc/free frequency) and only adds minimal size overhead:
how is that different from compiling against a traditional CPU which also doesn't have a built in GC? i mean those programs that need a GC already have one. so what is the benefit of including one on the "CPU"?
The fact that a minimum size go program is a few megabytes in size is acceptable in most places in 2025. If it was shipped over the wire for every run time instead of a single install time download, that would be a different story.
Garbage collection is a small part of the go run time, but it's not insignificant.
Skimming this issue, it seems like they weren't expecting to be able to use this GC. I know C# couldn't either, at least based on an earlier state of the proposal.
this thread confirms my suspicions. some languages may benefit from a built in GC, but those languages probably use a generic GC to begin with. wheras any language that has a highly optimized GC for their own needs won't be able to use this one.
The "CPU" in every browser already has one. This lets garbage-collected languages use that one. That's an enormous savings in code size and development effort.
i don't see the reduced development effort, after all, unless the language is only running on webassembly i still need to implement my own GC for other CPUs.
so most GC-languages being ported to webassembly already have a GC, so what is the benefit of using a provided GC then?
on the other hand i see GC as a feature that could become part of any modern CPU. then the benefit would be large, as any language could use it and wouldn't have to implement their own at all anymore.
Aside from code size, the primary benefit on the Web is that the GC provided to wasm is the same one as the outer JavaScript engine's, so an object from wasm can stay alive or get collected based on whether JS keeps references to it. So it's not really about providing a GC for a single wasm module (program), it's about participating in one cooperatively with other programs.
Writing a GC that performs well often involves making decisions that are tightly coupled to the processor architecture and operating system as well as the language implementation's memory representations for objects. Using a GC that is already present can solve that problem.
> i don't see the reduced development effort, after all, unless the language is only running on webassembly i still need to implement my own GC for other CPUs.
I'd think porting an existing GC to WASM is more effort than using WASM's GC for a GC'd language?
i don't think so. first of all, you don't rewrite your code for every CPU, you just adapt some specific things; most code is just compiled for the new architecture and runs. second, those languages that are already running on wasm have already done the work. so at best, new languages that haven't been ported yet will get any benefit from a reduced porting effort.
I think it's a "you don't pay for it if you don't use it" thing, so I guess it's fine. It won't affect me compiling my C or Zig code to WASM for instance since those languages have neither garbage collection nor exceptions.
It's kinda nice to have 1st class exception support. C++ exceptions barely work in Emscripten right now. Part of the problem is that you can't branch to arbitrary labels in WASM.
WASM isn't a language, so them adding stuff like this serves to increase performance and standardize rather than forcing compilers to emulate common functionality.
Besides making it much nicer for GC'd languages to target WASM, an important aspect is that it also allows cross-language GC.
Whereas with a manual GC, if you had a JS object holding a reference to an object on your custom heap, and your heap holds a reference to that JS object (with indirections sprinkled in to taste) but nothing else references it, that'd result in a permanent memory leak, as both heaps would have to consider everything held by the other as GC roots; so you'd still be forced to manually avoid cycles despite only ever using GC'd languages. Wasm GC entirely avoids this problem.
That isn't updated for Safari 26, but by that table Safari 18 is only missing 3 standardized features that Chrome supports, with a fourth that is disabled by default. So what's the point of your comment? Just to make noise and express your ignorance?
Historically speaking, apple has consistently limited web app functionality on iOS since 2008. I think we would be much further ahead if it wasn't for Apple’s policies under his leadership.
Apple took over the distribution to prioritize a cut to the app store which crippled/slowed the open web PWA and WASM adoption.
Sure, and that's why asm.js (regular JS with special semantics) and later Wasm (bytecode translatable to JS) were so brilliant. It already worked on Safari, so they had the option to either:
A: look slow compared to other engines that supported it
B: implement it
Now, things like the exception handling and tail calls probably aren't shimmable via JS, but at this point they don't gain much from being obstructionists.
What ignorance? Safari doesn't support the most important additions:
- memory64
- multiple memories
- JSPI (!!)
I recently explored the possibility of optimizing qemu-wasm in browser[0].. and it turns out that the most important features were those Safari doesn't implement.
As a _user_ JSPI looks neat; however, as a compiler writer, JSPI looks like a horrible hairball of security issues and performance gotchas that degrade generated WASM code performance.
Say you have a WASM module: straight-line code that builds a stack and runs quickly because, apart from overflow checks, it can just truck on.
Now add this JS-Promise thing into the mix:
A: How does a JS module now handle the call into the Wasm module? A classic WASM call was synchronous; should we now change the signature of all wasm functions to async?
B: Do the WASM-internal calls magically become Promises and awaits (which would take a lot of performance out of WASM modules)? If not, we now have a two-color function world that needs reconciliation.
C: If we do some magic, where the full frame is paused and stored away, what happens if another JS function then calls into the WASM module and awaits and then the first one resumes? Any stack inside the wasm-memory has now potential race conditions (and potentially security implications). Sure we could make locks on all Wasm entries but that could cause other unintended side-effects.
D: Even if all of the above are solved, there's still the low-level issues of lowlevel stack management for wasm compiled code.
Looking at the mess that is emscripten's current solution to this, I really hope that this proposal gets very well thought out and not just railroaded in because V8's compiler manages to support it.
1: It has the potential to affect performance for all Wasm code just because people writing Qemu,etc are too lazy to properly abstract resource loading to cooperate with the Wasm model.
2: It can become a burden on the currently thriving Wasm ecosystem with multiple implementations (honestly, stuff like Wasm-GC is less disruptive even if it includes a GC).
JSPI-based coroutines are much faster than the old Asyncify ones (my demo shows that).
As for your core message - I'm just the user, but if Google engineers were able to implement that, then it is possible to implement that securely. I remember Google engineers arguing with Apple engineers in GH issues, but I'm not on that level, I just see that JSPI is already implemented in Chrome, so you can't tell me it's not possible.
Multiple memories and Memory64 just became part of the spec. And JSPI is still being standardized. Is Safari slower to roll out new things? Yes. But it's hardly stopping adoption. Chrome has 70% of the browser market, Safari barely has 15%.
But Apple doesn't allow using other browser engines on iOS, so this matters much more. I mean, they were forced to allow them, but of course they didn't actually comply, they created artificial barriers to ensure only Safari can be used on iOS.
For the mobile browser market, Chrome is still around 70%, Safari is a bit better off in mobile at 20% of global browser market share. That's still a minority platform. It's not inhibiting wasm feature adoption with those numbers.
I'm definitely excited to see 64 bit as a default part of the spec. A lot of web apps have been heavily restricted by this, in particular any online video editors. We see a bunch of restrictions due to the 32 bit cap today here at Figma. One thing I'm curious though is whether mobile devices will keep their addressable per-tab memory cap the same. It's often OS defined rather than tied to the 32 bit space.
Unfortunately, Memory64 comes with a significant performance penalty because the wasm runtime has to check bounds (which wasn't necessary on 32-bit as the runtime would simply allocate the full 4GB of address space every time).
But if you really need more than 4GB of memory, then sure, go ahead and use it.
I still don't understand why it's slower to mask to 33 or 34 bit rather than 32. It's all running on 64-bit in the end isn't it? What's so special about 32?
Actually, runtimes often allocate 8GB of address space because WASM has a [base32 + index32] address mode where the effective address could overflow into the 33rd bit.
On x86-64, the start of the linear memory is typically put into one of the two remaining segment registers: GS or FS. Then the code can simply use an address mode such as "GS:[RAX + RCX]" without any additional instructions for addition or bounds-checking.
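The 33rd-bit overflow from the [base32 + index32] mode is easy to verify with plain arithmetic (a sanity check of the numbers above, not engine code):

```javascript
// Worst case for a [base32 + index32] effective address: both operands
// can be up to 2^32 - 1, so the sum can reach 2^33 - 2.
const maxU32 = 2 ** 32 - 1;
const maxEffective = maxU32 + maxU32;

console.log(maxEffective === 2 ** 33 - 2); // true: needs 33 bits
console.log(maxEffective < 8 * 2 ** 30);   // true: an 8 GB reservation covers it
```

So reserving 8 GB of (mostly unmapped) address space lets every possible base+index land inside the reservation, and the CPU's page fault handler does the bounds checking for free.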
The irony for me is that it's already slow because of the lack of native 64-bit math. I don't care about the memory space available nearly as much.
Eh? I'm pretty sure it's had 64-bit math for a while: i64.add, etc.
They might have meant the lack of true 64-bit pointers? IIRC the Chrome wasm runtime used tagged pointers. That comes with an access cost of having to mask off the top bits. I always assumed that was the reason for the 32-bit specification in v1.
The comedy option would be to use the new multi-memory feature to juggle a bunch of 32bit memories instead of a 64bit one, at the cost of your sanity.
didn't we call it 'segmented memory' back in DOS days...?
We call it "pointer compression" now. :)
Seriously though, I’ve been wondering for a while whether I could build a GCC for x86-64 that would have 32-bit (low 4G) pointers (and no REX prefixes) by default and full 64-bit ones with __far or something. (In this episode of Everything Old Is New Again: the Very Large Memory API[1] from Windows NT for Alpha.)
[1] https://devblogs.microsoft.com/oldnewthing/20070801-00/?p=25...
A moderate fraction of the work is already done using:
https://gcc.gnu.org/onlinedocs/gcc/Named-Address-Spaces.html
Unfortunately the obvious `__attribute__((mode(...)))` errors out if anything but the standard pointer-size mode (usually SI or DI) is passed.
Or you may be able to do it based on x32, since your far pointers are likely rare enough that you can do them manually. Especially in C++. I'm pretty sure you can just call "foreign" syscalls if you do it carefully.
It was glorious I tell you.
Especially how you could increase the segment value by one or the offset by 16 and you would address the same memory location. Think of the possibilities!
And if you wanted more than 1MB you could just switch memory banks[1] to get access to a different part of memory. Later there was a newfangled alternative[2] where you called some interrupt to swap things around but it wasn't as cool. Though it did allow access to more memory so there was that.
Then virtual mode came along and it's all been downhill from there.
[1]: https://en.wikipedia.org/wiki/Expanded_memory
[2]: https://hackaday.com/2025/05/15/remembering-more-memory-xms-...
wait.... UNREAL MODE!
It looks like memories have to be declared up front, and the memory.copy instruction takes the memories to copy between as immediate indices. So I guess you can't use it to allocate dynamic buffers. But maybe you could designate memory 0 = heap and memory 1 = pixel data, or something like that?
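From the JS host side, the "one memory per purpose" idea can at least be mocked up today — a sketch only; a real multi-memory module would import both memories and use `memory.copy` with memory-index immediates:

```javascript
// "memory 0 = heap, memory 1 = pixel data", seen from the JS host.
const heap = new WebAssembly.Memory({ initial: 1 });    // 64 KiB "memory 0"
const pixelMem = new WebAssembly.Memory({ initial: 16 }); // 1 MiB "memory 1"

const src = new Uint8Array(heap.buffer);
src.set([10, 20, 30], 0); // pretend these bytes were produced in the heap

// What a wasm memory.copy between memory 0 and memory 1 does, in JS terms:
new Uint8Array(pixelMem.buffer).set(src.subarray(0, 3), 4096);
console.log(new Uint8Array(pixelMem.buffer, 4096, 3)); // 10, 20, 30
```

Both memories would be passed in the module's import object, so the split is fixed at instantiation time, which matches the "declared up front" limitation.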
Honestly you could allocate a new memory for every page :-)
Webapps limited by 4GiB memory?
Sounds about right. Guess 512 GiB memory is the minimum to read email nowadays.
I know you're in this for the satire, but it's less about the webapps needing the memory and more about the content - that's why I mentioned video editing webapps.
For video editing, 4GiB of completely uncompressed 1080p video in memory is only 86 frames, or about 3-4 seconds of video. You can certainly optimize this, and it's rare to handle fully uncompressed video, but there are situations where you do need to buffer this into memory. It's why most modern video editing machines are sold with 64-128GB of memory.
In the case of Figma, we have files with over a million layers. If each layer takes 4kb of memory, we're suddenly at the limit even if the webapp is infinitely optimal.
> 4GiB of completely uncompressed 1080p video in memory is only 86 frames
How is that data stored?
Because (2^32)÷(1920×1080×4) = 518 which is still low but not 86 so I'm curious what I'm missing?
Apparently with 24 bytes per pixel instead of bits :) Although to be fair, there's HDR+ and DV, so probably 4(RGBA/YUVA) floats per pixel, which is pretty close..
I would guess 3 colour channels at 16bit (i.e. 2 bytes)
(2^32)÷(1920×1080×4×3×2) = 86
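Both figures in this subthread check out if you assume 4 bytes/pixel and 24 bytes/pixel respectively — a quick sanity check of the arithmetic, not a claim about any particular video pipeline:

```javascript
// How many 1080p frames fit in a 4 GiB address space?
const limit = 2 ** 32;              // 4 GiB
const pixelsPerFrame = 1920 * 1080; // one 1080p frame

// 4 bytes/pixel (8-bit RGBA): the "518 frames" figure.
console.log(Math.round(limit / (pixelsPerFrame * 4)));  // 518

// 24 bytes/pixel (e.g. 4 channels of 32-bit float + overhead, or however
// the original 86-frame figure was derived): the "86 frames" figure.
console.log(Math.round(limit / (pixelsPerFrame * 24))); // 86
```

So the disagreement upthread comes down entirely to the assumed bytes-per-pixel, not the address-space math.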
In fairness, this is talking about Figma, not an email client
It doesn't actually allocate 4 GiB. Memory can be mapped without being physically occupied.
No, web apps can actually use 4GB of memory (now 16GB apparently).
Finally a web browser capable of loading slack
I assume that, looking at the present, we need to think about running local LLMs in the browser. Just a few days ago I submitted an article about that [1].
[1] https://news.ycombinator.com/item?id=45200414
> Garbage collection. In addition to expanding the capabilities of raw linear memories, Wasm also adds support for a new (and separate) form of storage that is automatically managed by the Wasm runtime via a garbage collector. Staying true to the spirit of Wasm as a low-level language, Wasm GC is low-level as well: a compiler targeting Wasm can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm. But that’s it.
Wow!
It's very refreshing and good to see WASM is embracing GC in addition to non-GC support. This approach is similar to D language where both non-GC and GC are supported with fast compilation and execution.
By the way now you can generate WASM via Dlang compiler LDC [1].
[1] Generating WebAssembly with LDC:
https://wiki.dlang.org/Generating_WebAssembly_with_LDC
Does this allow for shrinking the WebAssembly.Memory object?
- https://github.com/WebAssembly/design/issues/1397
- https://github.com/WebAssembly/memory-control/issues/6
This is a crucial issue, as the released memory is still allocated by the browser.
When is WASM finally going to be able to touch the DOM? It feels like that was the whole point of WASM and instead its become a monster of its own that barely has anything to do with web anymore. When can we finally kill JavaScript?
Agreed. This and (sane) access to multi-threading. I want to be able to write a Rust application, compile to wasm and load it with `<script type="application/wasm" src="./main.wasm"></script>`.
Would be great for high performance web applications and for contexts like browser extensions, where the memory usage and performance drain is real when multiplied over n open tabs. I'm not sure how code splitting would work in the wasm world, however. V8 could be optimized to reduce its memory footprint if it detects that no JavaScript is running, or wasm-only applications could use an engine like wasmer and bypass V8 entirely.
Another factor is that web technologies are used to write desktop applications via Electron/similar. This is probably because desktop APIs are terrible and not portable. First class wasm support in the web would translate to more efficient desktop applications (Slack, VSCode, Discord, etc) and perhaps less hate towards memory heavy electron applications.
> <script type="application/wasm" src="./main.wasm"></script>
<applet code="./Main.class"></applet>
Plus ça change...
You can write a WASM program today that touches the DOM, it just needs to go through the regular JS APIs. While there were some discussions early on about making custom APIs for WASM to access, that has long since been dropped - there are just too many downsides.
But then you need two things instead of one. It should be made possible to build WASM-only SPAs. The north star of browser developers should be to deprecate JS runtimes the same way they did Flash.
You can use a framework that abstracts all the WASM to JS communication for DOM access. There are many such framework already.
The only issue is that there’s a performance cost. Not sure how significant it is for typical applications, but it definitely exists.
It’d be nice to have direct DOM access, but if the performance is not a significant problem, then I can see the rationale for not putting in the major amount of work it’d take to do this.
Which framework is the best or most commonly used?
That is never going to happen until you create your own browser with a fork of the WASM spec. People have been asking for this for about a decade. The WASM team knows this but WASM wants to focus on its mission of being a universal compile target without distraction of the completely unrelated mission of being a JavaScript replacement.
I agree with the first part, but getting rid of JS entirely means that if you want to augment some HTML with one line of javascript you have to build a WASM binary to do it?
I see good use cases for building entirely in html/JS and also building entirely in WASM.
Could you list some of these downsides and what are the reason of their existence?
For starters, the DOM API is huge and expansive. Simply giving WASM the DOM means you are greatly expanding what the sandbox can do. That means lower friction when writing WASM with much much higher security risks.
But further, WASM is more than just a browser thing at this point. You might be running in an environment that has no DOM to speak of (think nodejs). Having this bolted on extension simply for ease of use means you now need to decide how and when you communicate its availability.
And the benefits just aren't there. You can create a DOM exposing library for WASM if you really want to (I believe a few already exist) but you end up with a "what's the point". If you are trying to make some sort of UX framework based on wasm then you probably don't want to actually expose the DOM, you want to expose the framework functions.
I was under the impression that this is very much still on the table, with active work like the component model laying the foundation for the ABI to come.
Isn't going through the JS APIs slow?
It used to be, in the early days, but nowadays runtimes have optimized the function call overhead between WASM and JS to near zero:
https://hacks.mozilla.org/2018/10/calls-between-javascript-a...
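To make the boundary concrete, here is a minimal sketch (assuming Node.js; the module is hand-assembled and the `env.ping` import name is made up for illustration): a WASM export that does nothing but call back into JS, crossed a thousand times.

```javascript
// Hand-assembled module: imports one JS function ("env.ping") and
// exports "run", whose body just calls the import once.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic "\0asm" + version 1
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,             // type section: () -> ()
  0x02, 0x0c, 0x01,                               // import section, 1 import
  0x03, 0x65, 0x6e, 0x76,                         //   module "env"
  0x04, 0x70, 0x69, 0x6e, 0x67, 0x00, 0x00,       //   field "ping", func of type 0
  0x03, 0x02, 0x01, 0x00,                         // function section: 1 func, type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export "run" -> func 1
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b, // code: call 0; end
]);

let calls = 0;
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), {
  env: { ping: () => { calls++; } }, // the JS side of the boundary
});

// 1000 cheap WASM -> JS boundary crossings.
for (let i = 0; i < 1000; i++) instance.exports.run();
console.log(calls); // 1000
```

The per-call cost is in the nanosecond range in modern runtimes; the expensive part (not shown here) is marshalling non-scalar arguments like strings.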
One of the things that I think make this tricky is that if you have any DOM references you now have visibility into a GCable object.
Part of the web Javascript security model is that you cannot see into garbage collection. So if you have some WASM-y pointer to a DOM element, how do you handle that?
I think with GC properly in, people might come at this problem again, but with pre-GC WASM this sounds pretty intractable.
> When can we finally kill JavaScript?
If you think JavaScript has problems I have bad news about the DOM…
I am waiting patiently from a distance to get my hands on a well-designed frontend language but can't help but wonder... is it really _that_ inefficient to call a JS wrapper to touch the DOM?
Most code already is so horribly inefficient that I can't imagine this making a noticeable difference in most scenarios.
Probably never. There's a pretty good recent thread on this topic:
https://news.ycombinator.com/item?id=44775801
I would bet on browsers being able to consume Typescript before WASM exposing any DOM API. That'd improve the Javascript situation a bit, at least.
I haven't really been following WASM development in the last year and didn't realize that WASM had moved to a versioned release model. I've been aware of the various features in development[1] and had thought many of the newer features were going to remain optional, but I guess implementations are now expected to support all the features to be able to claim compatibility with e.g. "WASM 3.0"?
It'll be interesting to see what the second non-browser-based WASM runtime to fully support 3.0 will be (I'm guessing wasmtime will be first; I'm not counting Deno since it builds on v8). Garbage collection seems like a pretty tricky feature in particular.
Does anyone know how this 3.0 release fits into the previously announced "evergreen" release model?[2]
> With the advent of 2.0, the Working Group is switching to a so-called “evergreen” model for future releases. That means that the Candidate Recommendation will be updated in place when we create new versions of the language, without ever technically moving it to the final Recommendation state. For all intents and purposes, the latest Candidate Recommendation Draft[3] is considered to be the current standard, representing the consensus of the Community Group and Working Group.
[1] https://webassembly.org/features/
[2] https://webassembly.org/news/2025-03-20-wasm-2.0/
[3] https://www.w3.org/TR/wasm-core-2/
> It'll be interesting to see what the second non-browser-based WASM runtime to fully support 3.0 will be (I'm guessing wasmtime will be first; ...)
Wasmtime already supports every major feature in the Wasm 3.0 release, I believe. Of the big ones: garbage collection was implemented by my colleague Nick Fitzgerald a few years ago; tail calls by Jamey Sharp and Trevor Elliott last year (with full generality, any signature to any signature, no trampolines required!); and I built our exceptions support which merged last month and is about to go out in Wasmtime 37 in 3 days.
The "3.0" release of the Wasm spec is meant to show progress and provide a shorthand for a level of features, I think, but the individual proposals have been in progress for a long time so all the engine maintainers have known about them, given their feedback, and built their implementations for the most part already.
(Obligatory: I'm a core maintainer of Wasmtime and its compiler Cranelift)
Wizard supports all of Wasm 3.0, but as a research tool, it only has an interpreter and baseline compiler tier (no opt compiler), so it doesn't run as fast as, say, V8 or wasmtime.
I suspect the versioning is going to replicate the JavaScript version system where versions are just sets of features that a runtime can support or not, I am not sure how feature discovery works in wasm though
The WebAssembly community should really focus more on the developer experience of using it. I recently completed a project where I wrote a compiler¹ targeting it and found the experience to be rather frustrating.
Given that Wasm is designed with formal semantics in mind, why is the DX of using it as a target so bad? I used binaryen.js to emit Wasm in my compiler and didn't get the feeling that I was targeting a well-designed instruction set. Maybe this is a criticism of Binaryen and its poor documentation, because I liked writing short snippets of Wasm text very much.
1. https://git.sr.ht/~alabhyajindal/jasmine
Binaryen has a lot of baggage from Wasm early days, when it was still a strict AST. Many newer features are difficult to manipulate in its model.
In our compiler (featured in TFA), we chose to define our own data structure for an abstract representation of Wasm. We then wrote two emitters: one to .wasm (the default, for speed), and one to .wat (to debug our compiler when we get it wrong). It was pretty straightforward, so I think the instruction set is quite nice. [1]
[1] https://github.com/scala-js/scala-js/tree/main/linker/shared...
For what it's worth, I also tried Binaryen from TypeScript and similarly found it frustrating. I switched to using wasm-tools from Rust instead, and have found that to be a vastly better experience.
Isn't wasm-tools for working with Wasm modules? Maybe I'm missing something. I was using Binaryen to compile an AST to WebAssembly text. Also worth mentioning that Binaryen is the official compiler/toolchain for this purpose which is why I expected more from it.
Currently you use Binaryen to build up a representation of a Wasm module, then call emitText to generate a .wat file from that. With wasm-tools you'd do the same thing via the wasm-encoder crate to generate the bytes corresponding to a .wasm file, and then use the wasmprinter crate to convert from the .wasm format to the .wat format. Alternatively, I believe the walrus crate gives you a somewhat higher-level API to do the same thing, but I haven't used it because it's much heavier-weight.
What were your specific pain points? One thing that can be annoying is validation errors. That's one of the reasons that Wizard has a --trace-validation flag that prints a nicely-formatted depiction of the validation algorithm as it works.
Validation errors were a bit of an issue, especially because Binaryen constructs an internal IR that remains opaque until we emit the Wasm text. I did consider Wizard for my project but settled on Wasmtime because I needed WASI support.
My major pain point was the documentation. The binaryen.js API reference¹ is a list of function signatures. Maybe this makes sense to someone more experienced, but I found it hard to understand initially. There is no explanation of what the parameters mean. For example, the following is the only information the reference provides for compiling an `if` statement:
In contrast, the Wasm instruction reference on MDN² is amazing. WASI again suffers from the same documentation issues; I didn't find any official resource on how to use `fd_write`, for example. Thankfully I found this blog post³. Wasm feels more inaccessible than other projects. The everyday programmer shouldn't be expected to understand PL research topics when they are trying to do something with it. I understand that's not the intention, but this is what it feels like.
1. https://github.com/WebAssembly/binaryen/wiki/binaryen.js-API
2. https://developer.mozilla.org/en-US/docs/WebAssembly/Referen...
3. https://tty4.dev/development/wasi-load-fd-write/
Thanks for bringing Wizard to my attention, the next time I need to validate wasm it's going to save me a ton of time.
I've tried using binaryen, and I've also tried emitting raw wasm by hand, and the latter was far easier. It only took ~200 lines of wasm-specific code.
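For a feel of how small "by hand" can get, here is a sketch (assuming Node.js) of a complete hand-emitted module exporting an `add` function; the comments mark the binary sections.

```javascript
// A complete .wasm binary, section by section: add(a, b) = a + b on i32s.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: 1 func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

const { add } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;
console.log(add(2, 3)); // 5
```

Most of an emitter's ~200 lines end up being LEB128 encoding and section-size bookkeeping; the instruction encoding itself is pleasantly regular.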
I found classic assembly easier to write from scratch (apples and oranges, I know). Learning materials should be able to leave out all this tooling, which is mostly Rust-based. We should be able to write Wasm by hand, just like assembly used to be taught; compilers and assembly are separate classes. I think it's a bad assumption that only compiler devs care about wasm. It's a compile target, sure, but framing it only that way won't broaden knowledge of it.
I'm a simple man who has simple needs. I want a better and faster way to pass Go structs in and out of the runtime that doesn't mean I have to do a sword dance on a parquet floor wearing thick knit wool socks and use some fragile grafted on solution.
If there can be a solution that works for more languages: great. I mostly want this for Go. If it means there will be some _reasonable_ limitations, that's also fine.
This is the truth, and it's not really much better in non-GCed languages either. (In reality my impression is the GCed wasm side runtimes are even worse).
Some of the least fun JavaScript I have ever written involved manually cleaning up pointers that in C++ would be caught by destructors triggering when the variable falls out of scope. It was enough that my recollections of JNI were more tolerable. (Including for go, on Android, curiously).
Then once you get through it you discover there is some serious per-call overhead, so those structs start growing and growing to avoid as many calls as possible.
I too want wasm to be decent, but to date it is just annoying.
You're doing native code, thus the solution is the same as in native code: your languages agree on a representation, normally C's, or you serialize and deserialize. Mixing language runtimes is just not a nice situation to deal with without the languages having first-class support for it, and it should be obvious why.
I am not sure what you actually want but it sounds like something where the component model (the backbone of WASI) might help.
It defines a framework that allows modules to communicate with structured data types, by letting each module decide how to map them to and from its linear memory (and, in future, the runtime GC heap).
In your case you would be able to define WIT interfaces for your Go types and have your compiler of choice use them to generate all the relevant glue code.
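As a sketch of what that could look like (the package, interface, and type names below are entirely hypothetical), a WIT file describing a Go struct and a couple of functions might read:

```wit
// Hypothetical WIT interface mirroring a Go "User" struct.
package example:users;

interface store {
  record user {
    id: u64,
    name: string,
    active: bool,
  }

  get-user: func(id: u64) -> option<user>;
  save-user: func(u: user);
}

world plugin {
  export store;
}
```

A bindings generator would then emit the Go-side glue that lifts and lowers `user` values across the module boundary, so no hand-written sword dance is needed.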
That would be more of a library than a WASM spec thing, no? I wrote a code generator that does this well for some internal use-cases.
Still looking forward to when they support OpenMP. We have an experimental Solvespace web build which could benefit quite a bit from that.
https://cad.apps.dgramop.xyz/
Open source CAD in the browser.
Since it hasn't been mentioned here yet: I wonder if the multiple-memories feature will somehow allow to avoid the extra copy that's currently needed when mapping a WebGPU resource. This mapping is available in a separate ArrayBuffer object which isn't accessible from WASM without calling into JS and then copying from the ArrayBuffer into the WASM heap and back.
Multiple WASM memories and Clang's/LLVM's address space feature sound like they should be able to solve that problem, but I'm not sure if it is as trivial as it sounds...
There has been a discussion (https://github.com/WebAssembly/multi-memory/issues/45) on the toolchain support, but I'm not sure if there have been steps to use multiple address spaces to support Wasm multi-memory in LLVM yet.
I'm just getting horrible segmenting and far-pointer vibes from the whole thing. I've been coding a classic Game Boy game for fun, so fiddling with memory mappings is part of the "fun", but for anything non-constrained I'd hate that.
We buried far pointers with DOS and Win16 for a good reason..
I'd take segment-pointers over copying megabytes of memory around anyday though ;)
It's not much different than dealing with all the alignment rules that are needed when arranging data for the GPU.
I'm still hype about WASM. This looks like a cool release. I'm running some pretty high traffic WASM plugins on envoy, running some plugins for terminal apps (zellij), and for one of my toy side projects, I'm running a wasm web app (rust leptos).
For 2 of those 3 use cases, i think it's not technically the optimal choice, but i think that future may actually come. Congratulations and nice work to everyone involved!
Does WASM still have 64 KiB pages? I get why for desktops, but there are use-cases for running WASM on microcontrollers where that's either inconvenient or outright impossible.
The one in particular I have in mind would be to put WASM on graphical calculators, in order to have a more secure alternative to the ASM programs (it's possible nowadays to write in higher-level languages, but the term stuck) that could work across manufacturers. Mid-range has RAM on the order of 256 KiB, but a 32-bit core clocked at more than 200 MHz, so there's plenty of CPU throughput but not a lot of memory to work with.
Sadly, the closest thing there is for that is MicroPython. It's good for what it does, but its performance and capabilities are nowhere near native.
https://github.com/WebAssembly/custom-page-sizes is a proposal championed by my colleague to add single byte granularity to Wasm page sizes, motivated by embedded systems and many other use cases where 64kb is excessive. It is implemented in wasmtime, and a Firefox implementation is in progress.
> Allow Wasm to better target resource-constrained embedded environments, including those with less than 64 KiB memory available.
If it has less than 64 kB of memory how is it going to run a WASM runtime anyway?
And even cheap microcontrollers tend to have more than 64 kB of memory these days. Doesn't seem remotely worth the complexity.
It's not about the whole microcontroller having less than 64kB of memory - it's that each WASM module has a minimum memory size of 64kB, regardless of how much it actually requires. Also, if you need 65kB of memory, you now have to reserve 2 pages, meaning your app now needs 128kB of memory!
We're working on WASM for embedded over at atym.io if you're interested.
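The page arithmetic above can be sketched in a few lines (`pagesFor` is just an illustrative helper, not part of any API):

```javascript
// Pages needed under the current 64 KiB page size.
const PAGE_SIZE = 64 * 1024;
const pagesFor = (bytes) => Math.ceil(bytes / PAGE_SIZE);

console.log(pagesFor(65 * 1024)); // 2 -> 128 KiB reserved for a 65 KiB need
console.log(pagesFor(64 * 1024)); // 1 -> exact fit
```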
> If it has less than 64 kB of memory how is it going to run a WASM runtime anyway?
There is WARDuino (https://github.com/TOPLLab/WARDuino and https://dl.acm.org/doi/10.1145/3357390.3361029).
For a runtime that accepts Wasm modules using a large fraction of the functionality, there is going to be a RAM requirement of a few KiB to a few tens of KiB. There seems to be a branch or fork of Wasm3 for Arduino (https://github.com/wasm3/wasm3-arduino).
If you are willing to do, e.g. Wasm -> AVR AOT compilation, then the runtime can be quite small. That basically implies that compilation does not happen on device, but at deployment time.
> in order to have a more secure alternative to the ASM programs
What security implications are there in graphical calculators in terms of assembler language?
Exam mode, or test mode. It's something that appeared about ten years ago, to ensure that a graphical calculator isn't loaded with cheats or has certain features enabled. The technical reason is that the RESET button no longer clears all of the calculator's memory (think Flash, not RAM) and proctors like to see a flashing LED that tells them everything's fine.
It's a flawed idea and has led to an arms race, where manufacturers lock down their models and jailbreaks break them open. Even NumWorks, who originally had a calculator that was completely unprotected and used to publish all of their source code on GitHub, had to give in and introduce a proprietary kernel and code signing, in order to stop custom firmwares and applications from accessing the LED and stop countries from outlawing their calculators.
Are the cases tamper proof as well? Because it's not like it's hard to open up a calculator and connect the LED somewhere else..
Sad state of affairs. I had no idea this was a thing.
Indeed. I got bit by the programming bug writing utility programs in TI-BASIC on my TI-83. I would've had a very different life trajectory had I not been able to do that.
Why WASM and not, like, java or something?
As in, Java ME?
Unless I'm mistaken, it's been on life support for the past 15 years. It's probably more heavyweight and firmware size/Flash usage is a concern. I don't think performance would be on par with WASM and there are use-cases where that really matters (ray tracing rendering for example). I'm also not sure there are many maintained, open-source implementations for it out there. I've also heard stories that it was quite a mess in practice because it was plagued by bugs and quirks specific to phone models, despite the fact that it was supposed to be a standard.
I'd gladly be proven wrong, but I don't think Java ME has a bright future. Unless you were thinking of something else?
On gc:
> Wasm GC is low-level as well: a compiler targeting Wasm can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm.
There's already a lot of misunderstandings about wasm, and I fear that people will just go "It supports GC, so we can just port Python/Java/C#/Go etc."
This is not a silver bullet. C++ or Rust are probably still going to be the way to go.
Relying on the GC features of WASM will require writing code centered around the abstractions for the compiler that generates WASM.
I thought that the purpose of GC in WASM was to allow such higher level languages to be placed there without a bulky runtime also in WASM.
What's the value proposition of WASM GC if not this?
As I understand it, WASM GC provides a number of low level primitives that are managed by the WASM host runtime, which would theoretically allow languages like Go or Python to slim down how much of their own language runtime needs to be packaged into the WASM module.
But those languages still need to carry around some runtime of their own, and I don't think it's obvious how much a given language will benefit.
>But how those languages still need to carry around some runtime of their own
Also, there will just be a special version of those language runtimes which probably won't be supported in 10 years' time. Just like a lot of languages no longer have up-to-date versions that can run on the Common Language Runtime.
Programming languages with type erasure would have no runtime, just raw program code and the WASM GC. Languages that have runtime types still need a runtime for that functionality.
The Kotlin wasm compiler was basically engineered on top of wasm's GC support. Works fairly OK. As far as I understand it's essentially the same garbage collector that is also used for regular javascript.
> This is not a silver bullet. Cpp, or rust are probably still going to be the way to go.
I don't think that's necessarily true anymore. But as you say, it depends on the compiler you use and how well it utilizes what is there. Jetbrains has big plans with Kotlin and Wasm with e.g. compose multiplatform already supporting it (in addition to IOS native and Android).
Dart is in a more advanced state on that front.
Wasm GC is a set of abstractions for compiler writers that lets GC-dependent languages run without shipping their own GC inside the already-GC'd browser/Wasm heap, and instead use the browser GC directly.
So yes, Java, C#, etc. will work better (if you look at the horrible mess the current C# WASM export generates, it basically ships with an inner platform containing a GC), and no, it will explicitly not speak with "javascript" objects (you can keep references to JS objects, but you cannot call JS methods directly).
C# cannot be compiled to WASM GC yet: https://github.com/WebAssembly/gc/issues/77.
This isn’t true at all in Dart for example which is a WASM-GC language. Literally one of the very main selling points of Dart is you write your code once and it runs anywhere, WASM is just another compile target like x64 or RISC-V or iOS.
Still no mention of DOM.
<sets alarm for three years from now>
See you all for WASM 4.0.
That old thing again ;)
Direct DOM access doesn't make any sense as a WASM feature.
It would be at best a web-browser feature which browser vendors need to implement outside of WASM (by defining a standardized C-API which maps to the DOM JS API and exposing that C API directly to WASM via the function import table - but that idea is exactly as horrible in practice as it sounds in theory).
If you need to manipulate the DOM - just do that in JS; calling from WASM into JS is cheap, and JS is surprisingly fast too. Just make sure that the JS code has enough 'meat', e.g. don't call across the WASM/JS boundary for every single DOM method call or property change. While the call itself is fast, the string conversion from the source-language-specific string representation on the WASM heap into JS strings and back is not free (getting rid of this string marshalling would be the only theoretical advantage of a 'native' WASM DOM API).
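A minimal sketch of that "enough meat" advice, with the DOM stubbed out and every name invented for illustration: one boundary call applies a whole batch of mutations instead of one call per mutation.

```javascript
// Stand-in for document.getElementById targets; a real app would use the DOM.
const fakeDom = new Map([
  ["title", { textContent: "" }],
  ["count", { textContent: "" }],
]);

// JS glue: ONE call from WASM applies a whole batch of (elementId, newText)
// pairs, instead of one WASM -> JS crossing per property change.
function applyTextBatch(ops) {
  for (const [id, text] of ops) {
    fakeDom.get(id).textContent = text;
  }
}

// The WASM side would build this op list in linear memory; inlined here.
applyTextBatch([["title", "Hello"], ["count", "42"]]);
console.log(fakeDom.get("title").textContent); // "Hello"
```

The batching amortizes both the call overhead and, more importantly, lets you marshal all the strings in one pass.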
WASM is an abbreviation for WebAssembly. If it doesn't have DOM access, WebAssembly is as related to the Web as JavaScript is to Java. A language ecosystem with no I/O capability is as much use as a one-legged man at an arse-kicking party.
Webgl can't access the dom either
Well, arguably the worst thing about WASM is the naming.
It's neither directly related to the web, nor is it an assembly syntax.
It's just another virtual ISA. "Direct DOM access for WASM" makes about as much sense as "direct C++ stdlib access for the x86 instruction set" - none ;)
Oh wow, that really is terrible naming... I always thought WASM was a specification for compiling code into something that runs natively in web browsers—like a web-specific compilation target.. Today I learned.
It's an instruction set architecture that browsers happen to support executing directly.
If you want to compare the situation to x86, direct DOM access for WebAssembly is more akin to the BIOS than C++ stdlib access. If it can't interact with the outside world, it's just a very special toy that you can only use to play a game that isn't any fun, and a good candidate for those 'What's the next COBOL?' discussions that come up every now and then.
In German, often things are named for where they came from, like Berliner or Frankfurter. WebAssembly came from the web, so makes sense :)
Like C, which offloads IO to the standard library?
It's still there - you can still do I/O in C, even if you have to call a library function. In WebAssembly, there's no mechanism for I/O of any sort.
But that's not the (original) argument being made. Just as IO belongs in POSIX and not C, DOM access belongs in some other standard, not WASM
There is, it is just called WASI and it specifies syscalls in a different way.
WebASM is an assembly-like dialect, after all.
...in WASM you also call a function to do IO though? That function is just provided by the host environment via the function import table, but conceptually it's the exact same thing as a Linux syscall, a BIOS int-call or calling into a Windows system DLL.
Isn’t the whole reason why people want DOM access is so that the JavaScript side doesn’t have any meat to it and they can write their entire web app in Rust/Go/Swift/etc compiled to webasm without performance concerns?
Spoiler: there will be performance concerns.
The bottleneck is in the DOM operations themselves, not javascript. This is the reason virtual-dom approaches exist: it is faster to operate on an intermediate representation in JS than the DOM itself, where even reading an attribute might be costly.
This isn't true. DOM access is fast. DOM manipulation is also fast. The issue is many DOM manipulations happening all at once constantly that trigger redraws. Redrawing the DOM can also be fast if the particular DOM being redrawn is relatively small. React was created because Facebook's DOM was enormous. And they wanted to constantly redraw the screen on every single interaction. So manipulating multiple elements simultaneously caused their rendering to be slow. So they basically found a way to package them all into a single redraw, making it seem faster.
> without performance concerns?
WASM isn't going to magically make the DOM go faster. DOM will still be just as slow as it is with Javascript driving it.
WASM is great for heavy-lifting, like implementing FFMPEG in the browser. DOM is still going to be something people (questionably) complain about even if WASM had direct access to it. And WASM isn't only used in the browser, it's also running back-end workloads too where there is no DOM, so a lot of use cases for WASM are already not using DOM at all.
> Direct DOM access doesn't make any sense as a WASM feature.
…proceeds to explain why it does make sense…
It's not a WASM feature, but would be a web browser feature outside the WASM standard.
E.g. the "DOM peeps" would need to make it happen, not the "WASM peeps".
But that would be a massive undertaking for minimal benefit. There's much lower-hanging fruit in the web-API world to fix (like, for instance, finally building a proper audio streaming API, because WebAudio is a frigging clusterf*ck). And if any web API would benefit from even a minimal reduction of JS <=> WASM marshalling overhead, it would be WebGL2 and WebGPU, not the DOM. But even for WebGL2 and WebGPU, the cost inside the browser implementation of those APIs is much higher than the WASM <=> JS marshalling overhead.
so the feature does make sense, it’s just the implementation crosses a Conway’s law boundary
(I also want this feature, to drive DOM mutations from an effect system)
> If you need to manipulate the DOM - just do that in JS, calling from WASM into JS is cheap, and JS is surprisingly fast too.
From the point of view of someone who doesn't do web development at all, and to whom JS seems entirely cryptic: This argument is weird. Why is this specific (seemingly extremely useful!) "web thing" guarded by a specific language? Why would something with the generality and wide scope of WASM relegate that specific useful thing to a particular language? A language that, in the context of what WASM wants to do in making the web "just another platform", is pretty niche (for any non-web-person)?
For me, as a non-web-person, the big allure of WASM is the browser as "just another platform". The one web-specific thing that seems sensible to keep is the DOM. But if manipulating that requires learning web-specific languages, then so be it, I'll just grab a canvas and paint everything myself. I think we give up something if we start going that route.
Think of it as traditional FFI (foreign function interface) situation.
Many important libraries have been written in C and only come with a C API. To use those libraries in non-C languages (such as Java) you need a mechanism to call from Java into C APIs, and most non-C language have that feature (e.g. for Java this was called JNI but has now been replaced by this: https://docs.oracle.com/en/java/javase/21/core/foreign-funct...), e.g. C APIs are a sort of lingua franca of the computing world.
The DOM is the same thing as those C libraries, an important library that's only available with an API for a single language, but this language is Javascript instead of C.
To use such a JS library API from a non-JS language you need an FFI mechanism quite similar to the C FFI that's been implemented in most native programming languages. Being able to call efficiently back and forth between WASM and JS is this FFI feature, but you need some minimal JS glue code for marshalling complex arguments between the WASM and JS side (but you also need to do that in native scenarios, for instance you can't directly pass a Java string into a C API).
Wow, I've used JNI many times, but many years ago. It is a bit painful. Cool to see it's been replaced by FFM, didn't know that existed.
Sure, but if someone came at the C-centric ecosystem today and said "let's do the work to make it so that any language can play in this world", then surely "just FFI through C" would be considered rather underwhelming?
Well that's what the WASM Component Model set out to solve, some sort of next-gen FFI standard that goes beyond C APIs:
https://component-model.bytecodealliance.org/
In my opinion it's an overengineered boondoggle, since "C APIs ought to be good enough for anything", but maybe something useful will eventually come out of it. So far it looks like it mostly replaces the idea of C APIs as lingua franca with "a random collection of Rust stdlib types" as lingua franca, which at least to me sounds utterly uninteresting.
While it can function as an FFI (it is indeed the basis of WASI), the component model is more about composability and interfaces.
The practical argument is that, while the DOM API was initially developed to be language agnostic (with more of an eye to Java/C++ than JavaScript), this has not been the case for a while now, and many web APIs use JavaScript data types and interfaces (e.g. async iterators) that do not map well to wasm.
The good news is that you can use very minimal glue code with just a few functions to do most JavaScript operations.
> Direct DOM access doesn't make any sense as a WASM feature.
I disagree. The idea of doing DOM manipulation in a language that is not Javascript was *the main reason* I was ever excited about WASM.
> The idea of doing DOM manipulation in a language that is not Javascript
...is already possible, see for instance:
https://rustwasm.github.io/docs/wasm-bindgen/examples/dom.ht...
You don't need to write Javascript to access the DOM. Such bindings still call JS under the hood of course to access the DOM API, but that's an implementation detail which isn't really important for the library user.
While technically possible - the calls to javascript slow things down and you're never going to get the performance of just writing javascript in the first place, much less the performance of skipping javascript altogether.
The calls to JS are quite cheap, when trusting the diagrams in here it's about 10 clock cycles on a 2 GHz CPU per call (e.g. 200 million calls per second):
https://hacks.mozilla.org/2018/10/calls-between-javascript-a...
The only thing that might be expensive is translating string data from the language-specific string representation on the WASM heap into the JS string objects expected by the DOM API. But this same problem would need to be solved in a language-portable way for any native WASM-DOM-API, because WASM has no concept of a 'string' and languages have different opinions about what a string looks like in memory.
But even then, the DOM is an inherently slow API starting with the string-heavy API design, the bit of overhead in the JS shim won't suddenly turn the DOM into a lightweight and fast rendering system.
E.g. it's a bit absurd to talk about performance and the DOM in the same sentence IMHO ;)
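A sketch of what that marshalling looks like on the JS side (assuming Node.js; the offset 16 is arbitrary): bytes sitting in WASM linear memory have to be copied and decoded into a JS string before any DOM API could consume them, and encoded back on the way in.

```javascript
// One page (64 KiB) of linear memory standing in for a module's heap.
const memory = new WebAssembly.Memory({ initial: 1 });
const heap = new Uint8Array(memory.buffer);

// "WASM side": a compiler would have written a UTF-8 string at some offset.
const written = new TextEncoder().encodeInto("héllo wasm", heap.subarray(16));

// "JS glue side": copy + decode into a JS string -- this is the marshalling
// cost that a native DOM API could theoretically avoid.
const jsString = new TextDecoder().decode(
  heap.subarray(16, 16 + written.written)
);
console.log(jsString); // "héllo wasm"
```

Note the non-ASCII `é` takes two UTF-8 bytes in the heap but is one JS code unit after decoding; per-language string layouts are exactly why a portable "native" string bridge is hard.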
Wasm could have had a concept of a string. I frequently mourn the rejection of the imo excellent stringref proposal. https://github.com/WebAssembly/stringref
Wasm 3.0 has the JavaScript string builtins now, according to TFA.
If your language and its compiler use JS String Builtins (part of Wasm 3.0) for their strings, then there is no cost to give them to JS and the DOM.
Fair enough, thanks for sharing!
Dart also has this and as you can see in the examples in the README the APIs look exactly the same as what you are used to in JavaScript but now are fully typed and your code compiles to WASM.
https://github.com/dart-lang/web
You don't get it.
... maybe you don't get it?
_Telling the browser how you want the DOM manipulated_ isn't the expensive part. You can do this just fine with Javascript. The browser _actually redrawing after applying the DOM changes_ is the expensive part and won't be any cheaper if the signal originated from WASM.
Don't get what, exactly?
Wasm doesn't specify any I/O facilities at all. DOM is no different. There's a strict host/guest boundary and anything interacting with the outside world enters Wasm through an import granted by the host. On the web, the host is the JS runtime.
Don't sleep on the Rust toolchain for this! You can have DOM-via-Wasm today, the tools generate all the glue for you and the overhead isn't that bad, either.
Yep and with better raw DOM performance than React:
https://leptos.dev/
https://krausest.github.io/js-framework-benchmark/current.ht...
Got a rec? The reply to you is talking about a component framework, rather than actual vanilla html/css access. I haven't seen anything, personally, that allows real-time, direct DOM interaction.
https://github.com/wasm-bindgen/wasm-bindgen is the tool for raw access to the DOM APIs.
DOM wouldn't be part of WASM, it'd be part of the host.
If there ever is a WASM-native DOM API, WASM GC should help a lot with that.
https://danfabulich.medium.com/webassembly-wont-get-direct-d...
Wasm 3.0, with its GC and exception support, contains everything you need. The rest is up to the source language to deal with. For example, in Scala.js [1], which is mentioned in the article, you can use the full extent of JavaScript interop to call DOM methods right from inside your Scala code. The compiler does the rest, transparently bridging what needs to be.
[1] https://www.scala-js.org/doc/project/webassembly.html
I wish the same, mate. Please, wasm team - I am more than happy to wait 3 years if you can guarantee that you are looking into the best possible way of integrating DOM manipulation.
I sometimes feel like JS is too magic-y; I want plain boring Go, and I want to write some DOM functions, preferably without using htmx.
Please give us more freedom! This might be the most requested feature, and it's how I came across wasm in the first place (a leptos video from some youtuber I think, sorry if I forgot).
I was even trying to be charitable and read the feature list for elements that would thin down a third party DOM access layer, but other than the string changes I’m just not seeing it. That’s not enough forward progress.
WASM is just an extremely expensive toy for browsers until it supports DOM access.
It's a chicken-and-egg situation. The people already using WASM either don't care about the DOM or realized long ago that going through a JS shim works just as well; the rest complain time and time again that WASM has no DOM access whenever there's an HN thread about WASM, but usually don't even use WASM for anything.
If there were major momentum behind people writing their web applications in wasm, there would be a reason to eventually tackle the massive undertaking of creating the ABI for that. Then all those applications could just recompile to make use of this new, hypothetically faster API. The bigger issue is that it just doesn't make sense to write frontend code in Rust or Go or whatever in the first place.
The whole JS ecosystem evolved to become a damn good environment to write UIs with; people don't appreciate the massive complexity this environment evolved to solve over decades.
My old team shipped a web port of our 3D modeling software back in 2017. The entire engine is the same as the desktop app, written in C++, and compiled to wasm.
Wasm is not now and will never be a magic "press here to replace JS with a new language" button. But it works really well for bringing systems software into a web environment.
If you give WASM access to everything, you've defeated the main reason it exists. Ambient authority is the reason we need WASM in the first place.
It's explicitly excluded from the Wasm-GC spec, too; way too much security-issue surface, which keeps all of the browser makers solidly in the "do not want to touch" camp.
Is there a technical reason for the web limit to be 16 GB specifically? Or is it just a round number picked so that the limit could be standardized? Also, has the limit on JS heap size (and ArrayBuffer size) also been relaxed to 16 GB or is it still lower?
I really hope this spurs AssemblyScript to just port to WASM GC: https://github.com/AssemblyScript/assemblyscript/issues/2808
There's comments in there about waiting for a polyfill, but GC support is widespread enough that they should probably just drop support for non-GC runtimes in a major version.
I don't think the GC in this version has the features required to enable a C# runtime on top of it yet: https://github.com/WebAssembly/gc/issues/77
I wonder what language this GC can actually be used for at this stage?
The article answers your question, there are at least 6 languages: Java, OCaml, Scala, Kotlin, Scheme, and Dart.
OCaml with wasocaml: https://github.com/OCamlPro/wasocaml
Dart for a long time now.
I'm not familiar with all the implementation details of objects in C#, but the list of issues mixes runtime implementation details (object layouts) that should be fairly low effort to work around with actual language/runtime features (references, finalization).
In general though, most regular C# code written today _doesn't directly_ use many of the features mentioned, apart from references. Libraries and bindings, however, do so a lot, since e.g. p/invoke isn't half as braindead as JNI was; but targeting the web really shouldn't bring along all these libraries anyhow.
So, making an MSIL runtime that handles most common C# code would map pretty much 1:1 onto Wasm-GC; some features like refs might need extra shims to emulate behaviour (or compiler specializations to avoid too heavy a performance penalty from extra object creation).
Regardless of what penalties etc. go in, the generated code should be far smaller and far less costly than the situation today, since languages won't have to ship their own GC and implement everything around it.
Part of the problem is you would need to fork the base class libraries and many popular nuget packages to remove any uses of ref/in/out, along with any interior references, spans, etc. The .NET type system has allowed 'interior references' (references into the inside of a GC object) for a long time and it's difficult to emulate those on top of WasmGC, especially if your goal is to do it at low cost.
It's definitely true that you could compile some subset of C# applications to WasmGC but the mismatch with the language as it's existed for a long time is painful.
Kotlin/WASM is a thing
Is the component model work (https://component-model.bytecodealliance.org/) related to the 3.0 release in any way?
No, the component model proposal is not part of the Wasm 3.0 release. Proposals only make it into a Wasm point release once they reach stage 5, and the component model is still under development and so is not trying to move through the phases yet.
Unlike any of the proposals which became part of Wasm 3.0, the component model does not make any changes to the core Wasm module encoding or its semantics. Instead, it's designed as a new encoding container which contains core Wasm modules, and adds extra information alongside each module describing its interface types and how to instantiate and link those modules. By keeping all of these additions outside of core Wasm, we can build implementations out of any plain old Wasm engine, plus extra code that instantiates and links those modules and converts between the core wasm ABI and higher-level interface types. The Jco project https://github.com/bytecodealliance/jco does exactly that using the common JS interface used by every web engine's Wasm implementation. So, we can ship the component model on the web without web engines putting in any work of their own, which isn't possible with proposals that add or change core wasm.
Thanks for clarifying.
This looks like a great release! Lots of stuff people have wanted for a long time in here.
Tail calls. Tail calls!
The tail call instructions (return_call and friends) were crucial for compiling Scheme. Safari had a bug in their validator for these instructions but the fix shipped in their most recent release so now you can use Wasm tail calls to their fullest in all major browsers.
Has anyone benchmarked 64bit memory on the current implementations? There's the potential for performance regressions there because they could exploit the larger address space of 64bit hosts to completely elide bounds checks when running 32bit WASM code, but that doesn't work if the WASM address space is also 64bit.
> WebAssembly apps tend to run slower in 64-bit mode than they do in 32-bit mode. This performance penalty depends on the workload, but it can range from just 10% to over 100%—a 2x slowdown just from changing your pointer size.
> This is not simply due to a lack of optimization. Instead, the performance of Memory64 is restricted by hardware, operating systems, and the design of WebAssembly itself.
https://spidermonkey.dev/blog/2025/01/15/is-memory64-actuall...
Oof, that's unfortunate. I'm sure there's good reasons why WASM works like it does but the requirement for OOB to immediately abort the program seems rough for performance, as opposed to letting implementations handle it silently without branching (e.g. by masking the high bits of pointers so OOB wraps around).
Is this WASM-specific though? Some apps suffer in performance when they move to 64-bit in general, due to larger pointers without taking sufficient advantage of (or needing) 64-bit data types, so the increased memory bandwidth and cache pressure slows them down (one of the reasons many people like a 32-bit address space with a 64-bit data model).
The blog post explains that it's more than that. Bounds checking, in particular, costs more for reasons having to do with browser implementations, for example, rather than for architectural reasons.
One bright point here is that the WASM changes may force v8 to improve its IPC by having a feature that Bun gets from JSC, which is passing strings across isolate boundaries.
IPC overhead is so bad in NodeJS that most people don’t talk about it because the workarounds are just impossibly high maintenance. We reach straight for RPC instead, and downplay the stupidity of the entire situation. Kind of reminiscent of the Ruby community, which is perhaps not surprising given the pedigree of so many important node modules (written by ex Rails devs).
Wasm 3.0 looks like a significant step forward. The addition of 64-bit address space and improved reference typing really expands the platform’s capabilities. Integration with WASI makes access to system resources more predictable, but asynchronous operations and JS interop remain key pain points. Overall, this release makes Wasm more viable not just in the browser, but also for server-side and embedded use cases.
Oh no, right after I started writing a binary decoder for 2.0. Does anybody know how much this changes things as far as a decoder is concerned?
Wasm only gets additive changes - the binary format can't change in a way that breaks any previously existing programs, because that would break the Web. So, you just have to add more opcodes to your implementation.
Awesome, thanks!
It introduces new types (structs and arrays), a new section for tags, and several dozen instructions (first-class functions, GC, tail calls, and exception handling). It generalizes to multiple memories and tables, as well as adding 64-bit memories. The binary format changes aren't too bad, but it's a fairly big semantic addition.
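The framing layer your decoder walks is unchanged, though; 3.0 only adds new ids and opcodes inside the section payloads. Here's a minimal sketch (my own, not from the spec text) of that outer loop:

```javascript
// Minimal sketch of the outer loop of a Wasm binary decoder: verify the
// header, then walk (id, LEB128 size) section frames, skipping payloads.
function sectionIds(bytes) {
  const dv = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  if (dv.getUint32(0, true) !== 0x6d736100) throw new Error("bad magic"); // "\0asm"
  if (dv.getUint32(4, true) !== 1) throw new Error("unsupported version");
  const ids = [];
  let off = 8;
  while (off < bytes.length) {
    const id = bytes[off++];
    let size = 0, shift = 0, b; // section size is a LEB128 u32
    do { b = bytes[off++]; size |= (b & 0x7f) << shift; shift += 7; } while (b & 0x80);
    ids.push(id);
    off += size; // skip the payload; real decoding happens per-section
  }
  return ids;
}

// A tiny hand-assembled module (exports `add`) to exercise the walker:
const sample = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section (1)
  0x03, 0x02, 0x01, 0x00,                                // function section (3)
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section (7)
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // code (10)
]);
console.log(sectionIds(sample)); // [ 1, 3, 7, 10 ]
```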
This looks like a huge release for C# and Java I guess. Half of the features are useful elements they no longer have to polyfill.
64-bit addr space and deterministic profiles ftw!
Really nice new set of features.
Can QuickJS run in WASM3.0 with deterministic profile?
That would be pretty rad!
Having wasm 3.0 and a project named wasm3 which doesn't seem to support wasm 3.0 is sure going to get confusing!
Does anyone know whether the exception handling implementation supports restartable exceptions like Common Lisp's and Scheme's?
Speaking for CL, it seems so for me.
The whole magic about CL's condition system is to keep on executing code in the context of a given condition instead of immediately unwinding the stack, and this can be done if you control code generation.
Everything else necessary, including dynamic variables, can be implemented on top of a sane enough language with dynamic memory management - see https://github.com/phoe/cafe-latte for a whole condition system implemented in Java. You could probably reimplement a lot of this in WASM, which now has an unwind-to-this-location primitive.
Also see https://raw.githubusercontent.com/phoe-trash/meetings/master... for an earlier presentation of mine on the topic. "We need means of unwinding and «finally» blocks" is the key here.
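For what it's worth, the "tags" underlying Wasm exception handling are exposed to JS as well, which is the identity a condition system would dispatch on. A tiny sketch of that JS-side API (assuming an engine with Wasm exception handling, which all major ones now ship):

```javascript
// A Tag declares a payload signature; an Exception is a throwable carrying
// values for some tag. Handlers match on tag identity, not on a type name.
const tag = new WebAssembly.Tag({ parameters: ["i32"] });
const exn = new WebAssembly.Exception(tag, [42]);
console.log(exn.is(tag));        // true
console.log(exn.getArg(tag, 0)); // 42
```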
No, that functionality would fall under the stack-switching proposal, which builds on the tags of Wasm exception handling.
Doesn't look like they took anything out.
But looks like you still cannot open a raw TCP or UDP socket? Who needs this internet network thing huh?
I appreciate it is a potential security hole, but at least make it behind a flag or something so it can be turned on.
Opening a socket would fall under WASI[1].
[1] https://github.com/WebAssembly/wasi-sockets
Great work. WASM will eat the world :D.
> GC and Exception handling
This was not necessary... what a mistake, especially EH.
Not including GC would have been a mistake. Having to carry a complete garbage collector with every program, especially on platforms like browsers where excellent ones already exist, would have been a waste.
It's also important because sometimes you want a WebAssembly instance to hold a reference to a GC object from Javascript, such as a DOM object, or be able to return a similar GC object back to Javascript or to another separate WebAssembly instance. Doing the first part alone is easy to do with a little bit of JS code (make the JS code hold a reference to the GC object, give the Wasm an id that corresponds to it, and let the Wasm import some custom JS functions that can operate on that id), but it's not composable in a way that lets the rest of those tasks work in a general way.
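The id-based pattern described above, sketched out (all names here are hypothetical glue, not a real binding layer):

```javascript
// JS owns the real object references; Wasm only ever sees opaque integer ids.
const handles = new Map();
let nextId = 1;

function retain(obj) {            // called before handing an object "into" Wasm
  const id = nextId++;
  handles.set(id, obj);
  return id;
}
const release = (id) => handles.delete(id); // Wasm must remember to call this

// Imports the module would receive; they dereference ids on the JS side:
const imports = {
  env: {
    node_set_text(id, text) { handles.get(id).textContent = text; },
  },
};

// Simulated use (a real module would make these calls from Wasm code):
const fakeNode = { textContent: "" };
const id = retain(fakeNode);
imports.env.node_set_text(id, "hello");
release(id); // forget this and the object leaks - exactly the non-composable
             // bookkeeping that a shared Wasm GC heap makes unnecessary
console.log(fakeNode.textContent, handles.size); // hello 0
```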
Doesn't every WASM program have to carry its own malloc/free today?
Yes, every wasm program that uses linear memory (which includes all those created by llvm toolchains) must ship with its own allocator. You only get to use the wasm GC provided allocator if your program is using the gc types, which can’t be stored in a linear memory.
Yes, but Emscripten comes with a minimal allocator that's good enough for most C code (e.g. code with low alloc/free frequency) and only adds minimal size overhead:
https://github.com/emscripten-core/emscripten/blob/main/syst...
how is that different from compiling against a traditional CPU which also doesn't have a built in GC? i mean those programs that need a GC already have one. so what is the benefit of including one on the "CPU"?
The fact that a minimum-size Go program is a few megabytes is acceptable in most places in 2025. If it were shipped over the wire for every run instead of a single install-time download, that would be a different story.
Garbage collection is a small part of the Go runtime, but it's not insignificant.
I will be interested to see if Go is able to make use of this GC and, if so, how much that shrinks wasm binaries
https://github.com/golang/go/issues/63904
Skimming this issue, it seems like they weren't expecting to be able to use this GC. I know C# couldn't either, at least based on an earlier state of the proposal.
this thread confirms my suspicions. some languages may benefit from a built-in GC, but those languages probably use a generic GC to begin with. whereas any language that has a highly optimized GC for its own needs won't be able to use this one.
The "CPU" in every browser already has one. This lets garbage-collected languages use that one. That's an enormous savings in code size and development effort.
i don't see the reduced development effort, after all, unless the language is only running on webassembly i still need to implement my own GC for other CPUs.
so most GC-languages being ported to webassembly already have a GC, so what is the benefit of using a provided GC then?
on the other hand i see GC as a feature that could become part of any modern CPU. then the benefit would be large, as any language could use it and wouldn't have to implement their own at all anymore.
Aside from code size, the primary benefit on the Web is that the GC provided to wasm is the same one as for the outer JavaScript engine, so an object from wasm can stay alive or get collected based on whether JS keeps references to it. So it's not really about providing a GC for a single wasm module (program), it's about participating in one cooperatively with other programs.
now that would make a lot of sense, thanks
Writing a GC that performs well often involves making decisions that are tightly coupled to the processor architecture and operating system as well as the language implementation's memory representations for objects. Using a GC that is already present can solve that problem.
> i don't see the reduced development effort, after all, unless the language is only running on webassembly i still need to implement my own GC for other CPUs.
I'd think porting an existing GC to WASM is more effort than using WASM's GC for a GC'd language?
i don't think so. first of all, you don't rewrite your code for every CPU but you just adapt some specific things. most code is just compiled for the new architecture and runs. second, those languages that are already running on wasm have already done the work. so at best new languages who haven't been ported yet will get any benefit from a reduced porting effort.
I think it's a "you don't pay for it if you don't use it" thing, so I guess it's fine. It won't affect me compiling my C or Zig code to WASM for instance since those languages have neither garbage collection nor exceptions.
It's kinda nice to have 1st class exception support. C++ exceptions barely work in Emscripten right now. Part of the problem is that you can't branch to arbitrary labels in WASM.
WASM isn't a language, so them adding stuff like this serves to increase performance and standardize rather than forcing compilers to emulate common functionality.
This allows more languages to compile to it. You don't need to use these features if you don't want to.
Besides making it much nicer for GC'd languages to target WASM, an important aspect is that it also allows cross-language GC.
Whereas with a manual GC, if you had a JS object holding a reference to an object on your custom heap, and your heap holds a reference to that JS object (with indirections sprinkled in to taste) but nothing else references it, that'd result in a permanent memory leak, as both heaps would have to consider everything held by the other as GC roots; so you'd still be forced to manually avoid cycles despite only ever using GC'd languages. Wasm GC entirely avoids this problem.
There's a joke in Brazil saying "Brazil is the country of the future and will always be that. It will never be the country of the present".
WASM is and will always be the greatest technology of the future. It will never be the greatest technology of the present.
WASM enables some pretty cool apps in the present, though. I think Figma heads the list.
We use it heavily here at Ditto. It's fantastic.
Wasm is one of the best solutions for running untrusted code. Alternatives are more complicated or have limited language choices.
steve job's ghost will prevent wasm adoption.
> steve job's ghost will prevent wasm adoption.
https://webassembly.org/features/
That isn't updated for Safari 26, but by that table Safari 18 is only missing 3 standardized features that Chrome supports, with a fourth that is disabled by default. So what's the point of your comment? Just to make noise and express your ignorance?
Historically speaking, apple has consistently limited web app functionality on iOS since 2008. I think we would be much further ahead if it wasn't for Apple’s policies under his leadership.
Apple took over the distribution to prioritize a cut to the app store which crippled/slowed the open web PWA and WASM adoption.
Sure, and that's why Asm.js (regular JS with special semantics) and later Wasm (bytecode translatable to JS) were so brilliant. It already worked on Safari, so they had the option to either:
A: look slow compared to other engines that supported it
B: implement it
Now, stuff like exception handling and tail calls probably isn't shimmable via JS, but at this point they don't gain much from being obstructionists.
What ignorance? Safari doesn't support the most important additions:
- memory64
- multiple memories
- JSPI (!!)
I recently explored the possibility of optimizing qemu-wasm in the browser [0], and it turns out that the most important features were exactly those Safari doesn't implement.
[0] https://zb3.me/qemu-wasm-test/
As a _user_ JSPI looks neat; however, as a compiler writer, JSPI looks like a horrible hairball of security issues and performance gotchas that degrade generated WASM code performance.
Say you have a WASM module with straight-line code that builds a stack and runs quickly because, apart from overflow checks, it can just truck on.
Now add this JS-Promise thing into the mix:
A: How does a JS module now handle the call into the Wasm module? Classic WASM was a synchronous call; should we change the signature of all wasm functions to async?
B: Do the WASM-internal calls magically become Promises and awaits (that would take a lot of performance out of WASM modules)? If not, we now have a two-color function world that needs reconciliation.
C: If we do some magic where the full frame is paused and stored away, what happens if another JS function then calls into the WASM module, awaits, and then the first one resumes? Any stack inside the wasm memory now has potential race conditions (and potentially security implications). Sure, we could put locks on all Wasm entries, but that could cause other unintended side effects.
D: Even if all of the above are solved, there's still the low-level issue of stack management for wasm-compiled code.
Looking at the mess that is emscripten's current solution to this, I really hope that this proposal gets very well thought out and not just railroaded in because V8's compiler manages to support it.
1: It has the potential to affect performance for all Wasm code just because people writing Qemu,etc are too lazy to properly abstract resource loading to cooperate with the Wasm model.
2: It can become a burden on the currently thriving Wasm ecosystem with multiple implementations (honestly, stuff like Wasm-GC is less disruptive even if it includes a GC).
Regarding C - yes, multiple stacks should be supported, and I literally opened a PR to add coroutine support based on JSPI to emscripten: https://github.com/emscripten-core/emscripten/pull/25111
JSPI-based coroutines are much faster than the old Asyncify ones (my demo shows that).
As for your core message - I'm just a user, but if Google engineers were able to implement it, then it is possible to implement it securely. I remember Google engineers arguing with Apple engineers in GH issues; I'm not on that level, I just see that JSPI is already implemented in Chrome, so you can't tell me it's not possible.
Multiple memories and Memory64 just became part of the spec. And JSPI is still being standardized. Is Safari slower to roll out new things? Yes. But it's hardly stopping adoption. Chrome has 70% of the browser market, Safari barely has 15%.
But Apple doesn't allow other browser engines on iOS, so this matters much more. I mean, they were forced to allow them, but of course they didn't actually comply; they created artificial barriers to ensure only Safari can be used on iOS.
EDIT: By "safari" here I actually mean WebKit.
For the mobile browser market, Chrome is still around 70%, Safari is a bit better off in mobile at 20% of global browser market share. That's still a minority platform. It's not inhibiting wasm feature adoption with those numbers.