We use WASM quite a bit to embed a large amount of Rust code, full of very company-specific domain logic, into our web frontend. Pretty cool, because now your backend and frontend can share all kinds of logic without endless network calls.
But it's safe to say that the interaction layer between the two is extremely painful. We have nicely modeled, type-safe code in both the Rust and TypeScript worlds and an extremely janky layer in between. You need a lot of inherently slow and unsafe glue code to make anything work. Part of that is WASM itself, part of it is wasm-bindgen. What were they thinking?
I've read that WASM wasn't designed for going back and forth over the boundary often; that it's better suited to having longer-running compute in the background and bringing over a chunk of data at the end. Why create a generic bytecode execution platform and then limit the use case so much? Not everyone is building an in-browser crypto miner.
The whole WASM story is confusing to me.
BlackFly 15 hours ago [-]
My reading of it is that the people furthering WASM aren't really associated with just browsers anymore and they are building a whole new VM ecosystem that the browser people aren't interested in. This is just my take since I am not internal to those organizations. But you have the whole web assembly component model and browsers just do not seem interested in picking that up at all.
So on the one side you have organizations that definitely don't want to easily give network/filesystem/etc. access to code, and on the other side you have people wanting it to be easier to get that access. The browser is the main driving force for WASM, as I see it, because outside of the browser the need for sandboxing is limited to plugins (where Lua often gets used), since otherwise you can just run a binary or a Docker container. So WASM doesn't really have much impetus to improve beyond compute.
boomskats 14 hours ago [-]
> So on the one side you have organizations that definitely don't want to easily give network/filesystem/etc. access to code and on the other side you have people wanting it to be easier to get this access
I don't think this is entirely fair or accurate. This isn't how Wasm runtimes work. Making it possible for the sandbox to explicitly request specific resource access is not quite the same thing as what you're implying here.
> The browser is the main driving force for WASM, as I see it
This hasn't been the case for a while. In your first paragraph you yourself say that 'the people furthering WASM are [...] building a whole new VM ecosystem that the browser people aren't interested in' - if that's the case, how can the browser be the main driving force for Wasm? It's true, though, that there's very little revenue in browser-based Wasm. There is revenue in enterprise compute.
> because outside of the browser the need for sandboxing is limited to plugins (where LUA often gets used) since otherwise you can run a binary or a docker container
Not exactly true when you consider that Docker containers are orders of magnitude bigger, slower to mirror and start up, require architecture-specific binaries, and are not great at actually 'containing' the fallout from insecure code, supply-chain vulns, etc. The potential benefits to enterprise orgs that ship thousands of multi-gig Docker containers a week, with microservices architectures that just run simple business logic, are very substantial. They just rarely make it to the HN frontpage, because they really are boring.
However, the Wasm push in enterprise compute is real, and the value is real. But you're right that the ecosystem and its sponsorship are still struggling - in some part due to the lack of support for the component model by the browser people. The component model support introduced in Go 1.25 has been huge though, at least for the (imho bigger) enterprise compute use case, and the upcoming update to the component model (WASI p3) should make a ton of this stuff way more usable. So it's a really interesting time for Wasm.
serbuvlad 13 hours ago [-]
> The potential benefits to enterprise orgs that ship thousands of multi-gig docker containers a week with microservices architectures that just run simple business logic, are very substantial.
What are you talking about? Alpine container image is <5MB. Debian container image (if you really need glibc) is 30MB. wasmtime is 50MB.
If a service has a multi-gig container, that size comes from things other than the Docker overhead itself, so it would be a multi-gig app under WASM too.
Also, Docker images get overlaid. So if I have many Go or Rust apps running on Alpine or Debian as simple static binaries, the 5MB/30MB base system only exists once. (Same as a wasmtime binary running multiple programs.)
jauntywundrkind 4 hours ago [-]
> Alpine container image is <5MB. Debian container image (if you really need glibc) is 30MB. wasmtime is 50MB.
That's not the deployment model of wasm. You don't ship the runtime and the code in a container.
If you look at crun, it can detect that your container is wasm and run it automatically, without your container bundling the runtime. I don't know the details of how crun handles it internally, but in wasmcloud, for example, you're running multiple different wasm applications atop the same wasm runtime. https://github.com/containers/crun/blob/main/docs/wasm-wasi-...
serbuvlad 30 minutes ago [-]
My point is that that's exactly the deployment model of Docker. So if I have 20 apps that are a Go binary + config on top of Alpine, that Alpine layer will only exist once and be shared by all the containers.
If I have 20 apps that depend on a 300MB bundle of C++ libraries + ~10MB for each app, as long as the versions are the same, and I am halfway competent at writing containers, the storage usage won't be 20 * 310MB, but 300MB + 20 * 10MB.
Of course in practice each of the 20 different C++ apps will depend on a lot of random mutually exclusive stuff leading to huge sizes. But there's rarely any reason for 20 Go (or Rust) apps to base their containers on anything other than lean Alpine or Debian containers.
Even for deploying wasm containers. Maybe there are certain technical reasons why they needed an alternate "container" runtime (wasi) to run wasm workloads with CRI orchestration, but size is not a legitimate reason. If you made a standard container image with the wasm runtime and all wasm applications simply base off that image and add the code, the wasm runtime will be shared between them, and only the code will be unique.
"Ah, but each container will run it's own separate runtime process." Sure, but the most valuable resource that probably wastes is a PID (and however many TIDs). Processes exec'ing the same program will share a .text and .rodata sections and the .data and .bss segments are COW'ed.
Assuming the memory usage of the wasm runtime (.data and .bss modifications + stack and heap usage) is vaguely k + sum(p_i), where p_i is some value associated with process i, running a single runtime instead of n runtimes saves (n - 1) * k memory. The question then becomes how much k is. If k is small (a couple megs), then there really isn't any significant advantage, unless you're running an order of magnitude more wasm processes than you would traditional containers. Or, in other words, if p_i is typically small. Or, in other other words, if p_i/k is small.
If p_i/k is large (if your programs have a significant size), wasi provides no significant size advantage, on disk or in memory, over just running the wasm runtime in a traditional container. Maybe there are other advantages, but size isn't one of them.
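A toy calculation makes the k-versus-p trade-off concrete (all numbers assumed purely for illustration):

```js
// k = fixed per-runtime overhead (MB), p = per-workload footprint (MB),
// n = number of workloads. Values are made up for illustration.
const k = 50, p = 10, n = 20;
const separateRuntimes = n * (k + p); // one runtime per container: 1200 MB
const sharedRuntime = k + n * p;      // one shared runtime: 250 MB
console.log(separateRuntimes - sharedRuntime); // (n - 1) * k = 950 MB
// With k = 2 MB instead, the saving drops to (n - 1) * 2 = 38 MB: noise.
```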
rkangel 7 hours ago [-]
These numbers are true, but you'd be amazed at the number of organisations that have containers that are just based on ubuntu:latest and don't strip the package cache, etc.
serbuvlad 7 hours ago [-]
ubuntu:latest is also 30MB, like Debian.
Obviously an unoptimized C++/Python stack that depends on a billion .so's (specific versions only) and pip packages is going to waste space. The advantage of containers for these apps is that it can "contain" the problem, without having to rewrite them.
The "modern" languages: Go and Rust produce apps that depend either only on glibc (Rust) or on nothing at all (Rust w/ musl and Go). You can plop these binaries on any Linux system and they will "just work" (provided the kernel isn't ancient). Sure, the binaries can be fat, but it's a few dozen megabytes at the worst. This is not an issue as long as you architect around it (prefer busybox-style everything-in-a-binary to coreutils-style many-binaries).
Moreover, a VM isn't much necessary, as these programming languages can be easily cross-compiled (especially Go, for which I have the most experience). Compared to C/C++ where cross-compiling is a massive pain which led to Java and it's VM dominating because it made cross-compilation unnecessary, I can run `GOOS=windows GOARCH=arm64 go build` and build a native windows arm64 binary from x86-64 Linux with nothing but the standard Go compiler.
The advantage of containers for Rust and Go lies in orchestration and separation of filesystem, user, ipc etc. namespaces. Especially orchestration in a distributed (cluster) environment. These containers need nothing more than the Alpine environment, configs, static data and the binary to run.
I fail to see what problem WASM is trying to solve in this space.
octopoc 6 hours ago [-]
You know what would be cool? A built in way for your browser to automatically download and run local-first software as a docker container, in the background without user confirmation.
The problem with that idea is docker isn’t as secure as wasm is. That’s one big difference: wasm is designed for security in ways that docker is not.
The other big difference is that wasm is in-process, which theoretically should reduce the overhead of switching between multiple separately running programs.
immibis 3 hours ago [-]
That wouldn't be cross-platform. Browsers couldn't even ship SQL because it would inevitably tie them to sqlite, specifically, forever. They definitely can't ship something that requires a whole Linux kernel.
nightpool 7 hours ago [-]
Surely moving those containers to alpine would be 1000x easier than rewriting everything in wasm though.
boomskats 1 hours ago [-]
Yeah, what you're talking about there is not what I was talking about.
pjmlp 13 hours ago [-]
Meanwhile the people using already established VM ecosystems don't value dropping several decades of IDEs, libraries and tools for yet another VM redoing more or less the same, e.g. application servers in Kubernetes with WASM containers.
WASM as it is, is good enough for non-trivial graphics and geometry workloads: visibility culling (given an octree/frustum), data de-serialization (point clouds, meshes), and actual BREP modeling. All of these a) are non-trivial to implement, b) would be a pain to rewrite and maintain, and c) run pretty well in WASM.
I agree WASM has its drawbacks, but the execution model is mostly fine for these types of tasks, where you offload the work to a worker and are fine waiting a millisecond or two for the response.
The main benefit for complex tasks like the above is that when a product needs to support an isomorphic web and native experience based on complex computation you maintain (quite many use cases actually, in CAD, graphics & GIS), the implementation and maintenance load drops by half. I.e., these _could_ be e.g. TypeScript, but then maintaining feature parity becomes _much_ more burdensome.
flohofwoe 14 hours ago [-]
> I’ve read that WASM isn’t designed with this purpose in mind to go back and forth over the boundary often.
It's fine and fast enough as long as you don't need to pass complex data types back and forth. For instance WebGL and WebGPU WASM applications may call into JS thousands of times per frame. The actual WASM-to-JS call overhead itself is negligible (in any case, much less than the time spent inside the native WebGL or WebGPU implementation), but you really need to restrict yourself to directly passing integers and floats for 'high frequency calls'.
Those problems are quite similar to any FFI scenario though (e.g. calling from any high level language into restricted C APIs).
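As a sketch of what 'integers and floats only' looks like in practice (the import name `setUniform4f`, the `uniforms` table, and `wasmBytes` are assumptions for illustration, not from the comment):

```js
// A numbers-only import: wasm can call this thousands of times per frame,
// since only scalars cross the boundary. `gl` is a WebGL2 context and
// `uniforms` a JS-side table mapping integer handles to location objects.
const imports = {
  env: {
    setUniform4f: (loc, x, y, z, w) =>
      gl.uniform4f(uniforms[loc], x, y, z, w),
  },
};
const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
```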
weinzierl 15 hours ago [-]
For performant WASM/JS interchange you might be interested in Sledgehammer Bindgen.
> Why create a generic bytecode execution platform and limit the use case so much?
How would you make such a thing without limiting it in some such way?
nothrabannosir 9 hours ago [-]
By giving it DOM support
fabrice_d 5 hours ago [-]
Which meant having garbage collection working across the WASM/JS barrier. This is now possible, but was not exactly trivial to design. It's a good thing that this was not rushed out.
chrismorgan 5 hours ago [-]
Check the context of the quote. DOM support is unrelated, it was about the Rust/TypeScript interface.
nothrabannosir 3 hours ago [-]
What I got from GP is that the interface is necessary because the wasm environment itself is limited. Quote:
> You need a lot of inherently slow and unsafe glue code to make anything work.
Idea being that with dom support you’d need less unsafe glue code.
Of course I was being glib but it is the point of TFA after all.
whizzter 5 hours ago [-]
The confusion is perhaps due to your usage focus versus the constraints browser and compiler makers face to make something secure.
First off, remember that initially all we had was JS; then Asm.JS was forced down Apple's throat by being "just" a JS-compatible performance hack (remember that Google had tried to introduce NaCl beforehand but it never got traction). You can still see the Asm.JS lineage in how Wasm branching opcodes work (you can always easily decompose them into while loops together with break and continue instructions).
The target market for NaCl, Asm.JS and Wasm seems to have been porting C/C++ games, even if other usages were always of interest, so while interop times can be painful, it's usually not a major factor.
Secondly, as a compiler maker (and from looking at performance profiles), I usually place languages into 3 categories.
Category 1: plain-memory accessors. Objects are usually a pointer number + offsets for members, with more or less manually managed memory. Cache friendliness is your own worry; CPU instructions are always simple.
C, C++, Rust, Zig, Wasm/Asm.JS, etc. go here.
Category 2: GC'd offset languages. While we still have pointers (now called references), they're usually restricted from being directly mutated, instead going through specialized access instructions. However, as with category 1, the actual value can often be accessed via pointer + offset, and object layouts are _fixed_, so less freedom vs JS but higher perf.
Also, there are often GC-specific instructions like read/write barriers associated with object accesses. Performance for actual instructions is still usually good, but GCs can affect access patterns to increase costs, and there is some GC collection unpredictability.
Java, C#, Lisps, high-perf functional languages, etc. usually belong here (with exceptions).
Category 3: GC'd free-prop languages. Objects are no longer of fixed size (you can add properties after creation). Runtimes like V8 try their best to optimize this away to approach category 2 languages, but abuse things enough and you'll run off a performance cliff. Every runtime optimization requires _very careful_ design of fallbacks that can affect practically any other part of the runtime (these manifest as type-confusion vulnerabilities if you look at bug reports), as well as how native bindings are handled.
JS, Python, Lua, Ruby, etc. go here.
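A tiny sketch of the category 3 "performance cliff" (simplified hidden-class behaviour of V8-style engines, not any specific engine's internals):

```js
// Same shape every time: the engine can compile `p.x` down to a fixed
// offset load, approaching category 2 performance.
function makePoint(x, y) { return { x, y }; }

const p = makePoint(1, 2);
p.z = 3; // post-creation property: the object's layout changes, so call
         // sites that only ever saw {x, y} objects become polymorphic
         // and fall back to slower property lookup.
```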
Naturally some languages/runtimes can straddle these lines (.NET/CIL has always been able to run C, and later JS, Ruby and Python in addition to C#, and today C# itself is gaining many category 1 features); I'm mostly putting languages into the category where the majority of user-created code runs.
To get back to the "troubles" of Wasm<->JS: as you noticed, they are of category 1 and 3. Since Wasm is "wrapped" by JS, you can usually reach into Wasm memory from JS, since it's "just a buffer"; the end-user security implications are fairly low, since JS has well-defined bounds checking (outside of performance costs).
The other direction is a pure clusterf from a compiler writer's point of view. Remember that most of those optimizations of cat 3 languages have security implications? Allowing access would require every precondition check to be replicated on the Wasm side as well as in the main JS runtime (or you build a unified runtime, but optimization strategies are often different).
The new Wasm-GC (finally usable with Safari since late last year) allows GC'd category 2 languages to be built directly to Wasm (instead of shipping their own GC via cat 1 emulation, like C#/Blazor, or being compiled to JS), and even here they punted on any access to category 3 (JS) objects, basically marking them as opaque references that can be held and passed back to JS (an improvement over previous Wasm, since there is no extra GC syncing as one GC handles it all, but still no direct access standardized, iirc).
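The "opaque references" behaviour is visible from the JS side via externref; a sketch, where `identity` is a hypothetical export typed (externref) -> externref:

```js
// Wasm can hold and return a JS object without ever looking inside it.
const obj = { hello: "world" };
const same = instance.exports.identity(obj);
console.log(same === obj); // true: the reference round-trips, opaquely
```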
So, security has so far taken center stage over usability. They fix things as people complain, but it's not a fast process.
Muromec 5 hours ago [-]
>The whole WASM story is confusing to me.
Think of it as a backend and not as a library, and it clicks.
api 7 hours ago [-]
> You need a lot of inherently slow and unsafe glue code to make anything work.
That describes much of modern computing.
WhereIsTheTruth 15 hours ago [-]
WASM is not a web scripting language
Trying to shoehorn Rust in as a web scripting language was your second mistake
Your first mistake was mixing Rust, TypeScript and JavaScript just to add logic to your HTML buttons
I swear, things get worse every day on this planet
9dev 14 hours ago [-]
WASM enables things like running a 20 year old CAD engine written in C++ in the browser. It isn’t a scripting language, it’s a way to get high-performing native code into web apps with a sensible bridge to the JS engine. It gets us closer to the web as the universal platform.
austin-cheney 12 hours ago [-]
The biggest problem solved by WASM is runtime portability. For security reasons many users and organizations will not download or install untrusted binaries. WASM provides a safer alternative in an often temporary way. The universal nature is an unintended byproduct of a naive sandbox, though still wonderful.
9dev 7 hours ago [-]
Why would WASM be any less secure than JavaScript?
WhereIsTheTruth 12 hours ago [-]
Exactly, and the CAD engine doesn't need to know about nor access the DOM
> It gets us closer to the web as the universal platform.
As a target
I don't want a pseudo 'universal' platform owned by Big Tech; or by governments as a substitute
So what should they have used to share logic between backend and frontend in a type-safe way?
demurgos 14 hours ago [-]
If the logic is compute heavy, Rust with Wasm can be a good approach. TypeScript on both ends is also a pragmatic choice. You can still have a boundary in the backend for lower level layers in Rust.
If the logic is merely about validation, then an IDL with codegen for TS and some backend language is probably better. There are also some more advanced languages targeting transpilation to both JS and a backend language such as Haxe, but they all have some trade-offs.
sakesun 14 hours ago [-]
Actually, WASM could enable many languages that are better for web scripting than JavaScript.
weinzierl 14 hours ago [-]
WASM certainly had the potential for this, but I am afraid without direct DOM access it is never going to happen.
If you're familiar with JS frameworks, you can think of it like this:
Dioxus : React :: Leptos : SolidJS
The key for me is that Leptos leans into a JSX-like templating syntax, as opposed to Dioxus's h()-like function calls. So Leptos is a bit more readable in my opinion, but that probably stems from my web dev background.
Not parent but GP: Love Leptos, I think they are on the right track.
Dioxus is good too, I think it has wider scope and they also obtained funding from external sources while Leptos is completely volunteer based.
jokoon 4 hours ago [-]
WASM is just hotfixing JavaScript so people can use any language they want.
It's all about JavaScript being popular and being the standard language. JS is not a great language, but it's standard across every computer, and that dwarfs anything else that can be said about it.
Adjusting browsers so they can run WASM was easy to do, but telling browser vendors to make the DOM work was obviously more difficult, because they might handle the DOM in various ways.
Not to mention js engines are very complicated.
evrimoztamur 12 hours ago [-]
It's not just the DOM, it's also all other APIs like WebGL2.
I ended up having to rewrite the entire interfacing layer of my mobile application (which used to be WebAssembly running in WebKit/Safari on iOS) because I was getting horrible performance losses each time I crossed that barrier. For graphics applications where you have to allocate and pass buffers or in general piping commands, you take a horrible hit. Firefox and Chrome on Windows/macOS/Linux did quite well, but Safari...
Everything has to pass the JavaScript barrier before it hits the browser. It's so annoying!
apatheticonion 12 hours ago [-]
The web is a platform that has so much unrealized potential that is absolutely wasted.
Wasm is the perfect example of this: it has the potential to revolutionize web (and desktop GUI) development, but it hasn't progressed beyond niche single-threaded use cases in basically 10 years.
adastra22 12 hours ago [-]
It should never have been web assembly. WASM is the fulfillment of the dream that started with Java VM in the 90’s but never got realized. A performant, truly universal virtual machine for write-once, run anywhere deployment. The web part is a distraction IMHO.
xeonmc 11 hours ago [-]
What would you propose if you were to rename it?
Generalized Assembly? GASM?
passivegains 8 hours ago [-]
I'd probably go with something like Open Regular Generalized ASseMembly.
falcor84 8 hours ago [-]
How about "Optimized Reduced Generalized Assembly for Simulated Machines"?
msgodel 5 hours ago [-]
The classpath that came with the JVM was much better thought out than the web is. That's the real problem.
croes 11 hours ago [-]
But the blocking of ad, tracking and miner scripts gets more complicated with WASM
renox 11 hours ago [-]
"Dream", well until you think about i18n and a11y..
adastra22 10 hours ago [-]
What do you mean?
Eavolution 4 hours ago [-]
i18n: internationalisation, a11y: accessibility
adastra22 3 hours ago [-]
What I mean is, what does that have to do with a portable executable binary format?
renox 21 minutes ago [-]
What I mean is that portable execution is only a very small part of 'good' SW: you need also security, i18n, a11y etc.
0x696C6961 11 hours ago [-]
What does a compiler target have to do with accessibility?
secondcoming 12 hours ago [-]
Why did Java fail though, and why would wasm succeed when the underlying philosophy is the same?
Aardwolf 11 hours ago [-]
I'm not sure how it's for others, but for me there was a perception issue with java applets on the web in the mid 2000s:
Java applets loading on a website started as a gray rectangle, which loaded very slowly, and sometimes failed to initialize with an "uninited" error. Whenever you opened a website with a java applet (like could happen with some math or physics related ones), you'd go "sigh" as your browser's UI thread itself halted for a while
Flash applets loading on a website started as a black rectangle, did not cause the UI thread to halt, loaded fast, and rarely gave an error
(the only reason I mention the gray vs black rectangle is because seeing a gray rectangle on a website made me go "sigh")
JavaScript was not yet optimized but the simple JS things that worked, did work without loading time.
Runescape (a 3D MMORPG from the early 2000s that still exists) used Java though, and somehow they managed to use it properly, since that one never failed to load and didn't halt the browser's UI either, despite being way more complex than any math/physics Java applet demo. So if Java had forced their applets to do whatever Runescape was doing so correctly, they'd not have had this perception issue...
adastra22 10 hours ago [-]
The fact that we said “Java” and you went to thinking about “Java applets” is part of the problem. Java was meant to be a universal executable format. It ended up confined to the web (mostly, at least in the popular consciousness).
Aardwolf 8 hours ago [-]
Well actually that was because the topic of the article was WebAssembly :) I have seen Java used for backends / software as well (and other than the lack of unsigned integers for e.g. crypto/hashing/compression/..., and the lack of operator overloading for e.g. vectors/matrices/bigints/..., it's 'fine' to me)
adastra22 6 hours ago [-]
But that’s the point! “Web” Assembly really has nothing to do with the web, or browsers :) It came out of the web standards groups, that’s all. It is an architecture agnostic executable format.
euroderf 4 hours ago [-]
Yup, "Enterprise Java" really grated.
Until it didn't.
DonHopkins 5 hours ago [-]
That's because Java used the old piece-of-shit NPAPI (Netscape Plugin Application Programming Interface) from 1995, first released in the NetScape 2.0b3 Plug-in SDK:
>I hope NetScape can come up with a plug-in interface that is good enough that they can implement their own navigator components with it (like the mail reader, outliner, progressive jpeg viewer, etc). The only way it's going to go anywhere is if they work closely with developers, and use the plug-in interface for non-trivial things themselves. Microsoft already has a VRML plug-in for their navigator, so presumably they have a plug-in interface, and from what I've seen on their web site, it may not be "good enough", but it's probably going to do a lot more that you can do with NetScape right now, since they're exposing a lot of their navigator's functionality through OLE. They seem to understand that there's a much bigger picture, and that the problems aren't trivial. Java isn't going to magically solve all those problems, folks.
>Wow, a blast from the past! 1996, what a year that was.
>Sun was freaking out about Microsoft, and announced Java Beans as their vaporware "alternative" to ActiveX. JavaScript had just come onto the scene, then Netscape announced they were going to reimplement Navigator in Java, so they dove into the deep end and came up with IFC, which designed by NeXTStep programmers. A bunch of the original Java team left Sun and formed Marima, and developed the Castanet network push distribution system, and the Bongo user interface editor (like HyperCard for Java, calling the Java compiler incrementally to support dynamic script editing).
>At the time that NSAPI came around, JavaScript wasn't really much of a thing, and DHTML didn't exist, so not many people would have seriously thought of actually writing practical browser extensions in it. JavaScript was first thought of more as a way to wire together plugins, not implement them. You were supposed to use Java for that. To that end, Netscape developed LiveConnect.
>Microsoft eventually came out with "ActiveX Behavior Components" aka "Dynamic HTML (DHTML) Behaviors" aka "HTML Components (HTCs)" that enabled you to implement ActiveX controls with COM interfaces in all their glory and splendor, entirely in Visual Basic Script, JavsScript, or any other language supporting the "IScriptingEngine" plug-in interface, plus some XML. So you could plug in any scripting language engine, then write plug-ins in that language! (Easier said than done, though: it involved tons of OLE/COM plumbing and dynamic data type wrangling. But there were scripting engines for many popular scripting languages, like Python.)
>Though Netscape has ceased development efforts on its Java-based browser, it may pass the baton to independent developers.
Shockwave (the Macromedia Director Player Library) came long before Flash, and it used NPAPI (and ActiveX on IE), but later on, Google developed another better plug-in interface called "Pepper" for Flash.
1995: Netscape releases NPAPI for Netscape Nagivator 2.0, Macromedia releases Shockwave Player on NPAPI for playing Director files
1996: Microsoft releases ActiveX, FutureWave releases FutureSplash Animator and NPAPI player for FutureSplash files, Macromedia acquires FutureSplash Animator and renames it Flash 1.0
2009: Google releases PPAPI (Pepper Plugin API) as part of the Native Client project, suddenly Flash runs much more smoothly
dijit 12 hours ago [-]
Lack of abundance of the runtime, lack of ease of distributing programs, a permission model which was bolted on instead of an appropriate sandboxing mechanism (leading to authorisation problems), and performance: people were not ready to sacrifice a very significant amount for no reason.
Oh, and breaking changes between versions, meaning you needed multiple runtimes and still got weird issues in some cases.
w10-1 1 hours ago [-]
> Why did Java fail though
Um, Java has dominated enterprise computing, where the money is, for 25+ years.
There's no money in browser runtimes. They're built mostly defensively, i.e., to permit ads or to prohibit access to the rest of the machine.
> why would wasm succeed when the underlying philosophy is the same?
wasm is expressly not a source language; people use C or Rust or Swift to write it. It's used when people want to move compute to the browser, to save server resources or to move code to data instead of data to the server. Thus far, it hasn't been used for much UI, i.e., to replace JavaScript.
Java/Oracle spent a lot of money to support other JVM languages, including Kotlin, Scala, Clojure - for similar reasons, but also without trying to replace Javascript, which is a loss leader for Google.
noosphr 12 hours ago [-]
High-level and opinionated. The JVM bytecode is much closer to wasm than Java ever was.
adastra22 12 hours ago [-]
To expand on this: WASM is closer to LLVM bitcode, whereas the Java VM is really designed around what the Java programming language needs.
thaumasiotes 12 hours ago [-]
I asked a number of times on HN why wasm was good when java applets, exactly the same thing, were bad. There was a vague feeling that java applets were insecure and that this would somehow not be an issue for wasm.
It's not just applets; we also had Flash, which was a huge success until it was suddenly killed.
As far as I can tell, the difference between java applets and Flash is that you, the user, have to install java onto your system to use applets, whereas to use Flash you have to install Flash into your browser. I guess that might explain why one became more popular than the other.
detaro 12 hours ago [-]
I don't think "exactly the same thing" is accurate. And WASM has put more effort into sandboxing, both in the design (very limited interfaces outside the sandbox) and implementations (partially because we've just gotten a lot better at that as an industry).
croes 11 hours ago [-]
But now you can do more in the browser than back then with Java applets.
Crypto miners weren’t a thing for Java applets
detaro 9 hours ago [-]
Only because Java applets died before crypto mining became a thing / got turned into a "click to enable" thing because of security problems.
croes 4 hours ago [-]
That’s my point. It’s Java applets all over again but now with crypto miners
whywhywhywhy 4 hours ago [-]
WASM doesn’t feel like an “applet” and can be either seamlessly integrated or take over the space.
Applets felt horrible, maybe if they appeared today it would be different but back then the machines were not powerful enough and the system not integrated enough to make it feel smooth.
immibis 11 hours ago [-]
There wasn't a vague feeling. Both kept getting exploited. My favorite is Trusted Method Chaining, which is hard to find a reference on now, but showed the whole Java security model was fundamentally flawed. These days that security model has simply been removed: all code in the VM is assumed to run in the privilege level of the VM.
WASM sandboxes the entire VM, a safer model. Java ran trusted and untrusted code in the same VM.
Flash, while using the whole-VM confinement model, simply had too many "boring" exploits, like buffer overflows and so on, and was too much of a risk to keep using. While technically nothing prevented Flash from being safe, it was copyrighted by Adobe, and Adobe didn't make it safe, and no one else was allowed to.
brazzy 11 hours ago [-]
> There was a vague feeling that java applets were insecure and that this would somehow not be an issue for wasm.
Nothing "vague" or "somehow" about that.
Applets were insecure because A) they were based on the Netscape browser plugin API, which had a huge attack surface, and B) they ran in a normal JVM with a standard API that offers full system access, restricted only by a complex sandbox mechanism which again had a huge attack surface.
This IS, in fact, not an issue for wasm, since A) as TFA describes it has by default no access at all to the JavaScript browser API and has to be granted that access explicitly for each function, and B) the JavaScript browser API has extremely restricted access to OS functionality to begin with. There simply is no API at all to access arbitrary files, for example.
diffuse_l 12 hours ago [-]
Flash was a security nightmare, with multiple vulnerabilities discovered regularly.
It was eventually killed because Apple decided it wouldn't support it on the iPhone.
shagmin 5 hours ago [-]
And conveniently around the same time javascript was rapidly evolving.
euroderf 4 hours ago [-]
Off-topic but... whatever happened to Real Player? Installing that was always a nightmare.
adastra22 12 hours ago [-]
They are vastly different. WASM is much more low-level, and supports a wider range of program types, including low-level embedded stuff.
nikanj 11 hours ago [-]
Every few months Sun / Oracle would release a new update requiring new incantations in the manifest files. If you didn't constantly release patched versions of your applet, your software stopped working.
Javascript from 20 years ago tends to run just fine in a contemporary browser.
DonHopkins 5 hours ago [-]
Oracle bought Sun and now owns Java, remember? All technical discussions about Java -vs- any other languages can now be immediately terminated by mentioning the word "Lawnmower", which overrides all technical issues.
Java didn't fail; it was replaced by the web to a large extent, but remains strong, not only as the original JVM but also as the Android clone and .NET. As for why the clones... IP issues. Otherwise Java would dominate, like the web does.
lbotos 12 hours ago [-]
Curious: what are the use cases you want that are currently blocked?
I’ve personally felt like it has been progressing, but I’m hoping you can expand my understanding!
3cats-in-a-coat 12 hours ago [-]
Give me one thing that your theoretical WASM can "revolutionize". Aside from more efficient covert crypto mining on shady sites.
tlb 12 hours ago [-]
I use it for a web version of some robotics simulation & visualization software I wrote in C++. It normally runs as an app on Mac or Linux, but compiling to WASM lets me show public interactive demos.
Before WASM, the options were:
- require everyone to install an app to see visualizations
- just show canned videos of visualizations
- write and maintain a parallel Javascript version
Doesn't work, neither with Safari nor with Chrome, at least not on macOS Monterey. I guess the whole stack is too modern for my 18-core Intel iMac Pro.
High-performance web-based applications are pretty high on my list.
Low memory usage and low CPU demand may not be a requirement for all websites because most are simple, but there are plenty of cases where JavaScript/TypeScript is objectively the wrong language to be using.
Banking apps, social network sites, chat apps, spreadsheets, word processors, image processors, jira, youtube, etc
Something as simple as multithreading is enough to take an experience from "treading water" to "runs flawlessly on an 8 year old mobile device". Accurate data types are also very valuable for finance applications.
Another use case is sharing types between the front and back end.
unrealhoang 10 hours ago [-]
People already appreciate the access wasm allows to low-level native libraries like duckdb, sqlite, imagemagick, ffmpeg… Or high-performance games/canvas-based applications (Figma).
But CRUD developers don’t know/care about those, I guess.
tpm 11 hours ago [-]
With access to the DOM it could run with no (or just very little) JS: no TS-to-JS transpiler, no web-framework-of-the-month wobbly frontends perpetually reinventing the wheel. One could use a sane language for the frontend. That would be quite the revolution.
xg15 6 hours ago [-]
Is there any data on the performance cost of JS/WASM context switches? The way the architecture is described, it sounds as if the costs could be substantial, but the approaches described in the article basically hand them out like candy.
This would sort of defeat the point that WASM is supposed to be for the "performance critical" parts of the application only. It doesn't seem very useful if your business logic runs fast, but requires so many switching steps that all performance benefits are undone again.
markdog12 5 hours ago [-]
Yeah, it's very unfortunate for WebGL/WebGPU apps, where every call has to pass/convert typed arrays and issue a js gl call. It pretty much kills any advantage of using WASM. Hope that changes.
breve 5 hours ago [-]
WebAssembly is the faster option for WebGL applications on the web, not least because you also might want to run things like physics engines.
How can you reconcile this with all of the AAA games that have been shown to work well on Wasm+WebGL? What is different between your usage and theirs?
snapcaster 4 hours ago [-]
A quick search shows that there aren't any AAA games that support those platforms. What are you referring to?
Not entirely sure, but C#'s Blazor is amazing. I can stick to purely C# code, front-end and back-end; we rarely call out to JS unless it's for things like file-upload dialogs. I don't want to ever touch JavaScript again after this workflow.
Edit:
And if you don't want to do "WebAssembly", you can have it do it all server-rendered; think of it as a SPA on steroids.
fidotron 4 hours ago [-]
This problem is how you spot people that have tried to do it vs those that just talk about it. Everyone ends up with batching calls back and forth because the cost is so high.
Separately, the conceptual mismatch when the JS has to allocate/deallocate things on the wasm side is also tedious to deal with.
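The batching usually ends up looking something like this (buffer layout and opcodes invented for illustration): wasm fills a command buffer in linear memory, and JS drains the whole batch in one crossing per frame.

```js
// Each command is 4 floats: [opcode, a, b, c]. Wasm returns (ptr, count)
// once per frame; JS then dispatches everything in a single pass.
function drainCommands(memory, ptr, count) {
  const cmds = new Float32Array(memory.buffer, ptr, count * 4);
  for (let i = 0; i < count; i++) {
    const base = i * 4;
    const op = cmds[base];
    // dispatch on `op` to the matching canvas/WebGL call, using
    // cmds[base + 1], cmds[base + 2], cmds[base + 3] as arguments
  }
}
```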
replyifuagree 28 minutes ago [-]
I wonder how better DOM support would help a rust web application frontend framework like Leptos (https://leptos.dev)? Maybe a smaller WASM payload?
jauntywundrkind 16 hours ago [-]
Law of question marks on headlines holds here: no / never seems to be the answer.
The article also discussed ref types, which do exist and do provide... something. Some ability to at least refer to host objects. It's not clear what that enables or what its limitations are.
Definitely some feeling of being rug-pulled in the shift here. It felt like there was a plan for good integration, but fast-forward half a decade+ and there's been so much progress and integration, yet it's still so unclear how WebAssembly is going to alloy with the web; it seems like we have reams of generated glue code doing so much work to bridge systems.
Very happy that Dan at least checked in here, with a state-of-the-wasm-for-web-people type post. It's been years of waiting and wondering, and I've been keeping my own tabs somewhat through the twists and turns, but having some historical artifact, some point-in-time recap to go look at like this: it's really crucial for the health of a community to have some check-ins with the world, to let people know what to expect. Particularly for the web, wasm has really needed an updated State of the Web WebAssembly.
I wish I felt a little better though! Jco is amazing, but running a JS engine in wasm to be able to use wasm components is gnarly as hell. Maybe by 2030 wasm & wasm components will be doing well enough that browsers will finally rejoin the party & start implementing again.
weinzierl 15 hours ago [-]
"Definitely some feeling of being rug-pulled in the shift here."
Definitely feeling rug-pulled.
What I think all the people who harp on "Don't worry, going through JS is good enough for you" are missing is the subtext of their message. They might objectively be right, but in the end what they are saying is that they are content with WASM being a second-class citizen in the web world.
This might be fine for everyone needing a quick and dirty solution now, but it is not the kind of narrative that draws in smart people to support an ecosystem in the long run. When you bet, you bet on the rider and not the domestique.
flohofwoe 13 hours ago [-]
> that they are content with WASM being a second class citizen in the web world
Tbh, most of the ideas so far to enable more direct access of Javascript APIs from WASM have a good chance of ruining WASM with pointless complexity.
Keeping those two worlds separate, but making sure that 'raw' calls between WASM and JS are as fast as they can be (which they are) is really the best longterm solution.
I think what people need to understand is that the idea of having 'pure' WASM browser applications which don't involve a single line of Javascript is a pipe dream. There will always be some sort of JS glue code, it might be generated and you don't need to directly deal with it, but it will still be there, and that's simply because web APIs are first and foremost designed for usage from Javascript.
Some web APIs have started to 'appease' WASM by adding 'garbage-free' function overloads, which IMHO is a good thing because it may help to reduce overhead on the JS side, but this takes time and effort to be implemented in all browsers (and most importantly, a will by mostly "JS-centric" web people to add such helper functions which mostly only benefit WASM).
sidewndr46 7 hours ago [-]
I'm always baffled by the crowd that suggests "Just use Javascript to interface it to the DOM!". If that's the outcome of using WASM, couldn't I just write Javascript?
fabrice_d 5 hours ago [-]
Indeed, you should. I haven't found a Rust UI project that compiles to Wasm and has good ergonomics; they all seem to make the mistake of being frameworks that control the whole app lifecycle and reinvent either markup or cumbersome ways to build your UI.
What would be nice is to use Wasm for component libraries instead, or for progressive enhancement (eg. add sophisticated autocomplete support to an input field).
Muromec 5 hours ago [-]
Maybe you should just write javascript. What's wrong with that.
zihotki 12 hours ago [-]
One could say second class, another could say that's a good separation of concerns. Having direct access would lead to additional security issues and considerations.
I wish it was possible to disable WASM in browsers.
flohofwoe 10 hours ago [-]
> I wish it was possible to disable WASM in browsers.
In Firefox at least: navigate to about:config and then `javascript.options.wasm => false` seems to do the job.
This causes any access to the global WebAssembly object to fail with `WebAssembly is not defined` (e.g. it won't be possible to instantiate wasm blobs).
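A page can observe that state with a one-line feature check:

```js
// With javascript.options.wasm = false, the global binding is simply gone.
console.log(typeof WebAssembly === "undefined" ? "wasm disabled"
                                               : "wasm enabled");
```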
hoodchatham 15 hours ago [-]
Reference types make wasm/js interoperability way cleaner and easier. Wasm-GC added a way to test a function pointer for whether it will trap or not.
And JSPI has been a standard since April and is available in Chrome >= 137. I think JSPI is the greatest step forward for WebAssembly in the browser ever. Just need Firefox and Safari to implement it...
jauntywundrkind 4 hours ago [-]
I'd really love a deep dive on what reference types enable and what limitations they have. Why are reference types not an end-all be-all "When is WebAssembly Going to Get DOM Support?" 'we have them now' answer?
dfabulich 42 minutes ago [-]
I think the easiest way to explain reference types is to contrast it with how we used to do it.
WASM strings aren't JS strings; they're byte arrays. (WASM only knows about bytes, numbers (integers/floats), arrays, functions, and modules.)
In the old days, to pass a JS string to WASM, you'd first have to serialize the JS string to a byte array (with the JS TextEncoder API, usually), and then copy the byte array into WASM, byte by byte. That took two O(n) steps, one to serialize to a byte array, and another to copy the byte array.
Well, now, you can serialize the JS string to a byte array and then transmit it by reference to WASM, saving you a copy step. You still have one O(n) step to serialize the string to a byte array, but at least you only have to do it once, amirite?
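For reference, the "old days" pattern looks roughly like this (the `alloc` export and its name are illustrative conventions, not a standard):

```js
// Serialize (O(n)) and copy (O(n)) a JS string into wasm linear memory.
const encoder = new TextEncoder();
function passString(instance, str) {
  const bytes = encoder.encode(str);                // step 1: serialize
  const ptr = instance.exports.alloc(bytes.length); // wasm-side buffer
  new Uint8Array(instance.exports.memory.buffer, ptr, bytes.length)
    .set(bytes);                                    // step 2: copy
  return [ptr, bytes.length];
}
```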
If you want your WASM to call `document.createElement("div")`, you can pass `document` and `createElement` by reference from JS to WASM, then have WASM create an array `['d', 'i', 'v']`, and send all of those back to JS, where the JS will convert the array back into a JS string and then call `createElement.call(document, "div")`.
It's better, certainly, but it's never going to be as fast as just calling `document.createElement("div")` in JS, not as long as `createElement` requires a JS string instead of a byte array.
The proper fix would be to define a whole new "low-level DOM API", which would work exclusively with byte arrays.
That's what we're probably never going to get, because it would require all of the browser vendors (Apple, Google, Microsoft, and Mozilla) to standardize on a new thing, in the hopes that it was fast enough to be worth their trouble.
Today, they don't even want to discuss it; they think their time is better spent making existing web apps faster than making a new thing from scratch that ought to be faster, if only a multi-year (decade??) effort comes to fruition.
CaptainFever 13 hours ago [-]
I'm worried that wide use of WASM is going to reduce the amount of abilities extensions have. Currently a lot of websites are basically source-available by default due to JS.
Fluorescence 11 hours ago [-]
With minimisers and obfuscators I don't see wasm adding to the problem.
I felt something was really lost once css classes became randomised garbage on major sites. I used to be able to fix/tune a website layout to my needs but now it's pretty much a one-time effort before the ids all change.
hinkley 7 hours ago [-]
I’ve been trying to fix UI bugs in Grafana and “randomized garbage” is real. Is that a general React thing or just something the crazy people do? Jesus fucking Christ.
Fluorescence 6 hours ago [-]
I assume it was first as anti-scraping / anti-adblock measures but then frameworks with styled components spread it even further.
Remember when the trend was "semantic class names" and folk would bikeshed the most meaningful easy to understand naming schemes?
How we have fallen.
IshKebab 12 hours ago [-]
> Currently a lot of websites are basically source-available by default due to JS.
By default maybe, but JS obfuscators exist so not really. Many websites have totally incomprehensible JS even without obfuscators due to extensive use of bundlers and compile-to-JS frameworks.
I expect if WASM gets really popular for the frontend we'll start seeing better tooling - decompilers etc.
theSherwood 14 hours ago [-]
I want DOM access from WASM, but I don't want WASM to have to rely on UTF-16 to do it (DOMString is a 16-bit encoding). We already have the js-string-builtins proposal, which ties WASM a little closer to 16-bit string encodings, and I'd rather not see any more moves in that direction. So I'd prefer to see an additional DOM interface of DOMString8 (an 8-bit encoding) before providing WASM access to DOM APIs. But I suspect interest in that development is low.
flohofwoe 14 hours ago [-]
Tbh I would be surprised if converting between UTF-8 and JS strings is the performance bottleneck when calling into JS code snippets which manipulate the DOM.
In any case, I would probably define a system which doesn't simply map the DOM API (objects and properties) into a granular set of functions on the WASM side (e.g. granular setters and getters for each DOM object property).
Instead I'd move one level up and build a UI framework where the DOM is abstracted away (quite similar to all those JS frameworks), and where most of the actual DOM work happens in sufficiently "juicy" JS functions (e.g. not just one line of code to set a property).
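For instance, a sufficiently "juicy" helper might look like this (names invented; the point is one boundary crossing per unit of UI work rather than per property):

```js
// One wasm -> JS call that performs a whole chunk of DOM work at once.
function appendCard(parentId, title, body) {
  const card = document.createElement("div");
  card.className = "card";
  const heading = document.createElement("h3");
  heading.textContent = title;
  const text = document.createElement("p");
  text.textContent = body;
  card.append(heading, text);
  document.getElementById(parentId).append(card);
}
```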
hardwaresofton 13 hours ago [-]
Disclaimer: I work on Jco, one of the user-facing Bytecode Alliance WASM JS ecosystem projects
Just a note, but there is burgeoning support for this in "modern" WebAssembly:
Support is far from perfect, but we're moving towards a much more extensible and generic way to support interacting with the DOM from WebAssembly -- and we're doing it via the Component Model and WebAssembly Interface Types (WIT) (the "modern" in "modern" WebAssembly).
What's stopping us the most from being very effective in browsers is the still-experimental browser shim for components in Jco specifically. This honestly shouldn't be blocking us at this point but... It's just that no one has gotten around to improving and refactoring the bindings.
That said, the support for DOM stuff is ready now (you could use those WIT interfaces and build DOM manipulating programs in Rust or TinyGo or C/C++, for example).
P.S. If you're confused about what a "component" is or what "modern" WebAssembly means, start here:
I have used Jco quite a bit (and contributed a few times) to build out simple utilities binding Rust code to JS [1][2]. I think it is great and the Component Model is the most exciting step towards real useful polyglot libraries I have seen in years. I wish it were better publicized, but I understand keeping things lowkey until it is more fleshed out (the async and stream support coming in Preview 3 are the real missing pieces for my usecases).
> (the async and stream support coming in Preview 3 are the real missing pieces for my usecases).
Currently this is a huge focus of most of the people working on stuff, and Jco is one of the implementations that needs to be done before P3 can ship, so we're hard at work on it.
> exciting step towards real useful polyglot libraries I have seen in years
I certainly agree (I'm biased) -- I think it's going to be a kind of tech that is new for a little bit and then absolutely everywhere and mostly boring. I think the docker arc is almost guaranteed to happen again, essentially.
The architectural underpinnings, implementation, and possibilities unlocked by this wave of Wasm is amazing -- truly awesome stuff many years in the making thanks to many dedicated contributors.
dfabulich 3 hours ago [-]
I think it's a huge exaggeration to say "the support for DOM stuff is ready now."
When writing most non-web software, you can usually write it quickly in a high-level language (with a rich standard library and garbage collection), but you can get better performance (with more developer effort) by writing your code in a lower-level language like C or Rust.
What developers are looking for is a way to take UI-focused DOM-heavy web apps, RIIR, and get a performance improvement in browsers.
That is not ready now. It's not even close. It might literally never happen.
What is ready now is a demo project where you can write WASM code against a DOM-like API running in Node.js.
What you have is an interesting demo, but that's not what we mean when we ask when WASM will get "DOM support."
hardwaresofton 3 hours ago [-]
> What developers are looking for is a way to take UI-focused DOM-heavy web apps, RIIR, and get a performance improvement in browsers.
Could you expand a bit on what you expect would make this possible? What would you list as the most important blockers right now stopping people from getting there, in your mind?
dfabulich 2 hours ago [-]
Honestly, I can't believe you're asking me this! You're so far in the weeds of the WASM component model that you have no idea what the problem is. It's really silly to ask what the "most important blockers" are, as if it's a list of bugs that can be fixed.
But, sure, in good faith, here's the problem.
Today, if you take a UI-focused DOM-heavy web app that makes lots and lots of DOM API calls (i.e. most JS web apps ever written) and try to rewrite it in Rust, you'll have to cross the boundary between JS and WASM over and over again, every time you use a DOM API. Every time you add/remove/update an element, or its styles, or handle a click event, you'll cross the boundary.
The boundary is slow because every time you touch a JS string (all CSS styles are strings!), you'll have to serialize it (in JS) into a byte array and send it into WASM land, do your WASM work, transfer back a byte array, and deserialize it into a JS string. (In the bad old days you had to copy the byte array in and out to do any I/O, but at least we have reference types now.)
And it's not just strings. All JS objects that you need to do actual work with have to be serialized/deserialized in this way, because WASM only knows about bytes, arrays, and opaque pointers. DOM elements/attributes, DOM style properties, DOM events (click events, keyboard events, etc.), all of them get slow when you transfer them in and out of WASM land, even with reference types, because of serialization/deserialization.
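Concretely, the cost shows up in mundane handlers like this sketch, where `wasmComputeClass` stands in for any glue-generated wrapper around a wasm export that takes and returns a string, and `button` is an assumed element reference:

```js
// Every click crosses the boundary twice: the class name is serialized
// into wasm, and the result is deserialized back into a JS string
// before the DOM will accept it.
button.addEventListener("click", (e) => {
  const next = wasmComputeClass(e.target.className); // JS string -> bytes -> wasm
  e.target.className = next;                         // wasm -> bytes -> JS string
});
```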
WASM interface types will make it easier to call JS from WASM, but as long as you're still calling JS in the end, rewriting in Rust will never make a DOM-heavy web app faster than writing it in JS.
That's why this sucks! Rewriting a Node.js app in Rust (or Go or Zig, etc.) normally yields huge performance gains (at huge developer effort), but rewriting a JS DOM-heavy web app in Rust just slaps Rust on top of JS; it usually makes it slower.
The only fix, as Daniel's article explains, would be to standardize a low-level DOM API, one that didn't assume that you can use JS strings, objects+properties, exceptions, promises, etc. This would be an unimaginably large standardization project.
You couldn't use WebIDL at all; you'd need to start by defining a new "low-level WebIDL." Then, you'd start standardizing the entire DOM API, all over again (or at least the most important parts) in low-level WebIDL, and then browser vendors could start implementing the low-level DOM API.
Then WASM could start calling that API directly. And maybe then you could rewrite web apps in Rust and have them get faster.
Until then, WASM is only faster for CPU-intensive tasks with I/O at the beginning/end, and otherwise it's only good for legacy code, where you don't have time to make it faster by rewriting it in JS.
(It should sound insane to anyone that taking C++ and rewriting it in JS would make it faster, but that's how it is on the web, because of this WASM boundary issue.)
So, what's the most important blocker? (gesture toward the universe) All of it??
valorzard 6 hours ago [-]
You should write an article about this stuff and post it here since this is the first time I’m hearing about all of this
MisterTea 10 hours ago [-]
I am confused by this. If WASM is a VM, then why would it understand the DOM? To me it's akin to asking "When will Arm get DOM support?" Seems like the answer is "When someone writes the code that runs on WASM that interacts with the DOM." Am I missing something? (Not a web dev.)
jffry 10 hours ago [-]
The WASM VM doesn't have any (direct) access to the DOM, so there's no code you can write in it that would affect the DOM.
There's a way to make JS functions callable by WASM, and that's how people build a bridge from WASM to the DOM, but it involves extra overhead versus some theoretical direct access.
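The bridge is just JS functions handed to the module as imports; a minimal sketch (the `setTitle` import and its pointer/length convention are invented for illustration, as is `wasmBytes`):

```js
// Give wasm exactly one DOM capability: setting the document title.
let instance; // assigned after instantiation
const decoder = new TextDecoder();
const imports = {
  dom: {
    setTitle: (ptr, len) => {
      const bytes = new Uint8Array(instance.exports.memory.buffer, ptr, len);
      document.title = decoder.decode(bytes);
    },
  },
};
WebAssembly.instantiate(wasmBytes, imports)
  .then((result) => { instance = result.instance; });
```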
jkcxn 6 hours ago [-]
That's like saying WASM doesn't have a direct way to allocate memory or print to the console. Of course it doesn't, it doesn't have access to anything, that's the whole point.
xg15 6 hours ago [-]
I think a good analogy would be between user-space and kernel-space code.
It's just weird that by this logic, JavaScript - the more high-level, less typesafe and less performant language - would be the kernel, while performance-optimized WASM code would be the userspace program.
MisterTea 9 hours ago [-]
Thanks for the clarification. So if I understand correctly: when using WASM you interface with web things through JS, forcing the user to always need JS in the stack when, e.g., they may want to just use Rust or Go. My first thought would be modules that act as a syscall-like interface to a DOM "device" exposed by the VM.
lucideer 13 hours ago [-]
I don't think I want WebAssembly to have DOM support.
Would it be nice? Yes. But.
Every added feature is a trade-off between need -vs- outlay, overhead, complexity & other drawbacks. In order to justify the latter things, that "need" must be significant enough. I'd like to have DOM, but I don't feel the need is significant.
Some thoughts on use-cases:
1. "Inactive" or "in-instance" DOM APIs for string parsing, document creation, in-memory node manipulation, serialisation: this is all possible today in WASM with libraries. Having it native might be cool but it's not going to be a significantly different experience. The benefits are marginal here.
2. "Live / active" or "in-main-thread" direct access APIs to manipulate rendered web documents from a WASM instance - this is where the implementation details get extremely complex & the security surface area starts to really widen. While the use-cases here might be a bit more magical than in (1), the trade-offs are much more severe. Even outside of security, the prospect of WASM code "accidently" triggering paints, or slow / blocking main thread code hooked on DOMMutation events is a potential nightmare. Trade-offs definitely not worth it here.
Besides, if you really want to achieve (2), writing an abstraction to link main-thread DOM APIs to WASM postMessage calls isn't a big lift & serves every reasonable use-case I can think of.
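Something like this (a sketch of that abstraction; the message shape, element IDs, and worker filename are made up):

  // Main thread: the WASM instance lives in a worker, and only small DOM
  // commands cross over via postMessage.
  type DomCommand = { op: "setText"; id: string; text: string };
  const worker = new Worker("wasm-worker.js");
  worker.onmessage = (e: MessageEvent<DomCommand>) => {
    const el = document.getElementById(e.data.id);
    if (el && e.data.op === "setText") el.textContent = e.data.text;
  };
  // Inside wasm-worker.js, the glue imported by the WASM module would call:
  //   postMessage({ op: "setText", id: "status", text: "done" });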
gchamonlive 12 hours ago [-]
I have zero experience with anything WASM, just regular old DOM with TypeScript, but I wonder if this is the kind of problem that could be addressed the same way that Phoenix LiveView addresses frontend updates: by message passing only the diff changes and delegating the DOM manipulation to what works, effectively modelling the WASM runtime as an actor.
Muromec 4 hours ago [-]
It's one of the ways one can write the glue between WASM and JS. The actual problem is, like with everything in JavaScript, there is no standard glue, no ABI, no expectation of what the memory representation of a list or even a string looks like.
You can even compile Elixir into wasm-fx and run the actor model; it's super fun and mad, but what you can't do is avoid dealing with the technicalities.
So either you buy into one of the frameworks that are (not) built on top of WASM and lock into their paradigm, or roll your own, because WASM proper doesn't even have any abstractions above numbers on a stack.
pton_xd 4 hours ago [-]
Why does WASM need to manipulate the DOM when JS already excels at that? Interfacing with JS was never really an issue; yes you do have to design reasonable module boundaries and understand how data is going to be shared. That just leads to simpler / stronger program design.
If you're writing a DOM UI heavy app, use JavaScript. Many WASM apps, like games, have no interest in the DOM. It's just more spec bloat.
cookiengineer 5 hours ago [-]
Actually my journey was quite similar. I started to build a bindings and web components framework in Pure Go so that I can build user interfaces with webview/webview.
My apps just go:embed all their assets and spawn a local webview as their UI, which is quite nice because client and server use the same schemas and same validations for e.g. web forms and the fetch/REST APIs.
Server-side-rendered components are implemented using a web components graph whose components can be String()ified into HTML.
It's a bit experimental though, and the API in the components graph might change in the future.
Yeah, the concurrency article is quite excellent as well. I definitely recommend it!
I also don't think it has been posted here, so feel free to do so.
jokoon 4 hours ago [-]
Blame browsers.
Things like Qt and browsers became popular because people realized they could short-circuit OS vendors asking developers to be loyal to them. The glue won.
But Qt, browsers, and JS are just hotfixes; they're not sound technologies, they're just glue.
Havoc 13 hours ago [-]
> Wasm includes various JavaScript APIs that allow compiler-generated glue code
One of the reasons I’m interested in wasm is to get away from the haphazardly evolved JS ecosystem…
nikeee 13 hours ago [-]
I wouldn't be so sure we'd want to exchange that for an ecosystem that consists of many different languages that each have their own ecosystem, which may not be compatible with WASM.
Imagine needing nuget, maven and cargo -- all in their specific version -- to build a project.
Muromec 3 hours ago [-]
You get more of that with wasm, actually. You get the Rust zoo, the Elixir zoo, Gleam, three toolchains for C, or what not. And for each of them you need a bunch of finicky autogenerated JavaScript glue, because those people can't agree on what a string looks like in memory.
edg5000 14 hours ago [-]
Has anybody written a nice DOM wrapper for C++/Rust? So you can do everything from the comfort of the C++/Rust application? The API should match the JavaScript APIs as much as possible.
the_duke 14 hours ago [-]
Sure, in Rust the web-sys [1] crate provides auto-generated bindings for pretty much all the browser APIs.
Has been used by most of the Rust web frontend frameworks for years.
It all has to go through JS shims though, limiting the performance potential.
I am adding a WASI runtime (0.1 and 0.2) to https://exaequos.com, an OS running in the web browser. What would be a good DOM API?
jagged-chisel 13 hours ago [-]
Interop with JS APIs is a requirement. Therefore, glue code is a requirement. If we’re not going to get direct access to the DOM, can we at least get the ability to just list the JS functions that our wasm will call? Of course the list will be bundled with (compiled into) the wasm blob, and whether it’s literally a text list or something like a registration call naming those JS functions, I am agnostic about. Everyone having to write all their own glue[*] is just nuts at this point.
[*]Yeah, the toolchains help solve this a bit, but it still makes me ship JS and wasm side-by-side.
xscott 13 hours ago [-]
> If we’re not going to get direct access to the DOM, can we at least get the ability to just list the JS functions that our wasm will call?
You mean like a list of JS functions that are imported into the Wasm binary? This has been there since day one:
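For example, you can enumerate a module's declared imports without even instantiating it (a small sketch; the filename is made up, the API is the standard one):

  // Every JS function a module calls must be declared in its import section,
  // and the standard API lets you list them:
  const mod = await WebAssembly.compileStreaming(fetch("app.wasm"));
  for (const imp of WebAssembly.Module.imports(mod)) {
    console.log(imp.module, imp.name, imp.kind); // e.g. "env" "set_title" "function"
  }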
> Everyone having to write all their own glue[*] is just nuts at this point.
Did you mean for the specific programming language you use? If so, then that seems like a problem for the language implementor, not a problem with Wasm. Rust has wasm-bindgen, Emscripten has their thing, and so on.
amelius 10 hours ago [-]
At this point, why not implement DOM inside WebAssembly? The end of DOM compatibility issues ...
Surac 13 hours ago [-]
I miss the times when computers were used to solve problems. Nowadays people MAKE their problems themselves by layering thousands of APIs over each other and calling back and forth millions of times. No wonder computers are not getting faster.
criticalfault 14 hours ago [-]
I've been reading about this for a while. Will it, won't it.
Does anybody know why it is such a big problem to add DOM access to WASM?
In the worst case, we should have a second option to JS (which is not TypeScript - TypeScript is just lipstick on a pig). If WASM is not it, why not something different? Having Dart in would be great.
flohofwoe 14 hours ago [-]
> Does anybody know why is it such a big problem to add dom access to wasm?
Well, the article does a pretty good job of answering this specific question ;)
wongarsu 13 hours ago [-]
Does it?
It gives good reasons why we can't have specific parts. Having the JavaScript Standard library in WebAssembly would be hard (was anyone actually asking for that?), and some of the modern APIs using promises or iterators wouldn't have a clear mapping. Also not everything could be zero-copy for every language
But the article doesn't do a very good job explaining why we can't have some dom access, at least for the 90% of DOM APIs not using JavaScript-specific features.
Most of the argument boils down to "you shouldn't want direct DOM access, because doing that would be work for these people, and we can instead have those other people do lots of work making the JavaScript bridge less painful. And anyways it's not clear if having these people make proper APIs would actually result in faster code than having those people do a separate sophisticated toolchain for each language"
It reads very much like a people and resource allocation problem rather than a technical challenge
flohofwoe 12 hours ago [-]
> at least for the 90% of DOM APIs not using JavaScript-specific features
The DOM is a Javascript API, so it uses 100% Javascript-specific features (every DOM manipulation requires accessing JS objects and their properties and lots of those properties and function args are Javascript strings) - none of those map trivially to WASM concepts.
It's a bit like asking why x86 assembly code doesn't allow "C++ stdlib access" - the question doesn't even make much sense ;)
wongarsu 12 hours ago [-]
I wouldn't call having objects and properties Javascript-specific. The article details how they were initially written with support by both Java and JavaScript in mind. Now WASM isn't object oriented like Java or JavaScript, but the concept of objects maps very cleanly to the concept of structs, member functions to simple functions that get the struct as first parameter (optionally with an indirection through a virtual function table) and properties to getter/setter functions. I suppose you would have to specify a struct memory layout for the purpose of a WASM DOM API, which may or may not be zero-copy for any given language.
Or is there something in the browser architecture that requires them to be JavaScript objects with the memory layout of the JavaScript engine, rather than just conceptually being objects?
flohofwoe 10 hours ago [-]
Sure an automatically generated JS shim which marshalls between a web API and the WASM module interface is possible, but that's still not direct DOM access, it still goes through a JS shim. The detail that this JS shim was automatically generated from an IDL instead of manually written is really not that important when the question is "does WASM allow direct DOM access", the simple answer is "no".
The underlying problem is that you need to translate between a Javascript-idiomatic API and whatever is an idiomatic API in the language you compile into WASM, and idiomatic C, C++, Rust, ... APIs all look very different so there isn't a single answer. It's not WASM being relevant here, but the high level source language that's compiled into WASM, and how well the DOM JS API (and the Javascript concept it is built on) map to APIs for various 'authoring languages'.
The whole problem is really quite similar to connecting two higher-level languages (e.g. Rust and C++) through a C API, since that is the common interface both Rust and C++ can talk to. That doesn't mean you can simply send a Rust String to the C++ side and expect to automatically get a std::string - this sort of magic needs to be explicitly implemented in a shim layer sitting between the two worlds. And to stay with the string example: there is no such thing as a WASM string; how string data is represented on the WASM heap depends on the source language that's compiled to WASM.
wongarsu 10 hours ago [-]
If you push the shim to JavaScript then it isn't direct DOM access. But you could push the shim into the C++ browser code, providing one DOM API to JavaScript and slightly smaller shimmed version to WASM. Then you cut out the JavaScript middle-man and can call it "direct DOM access".
Of course you couldn't provide idiomatic versions for every language, but the JS shims also can't really do that. Providing something close to idiomatic C would be a huge step up, the language libraries can then either offer a C-like API or choose to build new abstractions on top of it
flohofwoe 10 hours ago [-]
> But you could push the shim into the C++ browser code
That's easier said than done because of details like the fact that you can't build a JS string on the C++ side; the translation from string data on the WASM heap into a JS object needs to happen on the JS side.
But this is how all the Emscripten web API shims work, and they do this quite efficiently, and some of those shims also split their work between the JS and C/C++ side.
So to the programmer it does look like with Emscripten there's direct access to the (for instance) WebGL or WebGPU APIs, but in reality there's still quite a bit of JS code involved for each call (which isn't a problem really as long as no expensive marshalling needs to happen in such a call, since the call overhead from WASM into JS alone is really minimal).
Fluorescence 10 hours ago [-]
IANABE (not a browser engineer)
On the one hand, JS DOM objects are IDL-generated wrappers for C++. In theory we can generate more WASM-friendly wrappers.
On the other, the C++ code implementing the API will be tightly coupled to the entire JS type system and runtime. Not just the concept of an object, but every single design decision, from primitives to generators to dynamic types to prototypal inheritance to error handling...
Also, I believe the C++ DOM implementation itself is pretty tightly integrated with JavaScript and its memory management, e.g. nodes have references into the managed heap to use JS objects directly, like EventListeners and JS functions.
Creating a new non-JS DOM API doesn't sound intractable to me... but browsers annihilate my assumptions so it's probably millions of hours of effort and close to a rewrite...
criticalfault 9 hours ago [-]
But why would it need to interact with JS or have the same API?
Maybe I misunderstood, but isn't DOM access in essence the ability to change the HTML tree? Since this is WASM, why would it need to reimplement the JS API and need type mappings? Couldn't it be something different?
Fluorescence 8 hours ago [-]
(still not a browser engineer - others will know better).
It doesn't need to be the same API... but implementing a new DOM API that doesn't meet the W3C standard is a bit on the nose. It's meant to be language independent, hence the IDL.
Looking into it, the IDL might insulate the existing API implementation from JS to a greater degree than I assumed above. Apparently there might be horrors lurking in the binding generation code, though. You can poke around in Blink.
> isn't Dom access in essence an ability to change html tree
It might be more accurate to look at it the other way: our current DOM implementations were created to implement the "DOM API Standard". The standard dictated the types and how reading/mutation works.
> need type mappings
I can't imagine how it can avoid type mappings for things like e.g. creating unattached bits of DOM or binding callbacks that receive bits of DOM.
Personally I might be happy for a tiny WASM api... but then foresee 10 years of maddening omissions, security bugs and endless moaning because they didn't just implement the standard everyone already knew.
wongarsu 10 hours ago [-]
Those are great points. That's the angle I was missing from the article
dfabulich 5 hours ago [-]
I think this article is written in a very confusing way. I think I would have written it this way.
When writing most non-web software, you can usually write it easily in a high-level language (with a rich standard library and garbage collection), but you can get better performance (with more effort) by writing your code in a lower-level language.
WASM seems like an opportunity to get better performance by rewriting JavaScript web apps in lower-level languages like C or Rust, but it doesn't work that way, because of standardization.
When defining standardized APIs in WebIDL, WebIDL assumes that you can use JavaScript strings, JavaScript objects + properties, JavaScript Exceptions, JavaScript Promises, JavaScript garbage collection, and on and on and on. Almost all of the WebIDL specification is about the dozens of types that it assumes the platform already provides. https://webidl.spec.whatwg.org/
WASM doesn't have any of those things.
No one has ever standardized a DOM API for low-level languages. You'd need to start by standardizing a new "low-level API" for DOM access, and presumably a new low-level WebIDL to define those standards.
Designing the web by committee makes it hard to add/change stuff in browsers. You have to get Apple, Google, Microsoft, and Mozilla to agree on literally everything. (Defining WebIDL itself has taken decades!)
It can be hard to even get agreement from browser vendors to discuss the same topic, to even just get them to read your proposed standards document and to say "no, we won't implement it like this, because..." You have to convince them that the standard you're proposing is one of their top priorities. (And before you can do that, you have to convince them to pay attention to you at all.)
So, someone would have to persuade all of the browser vendors that one of their top priorities should be to invent a new way to standardize DOM APIs and begin the process of standardizing DOM access on top of a lower-level IDL.
Today, the browser vendors aren't convinced that this is worth their time. As the article says:
> For now, web folks don't seem to be sold on the urgency of this very large project. There is no active work by browser vendors in this direction.
And that's why you can't get top-notch performance by rewriting your web app in Rust. You can rewrite your web app in Rust, and it can access JS APIs, but when touching the DOM APIs, Rust has to interop with JS. Rust's interop with JS is no faster than JS itself (and it's often slower, because it requires added glue code, translating between JS and WASM).
As a result, if you're writing a web app, you mostly have to do it in JS. If you have some very CPU-intensive code, you can write that in WASM and slowly copy the result of your computation to JS, as long as you don't cross the boundary between WASM and JS too often.
Alternatively, if you have existing code in non-JS languages, you can port it to the web via WASM, but it'll probably run slower that way; the best performance improvement you can make is to rewrite it in JS!
naikrovek 6 hours ago [-]
The fact that this needs to be explained with mentions of pointers and data alignment and garbage collection tells me that the decisions that the web standards committees make continue to be just completely disconnected from anything sane.
Maybe my read is wrong, but everything I look at today just screams to me that the web is extremely poorly designed; everything about it is simply wrong.
3cats-in-a-coat 12 hours ago [-]
DOM is so dynamic, any integration with DOM will look no different than the way we do it now through JS. JS is built to work naturally with DOM.
Maybe we should stop overdesigning things and keep it simple. WASM needs more tooling around primitive types, threading, and possibly a more flexible memory layout than what we have now.
DonHopkins 7 hours ago [-]
Emscripten has a handy tool called "Embind" for binding JavaScript/TypeScript and C/C++/whatever code. It's underappreciated and not well documented all in one place, but here is a soup-to-nuts summary.
Emscripten + Embind allow you to subclass and implement C++ interfaces in TypeScript, and easily call back and forth, even pass typed function pointers back and forth, using them to call C++ from TypeScript and TypeScript from C++!
I'm using it for the WASM version of Micropolis (open source SimCity). The idea is to be able to cleanly separate the C++ simulator from the JS/HTML/WebGL user interface, and also make plugin zones and robots (like the monster or tornado or train) by subclassing C++ interface and classes in type safe TypeScript!
emscripten.cpp binds the C++ classes and interfaces and structs to JavaScript using the magic plumbing in "#include <emscripten/bind.h>".
There is an art to coming up with an elegant interface at the right level of granularity that passes parameters efficiently (using zero-copy shared memory when possible, i.e. C++ SimCity Tiles <=> JS WebGL Buffers for the shader that draws the tiles) -- see the comments in the file about that:
/**
* @file emscripten.cpp
* @brief Emscripten bindings for Micropolis game engine.
*
* This file contains Emscripten bindings that allow the Micropolis
* (open-source version of SimCity) game engine to be used in a web
* environment. It utilizes Emscripten's Embind feature to expose C++
* classes, functions, enums, and data structures to JavaScript,
* enabling the Micropolis game engine to be controlled and interacted
* with through a web interface. This includes key functionalities
* such as simulation control, game state management, map
* manipulation, and event handling. The binding includes only
* essential elements for gameplay, omitting low-level rendering and
* platform-specific code.
*/
[...]
////////////////////////////////////////////////////////////////////////
// This file uses emscripten's embind to bind C++ classes,
// C structures, functions, enums, and contents into JavaScript,
// so you can even subclass C++ classes in JavaScript,
// for implementing plugins and user interfaces.
//
// Wrapping the entire Micropolis class from the Micropolis (open-source
// version of SimCity) code into Emscripten for JavaScript access is a
// large and complex task, mainly due to the size and complexity of the
// class. The class encompasses almost every aspect of the simulation,
// including map generation, simulation logic, user interface
// interactions, and more.
[...]
class_<Callback>("Callback")
.function("autoGoto", &Callback::autoGoto, allow_raw_pointers())
[...]
Here's the WebGL tile renderer that draws the tiles directly out of a Uint16Array pointing into WASM memory:
/**
* @file callback.h
* @brief Interface for callbacks in the Micropolis game engine.
*
* This file defines the Callback class, which serves as an interface
* for various callbacks used in the Micropolis game engine. These
* callbacks cover a wide range of functionalities including UI
* updates, game state changes, sound effects, simulation events, and
* more. The methods in this class are virtual and intended to be
* implemented by the game's frontend to interact with the user
* interface and handle game events.
*/
class Callback {
public:
    virtual ~Callback() {}
    virtual void autoGoto(Micropolis *micropolis, emscripten::val callbackVal, int x, int y, std::string message) = 0;
[...]
callback.cpp implements just the concrete ConsoleCallback interface in C++ with "EM_ASM_" glue to call out to JavaScript to simply log the parameters of each call:
/**
* @file callback.cpp
* @brief Implementation of the Callback interface for Micropolis game
* engine.
*
* This file provides the implementation of the Callback class defined
* in callback.h. It includes a series of methods that are called by
* the Micropolis game engine to interact with the user interface.
* These methods include functionalities like logging actions,
* updating game states, and responding to user actions. The use of
* EM_ASM macros indicates direct interaction with JavaScript, typical
* in a web environment using Emscripten.
*/
js_callback.h contains an implementation of the Callback interface that caches a "emscripten::val jsCallback" (an enscripten value reference to a JS object that implements the interface), and uses jsCallback.call to make calls to JavaScript:
Then you can import that TypeScript interface (using a weird "<reference path=" thing I don't quite understand but is necessary), and implement it in nice clean safe TypeScript:
This subject is interesting because your typical college-educated developer HATES the DOM with extreme passion, because it's entirely outside their area of comfort. The typical college-educated developer is educated to program in something like Java, C#, or C++, and that is how the world is supposed to work. The DOM doesn't work like that. It's a graph of nodes in the form of a tree model, and many developers find that to be scary shit. That's why we have things like jQuery, Angular, and React.
These college-educated developers also hate JavaScript for the same reasons. It doesn't behave like Java. So for many developers the only value of WASM is as a JavaScript replacement. WASM was never intended or positioned to be a JavaScript replacement, so it doesn't get used very often.
Think about how bloated and slow the web could become if WASM were a JavaScript replacement. Users would have to wait on the entire runtime and its dependencies to download into the WASM sandbox and then open like a desktop application, and then all that would get wrapped in something like Angular or React, because the DOM is still scary.
What are you talking about? Alpine container image is <5MB. Debian container image (if you really need glibc) is 30MB. wasmtime is 50MB.
If a service has a multi-gig container, that is for other stuff than the Docker overhead itself, so would also be a multi-gig app for WASM too.
Also, Docker images get overlaid. So if I have many Go or Rust apps running on Alpine or Debian as simple static binaries, the 5MB/30MB base system only exists once. (Same as a wasmtime binary running multiple programs.)
That's not the deployment model of wasm. You don't ship the runtime and the code in a container.
If you look at crun, it can detect if your container is wasm and run it automatically, without your container bundling the runtime. I don't know exactly what crun does internally, but in wasmCloud, for example, you're running multiple different wasm applications atop the same wasm runtime. https://github.com/containers/crun/blob/main/docs/wasm-wasi-...
If I have 20 apps that depend on a 300MB bundle of C++ libraries + ~10MB for each app, as long as the versions are the same, and I am halfway competent at writing containers, the storage usage won't be 20 * 310MB, but 300MB + 20 * 10MB.
Of course in practice each of the 20 different C++ apps will depend on a lot of random mutually exclusive stuff leading to huge sizes. But there's rarely any reason for 20 Go (or Rust) apps to base their containers on anything other than lean Alpine or Debian containers.
Even for deploying wasm containers. Maybe there are certain technical reasons why they needed an alternate "container" runtime (wasi) to run wasm workloads with CRI orchestration, but size is not a legitimate reason. If you made a standard container image with the wasm runtime, and all wasm applications simply based off that image and added their code, the wasm runtime would be shared between them, and only the code would be unique.
"Ah, but each container will run it's own separate runtime process." Sure, but the most valuable resource that probably wastes is a PID (and however many TIDs). Processes exec'ing the same program will share a .text and .rodata sections and the .data and .bss segments are COW'ed.
Assuming the memory usage of the wasm runtime (.data and .bss modifications + stack and heap usage) is vaguely k + sum(p_i) where p_i is some value associated with process i, then running a single runtime instead of running n runtimes saves (n - 1) * k memory. The question then becomes how much is k. If k is small (a couple megs), then there really isn't any significant advantage to it, unless you're running an order of magnitude more wasm processes than you would traditional containers. Or, in other words if p_i is typically small. Or, in other other words, if p_i/k is small.
If p_i/k is large (if your programs have a significant size), wasi provides no significant size advantage, on disk or in memory, over just running the wasm runtime in a traditional container. Maybe there are other advantages, but size isn't one of them.
Obviously an unoptimized C++/Python stack that depends on a billion .so's (specific versions only) and pip packages is going to waste space. The advantage of containers for these apps is that it can "contain" the problem, without having to rewrite them.
The "modern" languages: Go and Rust produce apps that depend either only on glibc (Rust) or on nothing at all (Rust w/ musl and Go). You can plop these binaries on any Linux system and they will "just work" (provided the kernel isn't ancient). Sure, the binaries can be fat, but it's a few dozen megabytes at the worst. This is not an issue as long as you architect around it (prefer busybox-style everything-in-a-binary to coreutils-style many-binaries).
Moreover, a VM isn't much necessary, as these programming languages can be easily cross-compiled (especially Go, for which I have the most experience). Compared to C/C++ where cross-compiling is a massive pain which led to Java and it's VM dominating because it made cross-compilation unnecessary, I can run `GOOS=windows GOARCH=arm64 go build` and build a native windows arm64 binary from x86-64 Linux with nothing but the standard Go compiler.
The advantage of containers for Rust and Go lies in orchestration and separation of filesystem, user, ipc etc. namespaces. Especially orchestration in a distributed (cluster) environment. These containers need nothing more than the Alpine environment, configs, static data and the binary to run.
I fail to see what problem WASM is trying to solve in this space.
The problem with that idea is docker isn’t as secure as wasm is. That’s one big difference: wasm is designed for security in ways that docker is not.
The other big difference is that wasm is in-process, which theoretically should reduce the overhead of switching between multiple separately running programs.
https://en.wikipedia.org/wiki/List_of_Java_virtual_machines
And looking at other bytecode based systems, enough runtimes with multiple vendors.
https://en.wikipedia.org/wiki/Bytecode
I agree WASM has its drawbacks, but the execution model is mostly fine for these types of tasks, where you offload the task to a worker and are fine waiting a millisecond or two for the response.
The main benefit for complex tasks like the above is that when a product needs to support an isomorphic web and native experience based on complex computation you maintain (quite a few use cases actually, in CAD, graphics & GIS), the implementation and maintenance load drops by half. I.e. these _could_ be e.g. TypeScript, but then maintaining feature parity becomes _much_ more burdensome.
It's fine and fast enough as long as you don't need to pass complex data types back and forth. For instance WebGL and WebGPU WASM applications may call into JS thousands of times per frame. The actual WASM-to-JS call overhead itself is negligible (in any case, much less than the time spent inside the native WebGL or WebGPU implementation), but you really need to restrict yourself to directly passing integers and floats for 'high frequency calls'.
Those problems are quite similar to any FFI scenario though (e.g. calling from any high level language into restricted C APIs).
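For example, roughly what such glue looks like (a sketch in the spirit of Emscripten's WebGL bindings, with made-up names; real implementations differ):

  // JS keeps the real objects in a handle table; WASM refers to them by index,
  // so a high-frequency call passes only numbers across the boundary.
  const canvas = document.querySelector("canvas") as HTMLCanvasElement;
  const gl = canvas.getContext("webgl2")!;
  const locTable: WebGLUniformLocation[] = [];
  const imports = {
    gl: {
      uniform1f: (loc: number, x: number) => gl.uniform1f(locTable[loc], x),
    },
  };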
https://github.com/ealmloff/sledgehammer_bindgen
How would you make such a thing without limiting it in some such way?
> You need a lot of inherently slow and unsafe glue code to make anything work.
Idea being that with dom support you’d need less unsafe glue code.
Of course I was being glib but it is the point of TFA after all.
First off, remember that initially all we had was JS. Then asm.js was forced down Apple's throat by being "just" a JS-compatible performance hack (remember that Google had tried to introduce NaCl beforehand but it never got traction). You can still see the asm.js lineage in how Wasm branching opcodes work (you can always easily decompose them into while loops together with break and continue instructions).
The target market for NaCl, asm.js, and Wasm seems to have been focused on enabling ports of C/C++ games, even if other usages were always of interest, so while interop times can be painful it's usually not a major factor.
Secondly, as a compiler maker (and from looking at performance profiles), I usually place languages into 3 categories.
Category 1: Plain-memory accessors. Objects are usually a pointer number + offsets for members, with more or less manually managed memory. Cache friendliness is your own worry; CPU instructions are always simple.
C, C++, Rust, Zig, Wasm/asm.js, etc. go here.
Category 2: GC'd offset languages. While we still have pointers (now called references), they're usually restricted from being directly mutated, instead going through specialized access instructions. However, as with category 1, the actual value can often be accessed with pointer + offset, and object layouts are _fixed_, so less freedom vs JS but higher perf.
Also there can often be GC-specific instructions like read/write barriers associated with object accesses. Performance for actual instructions is still usually good, but GCs can affect access patterns to increase costs, and there is some GC collection unpredictability.
Java, C#, Lisps, high-perf functional languages, etc. usually belong here (with exceptions).
Category 3: GC'd free-prop languages. Objects are no longer of fixed size (you can add properties after creation). Runtimes like V8 try their best to optimize this away to approach Category 2 languages, but abuse things enough and you'll run off a performance cliff. Every runtime optimization requires _very careful_ design of fallbacks that can affect practically any other part of the runtime (these manifest as type-confusion vulnerabilities if you look at bug reports), as well as how native bindings are handled.
JS, Python, Lua, Ruby, etc. go here.
Naturally some languages/runtimes can straddle these lines (.NET/CIL has always been able to run C as well as later JS, Ruby and Python in addition to C# and today C# itself is gaining many category 1 features), I'm mostly putting the languages into the categories where the majority of user created code runs.
To get back to the "troubles" of Wasm<->JS: as you noticed, they are of category 1 and 3. Since Wasm is "wrapped" by JS, you can usually reach into Wasm memory from JS, because it's "just a buffer"; the end-user security implications are fairly low since JS has well-defined bounds checking (outside of performance costs).
The other direction is a pure clusterf from a compiler writer's point of view. Remember that most of those optimizations of Cat 3 languages have security implications? Allowing access would require every precondition check to be replicated on the Wasm side as well as in the main JS runtime (or you build a unified runtime, but optimization strategies are often different).
The new Wasm-GC (finally usable with Safari since late last year) allows GC'd Category 2 languages to be built directly to Wasm (and not ship their own GC via Cat 1 emulation like C#/Blazor does) instead of being compiled to JS. And even here they punted on any access to category 3 (JS) objects, basically marking them as opaque objects that can be referred to and passed back to JS (an improvement over previous Wasm, since there is no extra GC syncing as one GC handles it all, but still no direct access standardized, iirc).
So, security has so far taken a center stage over usability. They fix things as people complain but it's not a fast process.
Think of it as a backend and not as library and it clicks.
That describes much of modern computing.
Trying to shoehorn Rust as a web scripting language was your second mistake
Your first mistake was to mix Rust, TypeScript and JavaScript only just to add logic to your HTML buttons
I swear, things get worse every day on this planet
> It gets us closer to the web as the universal platform.
As a target
I don't want a pseudo 'universal' platform owned by Big Tech; or by governments as a substitute
Google/Chrome controlled platform, no thanks
https://i.imgur.com/WfXEKSf.jpeg
If the logic is merely about validation, then an IDL with codegen for TS and some backend language is probably better. There are also some more advanced languages targeting transpilation to both JS and a backend language such as Haxe, but they all have some trade-offs.
Dioxus is another: https://dioxuslabs.com/
C# with Avalonia for a different use case: https://avaloniaui.net/
Avalonia solitaire demo: https://solitaire.xaml.live/
Avalonia Visual Basic 6 clone: https://bandysc.github.io/AvaloniaVisualBasic6/
Blazor can run as WebAssembly on the client side if you choose that runtime mode: https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blaz...
Beyond the browser, Wasmer does WebAssembly on the serverside: https://wasmer.io/
Fermyon too: https://www.fermyon.com/
Extism is a framework for an application to support WebAssembly plugins: https://extism.org/
Dioxus : React :: Leptos : SolidJS
The key for me is that Leptos leans into a JSX-like templating syntax as opposed to Dioxus's H-like function calls. So, Leptos is a bit more readable in my opinion, but that probably stems from my web dev background.
The Dioxus README has a whole section comparing them -- https://github.com/DioxusLabs/dioxus#dioxus-vs-leptos
It's all about JavaScript being popular and being the standard language. JS is not a great language, but it's standard across every computer, which dwarfs anything else that can be said about it.
Adjusting browsers so they can use WASM was easy to do, but telling browser vendors to make the DOM work was obviously more difficult, because they might handle the DOM in various ways.
Not to mention js engines are very complicated.
I ended up having to rewrite the entire interfacing layer of my mobile application (which used to be WebAssembly running in WebKit/Safari on iOS) because I was getting horrible performance losses each time I crossed that barrier. For graphics applications where you have to allocate and pass buffers or in general piping commands, you take a horrible hit. Firefox and Chrome on Windows/macOS/Linux did quite well, but Safari...
Everything has to pass the JavaScript barrier before it hits the browser. It's so annoying!
Wasm is the perfect example of this - it has the potential to revolutionize web (and desktop GUI) development but it hasn't progressed beyond niche single threaded use cases in basically 10 years.
Generalized Assembly? GASM?
Java applets loading on a website started as a gray rectangle, which loaded very slowly, and sometimes failed to initialize with an "uninited" error. Whenever you opened a website with a java applet (like could happen with some math or physics related ones), you'd go "sigh" as your browser's UI thread itself halted for a while
Flash applets loading on a website started as a black rectangle, did not cause the UI thread to halt, loaded fast, and rarely gave an error
(the only reason I mention the gray vs black rectangle is because seeing a gray rectangle on a website made me go "sigh")
JavaScript was not yet optimized but the simple JS things that worked, did work without loading time.
Runescape (a 3d MMORPG from the early 2000s that still exists) used Java though and somehow they managed to use it properly since that one never failed to load and didn't halt the browser's UI either despite being way more complex than any math/physics Java applet demo. So if Java forced their applets to do whatever Runescape was doing so correctly, they'd not have had this perception issue...
Until it didn't.
https://en.wikipedia.org/wiki/NPAPI
Problems Found with the NetScape Plug-in API. By Don Hopkins, Kaleida Labs:
https://donhopkins.com/home/archive/netscape/Netscape-Plugin...
More about Netscape's fleeting obsession with Java and Javagator in the pre-LiveConnect/XPConnect/NPRuntime/ActiveX/DHTML/XPCOM/XUL days:
https://news.ycombinator.com/item?id=22708076
>I hope NetScape can come up with a plug-in interface that is good enough that they can implement their own navigator components with it (like the mail reader, outliner, progressive jpeg viewer, etc). The only way it's going to go anywhere is if they work closely with developers, and use the plug-in interface for non-trivial things themselves. Microsoft already has a VRML plug-in for their navigator, so presumably they have a plug-in interface, and from what I've seen on their web site, it may not be "good enough", but it's probably going to do a lot more that you can do with NetScape right now, since they're exposing a lot of their navigator's functionality through OLE. They seem to understand that there's a much bigger picture, and that the problems aren't trivial. Java isn't going to magically solve all those problems, folks.
Early Browser Extension Wars of 1996:
https://news.ycombinator.com/item?id=19837817
>Wow, a blast from the past! 1996, what a year that was.
>Sun was freaking out about Microsoft, and announced Java Beans as their vaporware "alternative" to ActiveX. JavaScript had just come onto the scene, then Netscape announced they were going to reimplement Navigator in Java, so they dove into the deep end and came up with IFC, which was designed by NeXTStep programmers. A bunch of the original Java team left Sun and formed Marimba, and developed the Castanet network push distribution system, and the Bongo user interface editor (like HyperCard for Java, calling the Java compiler incrementally to support dynamic script editing).
More about browser extension APIs:
https://news.ycombinator.com/item?id=27405137
>At the time that NSAPI came around, JavaScript wasn't really much of a thing, and DHTML didn't exist, so not many people would have seriously thought of actually writing practical browser extensions in it. JavaScript was first thought of more as a way to wire together plugins, not implement them. You were supposed to use Java for that. To that end, Netscape developed LiveConnect.
>Microsoft eventually came out with "ActiveX Behavior Components" aka "Dynamic HTML (DHTML) Behaviors" aka "HTML Components (HTCs)" that enabled you to implement ActiveX controls with COM interfaces in all their glory and splendor, entirely in Visual Basic Script, JavaScript, or any other language supporting the "IScriptingEngine" plug-in interface, plus some XML. So you could plug in any scripting language engine, then write plug-ins in that language! (Easier said than done, though: it involved tons of OLE/COM plumbing and dynamic data type wrangling. But there were scripting engines for many popular scripting languages, like Python.)
Javagator Down Not Out:
https://www.cnet.com/tech/tech-industry/javagator-down-not-o...
>Though Netscape has ceased development efforts on its Java-based browser, it may pass the baton to independent developers.
Shockwave (the Macromedia Director Player Library) came long before Flash, and it used NPAPI (and ActiveX on IE), but later on, Google developed another better plug-in interface called "Pepper" for Flash.
1995: Netscape releases NPAPI for Netscape Navigator 2.0, Macromedia releases Shockwave Player on NPAPI for playing Director files
1996: Microsoft releases ActiveX, FutureWave releases FutureSplash Animator and NPAPI player for FutureSplash files, Macromedia acquires FutureSplash Animator and renames it Flash 1.0
2009: Google releases PPAPI (Pepper Plugin API) as part of the Native Client project, suddenly Flash runs much more smoothly
Oh, and breaking changes between versions meant you needed multiple runtimes and still got weird issues in some cases.
Um, Java has dominated enterprise computing, where the money is, for 25+ years.
There's no money in browser runtimes. They're built mostly defensively, i.e., to permit ads or to prohibit access to the rest of the machine.
> why would wasm succeed when the underlying philosophy is the same?
wasm is expressly not a source language; people use C or Rust or Swift to write it. It's used when people want to move compute to the browser, to save server resources or to move code to data instead of data to the server. Thus far, it hasn't been used for much UI, i.e., to replace JavaScript.
Java/Oracle spent a lot of money to support other JVM languages, including Kotlin, Scala, Clojure - for similar reasons, but also without trying to replace Javascript, which is a loss leader for Google.
It's not just applets; we also had Flash, which was a huge success until it was suddenly killed.
As far as I can tell, the difference between java applets and Flash is that you, the user, have to install java onto your system to use applets, whereas to use Flash you have to install Flash into your browser. I guess that might explain why one became more popular than the other.
Crypto miners weren’t a thing for Java applets
Applets felt horrible. Maybe if they appeared today it would be different, but back then the machines were not powerful enough and the system not integrated enough to make it feel smooth.
WASM sandboxes the entire VM, a safer model. Java ran trusted and untrusted code in the same VM.
Flash, while using the whole-VM confinement model, simply had too many "boring" exploits, like buffer overflows and so on, and was too much of a risk to keep using. While technically nothing prevented Flash from being safe, it was Adobe's copyrighted code, and Adobe didn't make it safe, and no one else was allowed to.
Nothing "vague" or "somehow" about that.
Applets were insecure because A) they were based on the Netscape browser plugin API, which had a huge attack surface, and B) they ran in a normal JVM with a standard API that offers full system access, restricted by a complex sandbox mechanism which again had a huge attack surface.
This IS, in fact, not an issue for wasm, since A) as TFA describes it has by default no access at all to the JavaScript browser API and has to be granted that access explicitly for each function, and B) the JavaScript browser API has extremely restricted access to OS functionality to begin with. There simply is no API at all to access arbitrary files, for example.
It was eventually killed because Apple decided it won't support it on the iPhone.
Javascript from 20 years ago tends to run just fine in a contemporary browser.
https://news.ycombinator.com/item?id=15886728
I’ve personally felt like it has been progressing, but I’m hoping you can expand my understanding!
Before WASM, the options were:
- require everyone to install an app to see visualizations
- just show canned videos of visualizations
- write and maintain a parallel Javascript version
Demo at https://throbol.com/sheet/examples/humanoid_walking.tb
And Safari support is coming soon.
So not too long until it'll be usable.
Low memory usage and low CPU demand may not be a requirement for all websites because most are simple, but there are plenty of cases where JavaScript/TypeScript is objectively the wrong language to be using.
Banking apps, social network sites, chat apps, spreadsheets, word processors, image processors, jira, youtube, etc
Something as simple as multithreading is enough to take an experience from "treading water" to "runs flawlessly on an 8 year old mobile device". Accurate data types are also very valuable for finance applications.
Another use case is sharing types between the front and back end.
But CRUD developers don’t know/care about those, I guess.
This would sort of defeat the point that WASM is supposed to be for the "performance critical" parts of the application only. It doesn't seem very useful if your business logic runs fast, but requires so many switching steps that all performance benefits are undone again.
https://playgama.com/blog/general/boost-html5-game-performan...
It is still the same JIT calling itself; there is no reason it should be far slower than JS-to-JS.
https://www.youtube.com/watch?v=4KtotxNAwME
https://www.youtube.com/watch?v=V1cqQRmVAK0
Edit:
And if you don't want to do "WebAssembly" you can have it do it all server rendered, think of a SPA on steroids.
Separately, the conceptual mismatch when the JS side has to allocate/deallocate things on the WASM side is also tedious to deal with.
The article also discussed ref types, which do exist and do provide... something. Some ability to at least refer to host objects. It's not clear what that enables or what its limitations are.
Definitely some feeling of being rug-pulled in the shift here. It felt like there was a plan for good integration, but fast forward half a decade+ and there's been so, so much progress and integration, yet it's still so unclear how WebAssembly is going to alloy with the web; it seems like we have reams of generated glue code doing so much work to bridge systems.
Very happy that Dan at least checked in here, with a state-of-the-WASM-for-web-people type post. It's been years of waiting and wondering, and I've been keeping my own tabs somewhat through the twists and turns, but having some historical artifact, some point-in-time recap to go look at like this: it's really crucial for the health of a community to have some check-ins with the world, to let people know what to expect. Particularly for the web, WASM has really needed an updated State of the Web WebAssembly.
I wish I felt a little better though! Jco is amazing, but running a JS engine in WASM to be able to use WASM components is gnarly as hell. Maybe by 2030 WASM & WASM components will be doing well enough that browsers will finally rejoin the party & start implementing anew.
Definitely feeling rug-pulled.
What I think all the people that harp on the "Don't worry, going through JS is good enough for you" line are missing is the subtext of their message. They might objectively be right, but in the end what they are saying is that they are content with WASM being a second-class citizen in the web world.
This might be fine for everyone needing a quick and dirty solution now, but it is not the kind of narrative that draws in smart people to support an ecosystem in the long run. When you bet, you bet on the rider and not the domestique.
Tbh, most of the ideas so far to enable more direct access of Javascript APIs from WASM have a good chance of ruining WASM with pointless complexity.
Keeping those two worlds separate, but making sure that 'raw' calls between WASM and JS are as fast as they can be (which they are) is really the best longterm solution.
I think what people need to understand is that the idea of having 'pure' WASM browser applications which don't involve a single line of Javascript is a pipe dream. There will always be some sort of JS glue code, it might be generated and you don't need to directly deal with it, but it will still be there, and that's simply because web APIs are first and foremost designed for usage from Javascript.
Some web APIs have started to 'appease' WASM more by adding 'garbage-free' function overloads, which IMHO is a good thing because it may help to reduce overhead on the JS side, but this takes time and effort to be implemented in all browsers (and most importantly, a will by mostly "JS-centric" web people to add such helper functions which mostly only benefit WASM).
What would be nice is to use Wasm for component libraries instead, or for progressive enhancement (eg. add sophisticated autocomplete support to an input field).
I wish it was possible to disable WASM in browsers.
In Firefox at least: navigate to about:config and then `javascript.options.wasm => false` seems to do the job.
This causes any access to the global WebAssembly object to fail with `WebAssembly is not defined` (e.g. it won't be possible to instantiate wasm blobs).
And JSPI has been a standard since April and is available in Chrome >= 137. I think JSPI is the greatest step forward for WebAssembly in the browser ever. Just need Firefox and Safari to implement it...
WASM strings aren't JS strings; they're byte arrays. (WASM only knows about bytes, numbers (integers/floats), arrays, functions, and modules.)
In the old days, to pass a JS string to WASM, you'd first have to serialize the JS string to a byte array (with the JS TextEncoder API, usually), and then copy the byte array into WASM, byte by byte. That took two O(n) steps, one to serialize to a byte array, and another to copy the byte array.
Well, now, you can serialize the JS string to a byte array and then transmit it by reference to WASM, saving you a copy step. You still have one O(n) step to serialize the string to a byte array, but at least you only have to do it once, amirite?
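In code, the old copy-in path looks roughly like this (a sketch; `alloc` and `takeString` are hypothetical module exports):

  function passString(instance: WebAssembly.Instance, s: string): void {
    const exports = instance.exports as {
      memory: WebAssembly.Memory;
      alloc: (len: number) => number;
      takeString: (ptr: number, len: number) => void;
    };
    const bytes = new TextEncoder().encode(s);  // O(n): serialize to UTF-8
    const ptr = exports.alloc(bytes.length);    // reserve space on the WASM heap
    new Uint8Array(exports.memory.buffer, ptr, bytes.length).set(bytes); // O(n): copy
    exports.takeString(ptr, bytes.length);
  }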
If you want your WASM to call `document.createElement("div")`, you can pass `document` and `createElement` by reference from JS to WASM, then have WASM create an array `['d', 'i', 'v']`, and send all of those back to JS, where the JS will convert the array back into a JS string and then call `createElement.call(document, "div")`.
It's better, certainly, but it's never going to be as fast as just calling `document.createElement("div")` in JS, not as long as `createElement` requires a JS string instead of a byte array.
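That glue, on the JS side, looks roughly like this (a sketch; the `dom` module and `create_element` import are made up):

  // WASM hands over (pointer, length); JS rebuilds the JS string (O(n)) and
  // only then can the real DOM call happen.
  function makeDomImports(memory: WebAssembly.Memory) {
    return {
      dom: {
        create_element: (ptr: number, len: number): HTMLElement => {
          const tag = new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
          return document.createElement(tag);
        },
      },
    };
  }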
The proper fix would be to define a whole new "low-level DOM API", which would work exclusively with byte arrays.
That's what we're probably never going to get, because it would require all of the browser vendors (Apple, Google, Microsoft, and Mozilla) to standardize on a new thing, in the hopes that it would be fast enough to be worth their trouble.
Today, they don't even want to discuss it; they think their time is better spent making existing web apps faster than making a new thing from scratch that ought to be faster, if only a multi-year (decade??) effort comes to fruition.
I felt something was really lost once CSS classes became randomised garbage on major sites. I used to be able to fix/tune a website layout to my needs, but now it's pretty much a one-time effort before the IDs all change.
Remember when the trend was "semantic class names" and folk would bikeshed the most meaningful easy to understand naming schemes?
How we have fallen.
By default maybe, but JS obfuscators exist so not really. Many websites have totally incomprehensible JS even without obfuscators due to extensive use of bundlers and compile-to-JS frameworks.
I expect if WASM gets really popular for the frontend we'll start seeing better tooling - decompilers etc.
In any case, I would probably define a system which doesn't simply map the DOM API (objects and properties) into a granular set of functions on the WASM side (e.g. granular setters and getters for each DOM object property).
Instead I'd move one level up and build a UI framework where the DOM is abstracted away (quite similar to all those JS frameworks), and where most of the actual DOM work happens in sufficiently "juicy" JS functions (e.g. not just one line of code to set a property).
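To make that concrete, here's a sketch of one such "juicy" entry point (names invented): one boundary crossing that applies a whole batch of DOM work:

```ts
// A single coarse-grained call performs many DOM operations,
// instead of one boundary crossing per property.
interface RowUpdate { id: string; label: string; highlighted: boolean }

function applyRowUpdates(updates: RowUpdate[]): void {
  for (const u of updates) {
    const el = document.getElementById(u.id);
    if (!el) continue;
    el.textContent = u.label;                        // several DOM ops per call...
    el.classList.toggle("highlight", u.highlighted); // ...not one op per crossing
  }
}
```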
Just a note, but there is burgeoning support for this in "modern" WebAssembly:
https://github.com/bytecodealliance/jco/tree/main/examples/c...
If raw WebIDL binding generation support isn't interesting enough:
https://github.com/bytecodealliance/jco/blob/main/packages/j...
https://github.com/bytecodealliance/jco/blob/main/packages/j...
https://github.com/bytecodealliance/jco/blob/main/packages/j...
Support is far from perfect, but we're moving towards a much more extensible and generic way to support interacting with the DOM from WebAssembly -- and we're doing it via the Component Model and WebAssembly Interface Types (WIT) (the "modern" in "modern" WebAssembly).
What's stopping us the most from being very effective in browsers is the still-experimental browser shim for components in Jco specifically. This honestly shouldn't be blocking us at this point but... It's just that no one has gotten around to improving and refactoring the bindings.
That said, the support for DOM stuff is ready now (you could use those WIT interfaces and build DOM manipulating programs in Rust or TinyGo or C/C++, for example).
P.S. If you're confused about what a "component" is or what "modern" WebAssembly means, start here:
https://component-model.bytecodealliance.org/design/why-comp...
If you want to dive deeper:
https://github.com/WebAssembly/component-model
> (the async and stream support coming in Preview 3 are the real missing pieces for my usecases).
Currently this is a huge focus for most of the people working on this stuff, and Jco is one of the implementations that needs to be finished before P3 can ship, so we're hard at work on it.
> exciting step towards real useful polyglot libraries I have seen in years
I certainly agree (I'm biased) -- I think it's going to be a kind of tech that is new for a little bit and then absolutely everywhere and mostly boring. I think the docker arc is almost guaranteed to happen again, essentially.
The architectural underpinnings, implementation, and possibilities unlocked by this wave of Wasm are amazing -- truly awesome stuff, many years in the making thanks to many dedicated contributors.
When writing most non-web software, you can usually write it quickly in a high-level language (with a rich standard library and garbage collection), but you can get better performance (with more developer effort) by writing your code in a lower-level language like C or Rust.
What developers are looking for is a way to take UI-focused DOM-heavy web apps, RIIR, and get a performance improvement in browsers.
That is not ready now. It's not even close. It might literally never happen.
What is ready now is a demo project where you can write WASM code against a DOM-like API running in Node.js.
What you have is an interesting demo, but that's not what we mean when we ask when WASM will get "DOM support."
Could you expand a bit on what you expect would make this possible? What would you list as the most important blockers stopping people from getting there right now, in your mind?
But, sure, in good faith, here's the problem.
Today, if you take a UI-focused DOM-heavy web app that makes lots and lots of DOM API calls (i.e. most JS web apps ever written) and try to rewrite it in Rust, you'll have to cross the boundary between JS and WASM over and over again, every time you use a DOM API. Every time you add/remove/update an element, or its styles, or handle a click event, you'll cross the boundary.
The boundary is slow because every time you touch a JS string (all CSS styles are strings!), you'll have to serialize it (in JS) into a byte array and send it into WASM land, do your WASM work, transfer back a byte array, and deserialize it into a JS string. (In the bad old days you had to copy the byte array in and out to do any I/O, but at least we have reference types now.)
And it's not just strings. All JS objects that you need to do actual work with have to be serialized/deserialized in this way, because WASM only knows about bytes, arrays, and opaque pointers. DOM elements/attributes, DOM style properties, DOM events (click events, keyboard events, etc.), all of them get slow when you transfer them in and out of WASM land, even with reference types, because of serialization/deserialization.
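For instance, just forwarding a click event means flattening it into numbers and bytes first. A sketch, with hypothetical `alloc`/`on_click` exports:

```ts
// DOM events can't cross the boundary as objects: every field WASM needs
// has to be flattened into numbers/bytes first.
declare const wasm: {
  alloc(len: number): number;
  on_click(x: number, y: number, idPtr: number, idLen: number): void;
  memory: WebAssembly.Memory;
};

document.addEventListener("click", (e) => {
  const bytes = new TextEncoder().encode((e.target as HTMLElement).id);
  const ptr = wasm.alloc(bytes.length);
  new Uint8Array(wasm.memory.buffer, ptr, bytes.length).set(bytes);
  wasm.on_click(e.clientX, e.clientY, ptr, bytes.length); // the numbers cross cheaply; the string didn't
});
```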
WASM interface types will make it easier to call JS from WASM, but as long as you're still calling JS in the end, rewriting in Rust will never make a DOM-heavy web app faster than writing it in JS.
That's why this sucks! Rewriting a Node.js app in Rust (or Go or Zig, etc.) normally yields huge performance gains (at huge developer effort), but rewriting a JS DOM-heavy web app in Rust just slaps Rust on top of JS; it usually makes it slower.
The only fix, as Daniel's article explains, would be to standardize a low-level DOM API, one that didn't assume that you can use JS strings, objects+properties, exceptions, promises, etc. This would be an unimaginably large standardization project.
You couldn't use WebIDL at all; you'd need to start by defining a new "low-level WebIDL." Then, you'd start standardizing the entire DOM API, all over again (or at least the most important parts) in low-level WebIDL, and then browser vendors could start implementing the low-level DOM API.
Then WASM could start calling that API directly. And maybe then you could rewrite web apps in Rust and have them get faster.
Until then, WASM is only faster for CPU-intensive tasks with I/O at the beginning/end, and otherwise it's only good for legacy code, where you don't have time to make it faster by rewriting it in JS.
(It should sound insane to anyone that taking C++ and rewriting it in JS would make it faster, but that's how it is on the web, because of this WASM boundary issue.)
So, what's the most important blocker? (gestures toward the universe) All of it??
There's a way to make JS functions callable by WASM, and that's how people build a bridge from WASM to the DOM, but it involves extra overhead versus some theoretical direct access.
It's just weird that by this logic, JavaScript - the more high-level, less typesafe and less performant language - would be the kernel, while performance-optimized WASM code would be the userspace program.
Would it be nice? Yes. But.
Every added feature is a trade-off between need vs. outlay, overhead, complexity, and other drawbacks. To justify the latter, the need must be significant enough. I'd like to have DOM access, but I don't feel the need is significant.
Some thoughts on use-cases:
1. "Inactive" or "in-instance" DOM APIs for string parsing, document creation, in-memory node manipulation, serialisation: this is all possible today in WASM with libraries. Having it native might be cool but it's not going to be a significantly different experience. The benefits are marginal here.
2. "Live / active" or "in-main-thread" direct access APIs to manipulate rendered web documents from a WASM instance - this is where the implementation details get extremely complex & the security surface area starts to really widen. While the use-cases here might be a bit more magical than in (1), the trade-offs are much more severe. Even outside of security, the prospect of WASM code "accidently" triggering paints, or slow / blocking main thread code hooked on DOMMutation events is a potential nightmare. Trade-offs definitely not worth it here.
Besides, if you really want to achieve (2), writing an abstraction to link main-thread DOM APIs to WASM postMessage calls isn't a big lift & serves every reasonable use-case I can think of.
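A minimal sketch of that bridge (the op vocabulary is invented; extend as needed):

```ts
// The worker hosts the WASM instance; the main thread owns the DOM
// and applies batched ops posted back via postMessage.
type DomOp =
  | { op: "setText"; id: string; value: string }
  | { op: "addClass"; id: string; value: string };

const worker = new Worker("wasm-host.js");
worker.onmessage = (msg: MessageEvent<DomOp[]>) => {
  for (const m of msg.data) {
    const el = document.getElementById(m.id);
    if (!el) continue;
    if (m.op === "setText") el.textContent = m.value;
    else el.classList.add(m.value);
  }
};
```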
You can even compile Elixir to wasm-fx and run the actor model; it's super fun and mad, but what you can't do is avoid dealing with the technicalities.
So either you buy into one of the frameworks that are (not) built on top of wasm and lock into their paradigm or roll your own, because wasm proper doesn't even have any abstractions above numbers on a stack.
If you're writing a DOM-heavy UI app, use JavaScript. Many WASM apps, like games, have no interest in the DOM. It's just more spec bloat.
My apps just go:embed all their assets and spawn a local webview as their UI, which is quite nice because client and server use the same schemas and same validations for e.g. web forms and the fetch/REST APIs.
Server-side-rendered components are implemented using a web components graph whose components can be String()ified into HTML.
It's a bit experimental though, and the API in the components graph might change in the future:
https://github.com/cookiengineer/gooey
https://queue.acm.org/issuedetail.cfm?issue=3747201
I also don't think it has been posted here, so feel free to do so.
Things like Qt and browsers became popular because people realized they could short-circuit OS vendors asking developers to be loyal to them. The glue won.
But Qt and browsers and JS are just hotfixes; they're not sound technologies, they're just glue.
One of the reasons I’m interested in wasm is to get away from the haphazardly evolved JS ecosystem…
Has been used by most of the Rust web frontend frameworks for years.
It all has to go through JS shims though, limiting the performance potential.
[1] https://docs.rs/web-sys/latest/web_sys/
[*]Yeah, the toolchains help solve this a bit, but it still makes me ship JS and wasm side-by-side.
You mean like a list of JS functions that are imported into the Wasm binary? This has been there since day one:
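A minimal sketch of that mechanism (function names invented):

```ts
// Day-one WASM: JS functions passed in the import object become
// directly callable from inside the module.
const importObject = {
  env: {
    js_log: (n: number) => console.log("from wasm:", n),
  },
};
const { instance } = await WebAssembly.instantiateStreaming(fetch("demo.wasm"), importObject);
(instance.exports.run as () => void)();
```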
> Everyone having to write all their own glue[*] is just nuts at this point.

Did you mean for the specific programming language you use? If so, then that seems like a problem for the language implementor, not a problem with Wasm. Rust has wasm-bindgen, Emscripten has its thing, and so on.
Does anybody know why it is such a big problem to add DOM access to WASM?
In the worst case, we should have a second option besides JS (and not TypeScript; TypeScript is just lipstick on a pig). If WASM is not it, why not something different? Having Dart in the browser would be great.
Well, the article does a pretty good job of answering this specific question ;)
It gives good reasons why we can't have specific parts. Having the JavaScript standard library in WebAssembly would be hard (was anyone actually asking for that?), and some of the modern APIs using promises or iterators wouldn't have a clear mapping. Also, not everything could be zero-copy for every language.
But the article doesn't do a very good job explaining why we can't have some dom access, at least for the 90% of DOM APIs not using JavaScript-specific features.
Most of the argument boils down to "you shouldn't want direct DOM access, because doing that would be work for these people, and we can instead have those other people do lots of work making the JavaScript bridge less painful. And anyways it's not clear if having these people make proper APIs would actually result in faster code than having those people do a separate sophisticated toolchain for each language"
It reads very much like a people-and-resource-allocation problem rather than a technical challenge.
The DOM is a Javascript API, so it uses 100% Javascript-specific features (every DOM manipulation requires accessing JS objects and their properties and lots of those properties and function args are Javascript strings) - none of those map trivially to WASM concepts.
It's a bit like asking why x86 assembly code doesn't allow "C++ stdlib access"; the question doesn't even make much sense ;)
Or is there something in the browser architecture that requires them to be JavaScript objects with the memory layout of the JavaScript engine, rather than just conceptually being objects?
The underlying problem is that you need to translate between a Javascript-idiomatic API and whatever is an idiomatic API in the language you compile into WASM, and idiomatic C, C++, Rust, ... APIs all look very different so there isn't a single answer. It's not WASM being relevant here, but the high level source language that's compiled into WASM, and how well the DOM JS API (and the Javascript concept it is built on) map to APIs for various 'authoring languages'.
The whole problem is really quite similar to connecting two higher-level languages (e.g. Rust and C++) through a C API, since that's the common interface both sides can talk to. That doesn't mean you can simply send a Rust String to the C++ side and automatically get a std::string; that sort of magic needs to be explicitly implemented in a shim layer sitting between the two worlds. And to stay with the string example: there is no such thing as a WASM string; how string data is represented on the WASM side depends on the source language that's compiled to WASM.
Of course you couldn't provide idiomatic versions for every language, but the JS shims can't really do that either. Providing something close to idiomatic C would be a huge step up; language libraries could then either offer a C-like API or build new abstractions on top of it.
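Purely as an illustration of the shape such a surface could take (nothing like this is standardized; all names invented): a flat, handle-based API where only integers and pointer+length pairs cross the boundary:

```ts
// Hypothetical "close to C" DOM surface: no JS strings, objects, promises,
// or exceptions, so any C-flavored language could bind to it directly.
interface LowLevelDom {
  dom_create_element(tagPtr: number, tagLen: number): number; // returns an opaque node handle
  dom_set_attribute(
    node: number,
    namePtr: number, nameLen: number,
    valPtr: number, valLen: number,
  ): void;
  dom_append_child(parent: number, child: number): void;
}
```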
That's easier said than done because of details like this: you can't build a JS string from the C++ side, so the translation from string data on the WASM heap into a JS object needs to happen on the JS side.
But this is how all the Emscripten web API shims work, and they do this quite efficiently, and some of those shims also split their work between the JS and C/C++ side.
So to the programmer it does look like with Emscripten there's direct access to the (for instance) WebGL or WebGPU APIs, but in reality there's still quite a bit of JS code involved for each call (which isn't a problem really as long as no expensive marshalling needs to happen in such a call, since the call overhead from WASM into JS alone is really minimal).
On the one hand, JS DOM objects are IDL-generated wrappers for C++. In theory we could generate more WASM-friendly wrappers.
On the other, the C++ code implementing the API will be tightly coupled to the entire JS type system and runtime. Not just the concept of an object, but every single design decision from primitives to generators to dynamic types to prototypal inheritance to error handling...
Also, I believe the C++ DOM implementation itself is pretty tightly integrated with JavaScript and its memory management, e.g. nodes have references into the managed heap so they can use JS objects directly, like EventListeners and JS functions.
Creating a new non-JS DOM API doesn't sound intractable to me... but browsers annihilate my assumptions so it's probably millions of hours of effort and close to a rewrite...
Maybe I misunderstood, but isn't DOM access in essence the ability to change the HTML tree? Since this is WASM, why would it need to reimplement the JS API and need type mappings? Couldn't it be something different?
It doesn't need to be the same API... but implementing a new DOM API that doesn't meet the W3C standard is a bit on the nose. It's meant to be language-independent, hence the IDL.
Looking into it, the IDL might insulate the existing API implementation from JS to a greater degree than I assumed above. Apparently there might be horrors lurking in the binding-generation code, though. You can poke around in Blink:
https://github.com/chromium/chromium/blob/main/third_party/b...
> isn't DOM access in essence the ability to change the HTML tree
It might be more accurate to look at it the other way: our current DOM implementations were created to implement the "DOM API Standard". The standard dictated the types and how reading/mutation works.
> need type mappings
I can't imagine how it can avoid type mappings for things like creating unattached bits of DOM, or binding callbacks that receive bits of DOM.
Personally I might be happy with a tiny WASM API... but then I foresee 10 years of maddening omissions, security bugs, and endless moaning because they didn't just implement the standard everyone already knew.
When writing most non-web software, you can usually write it easily in a high-level language (with a rich standard library and garbage collection), but you can get better performance (with more effort) by writing your code in a lower-level language.
WASM seems like an opportunity to get better performance by rewriting JavaScript web apps in lower-level languages like C or Rust, but it doesn't work that way, because of standardization.
Today, the web's core DOM APIs are defined in standards committees as inherently JavaScript APIs, in the form of WebIDL documents. https://developer.mozilla.org/en-US/docs/Glossary/WebIDL
When defining standardized APIs in WebIDL, WebIDL assumes that you can use JavaScript strings, JavaScript objects + properties, JavaScript Exceptions, JavaScript Promises, JavaScript garbage collection, and on and on and on. Almost all of the WebIDL specification is about the dozens of types that it assumes the platform already provides. https://webidl.spec.whatwg.org/
WASM doesn't have any of those things.
No one has ever standardized a DOM API for low-level languages. You'd need to start by standardizing a new "low-level API" for DOM access, and presumably a new low-level WebIDL to define those standards.
Designing the web by committee makes it hard to add/change stuff in browsers. You have to get Apple, Google, Microsoft, and Mozilla to agree on literally everything. (Defining WebIDL itself has taken decades!)
It can be hard to even get agreement from browser vendors to discuss the same topic, to even just get them to read your proposed standards document and to say "no, we won't implement it like this, because..." You have to convince them that the standard you're proposing is one of their top priorities. (And before you can do that, you have to convince them to pay attention to you at all.)
So, someone would have to persuade all of the browser vendors that one of their top priorities should be to invent a new way to standardize DOM APIs and begin the process of standardizing DOM access on top of a lower-level IDL.
Today, the browser vendors aren't convinced that this is worth their time. As the article says:
> For now, web folks don't seem to be sold on the urgency of this very large project. There is no active work by browser vendors in this direction.
And that's why you can't get top-notch performance by rewriting your web app in Rust. You can rewrite your web app in Rust, and it can access JS APIs, but when touching the DOM APIs, Rust has to interop with JS. Rust's interop with JS is no faster than JS itself (and it's often slower, because it requires added glue code, translating between JS and WASM).
As a result, if you're writing a web app, you mostly have to do it in JS. If you have some very CPU-intensive code, you can write that in WASM and slowly copy the result of your computation to JS, as long as you don't cross the boundary between WASM and JS too often.
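Sketched out, that shape is two boundary crossings in total (export names and result layout hypothetical):

```ts
// One call in, one bulk copy out; the long-running work stays inside WASM.
declare const wasm: {
  crunch(iterations: number): number; // returns a pointer into the WASM heap
  result_len(): number;
  memory: WebAssembly.Memory;
};

const outPtr = wasm.crunch(1_000_000);
const view = new Float64Array(wasm.memory.buffer, outPtr, wasm.result_len());
const results = view.slice(); // a single O(n) copy into a JS-owned buffer, done once at the end
```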
Alternately, if you have existing code in non-JS languages, you can port it to web via WASM, but it'll probably run slower that way; the best performance improvement you can do is to rewrite it in JS!
Maybe my read is wrong, but everything I look at today just screams to me that the web is extremely poorly designed; everything about it is simply wrong.
Maybe we should stop overdesigning things and keep it simple. WASM needs more tooling around primitive types, threading, and possibly a more flexible memory layout than what we have now.
Emscripten + Embind allow you to subclass and implement C++ interfaces in TypeScript, and easily call back and forth, even pass typed function pointers back and forth, using them to call C++ from TypeScript and TypeScript from C++!
Embind: https://emscripten.org/docs/porting/connecting_cpp_and_javas...
Interacting with Code: https://emscripten.org/docs/porting/connecting_cpp_and_javas...
Embind's bind.cpp plumbing: https://github.com/emscripten-core/emscripten/blob/main/syst...
C Emscripten macros (like EM_ASM_): https://livebook.manning.com/book/webassembly-in-action/c-em...
Emscripten’s embind: https://web.dev/articles/embind
I'm using it for the WASM version of Micropolis (open source SimCity). The idea is to be able to cleanly separate the C++ simulator from the JS/HTML/WebGL user interface, and also make plugin zones and robots (like the monster or tornado or train) by subclassing C++ interface and classes in type safe TypeScript!
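For flavor, the TypeScript side of such a subclass looks roughly like this (all names here are hypothetical; `extend` is Embind's documented mechanism for deriving from C++ classes in JS, assuming the C++ class was registered with the allow_subclass policy):

```ts
// Assumes an Emscripten MODULARIZE build exposing a generated factory.
declare function createMicropolisModule(): Promise<any>;

const Module = await createMicropolisModule();

const JSCallback = Module.Callback.extend("JSCallback", {
  // Overrides a pure-virtual C++ method; the simulator calls back into this body.
  didGenerateMap() { console.log("new map generated"); },
});

Module.micropolis.setCallback(new JSCallback()); // hypothetical wiring
```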
emscripten.cpp binds the C++ classes and interfaces and structs to JavaScript using the magic plumbing in "#include <emscripten/bind.h>".
There is an art to coming up with an elegant interface at the right level of granularity that passes parameters efficiently (using zero-copy shared memory when possible, i.e. C++ SimCity Tiles <=> JS WebGL Buffers for the shader that draws the tiles) -- see the comments in the file about that:
emscripten.cpp: https://github.com/SimHacker/MicropolisCore/blob/main/Microp...
Here's the WebGL tile renderer that draws the tiles directly out of a Uint16Array pointing into WASM memory:
https://github.com/SimHacker/MicropolisCore/blob/main/microp...
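The zero-copy part boils down to something like this on the TypeScript side (a sketch; `HEAPU16` is Emscripten's standard heap view, while the pointer accessor and map dimensions are hypothetical):

```ts
// View the C++ tile buffer in place; no serialization, no copy.
declare const Module: { _getTileBufferPtr(): number; HEAPU16: Uint16Array };
const WORLD_W = 120, WORLD_H = 100; // hypothetical map dimensions

const ptr = Module._getTileBufferPtr(); // hypothetical C export: byte offset into the WASM heap
const tiles = new Uint16Array(Module.HEAPU16.buffer, ptr, WORLD_W * WORLD_H);
// C++ writes tiles in place; the WebGL uploader reads this view each frame.
```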
The corresponding C++ source and header and TypeScript files define the callback interface and plumbing:
callback.h defines the abstract Callback interface, as well as a ConsoleCallback interface that just logs to the JS console, for debugging:
callback.h: https://github.com/SimHacker/MicropolisCore/blob/main/Microp...
callback.cpp implements just the concrete ConsoleCallback interface in C++, with "EM_ASM_" glue to call out to JavaScript and simply log the parameters of each call:
callback.cpp: https://github.com/SimHacker/MicropolisCore/blob/main/Microp...
js_callback.h contains an implementation of the Callback interface that caches an "emscripten::val jsCallback" (an Emscripten value reference to a JS object that implements the interface) and uses jsCallback.call to make calls to JavaScript:
js_callback.h: https://github.com/SimHacker/MicropolisCore/blob/main/Microp...
Then emscripten/embind generates a TypeScript file that defines the JS side of things:
micropolisengine.d.ts: https://github.com/SimHacker/MicropolisCore/blob/main/microp...
Then you can import that TypeScript interface (using a weird "<reference path=" thing that I don't quite understand but which is necessary) and implement it in nice clean type-safe TypeScript:
https://github.com/SimHacker/MicropolisCore/blob/main/microp...
It's all nice and type safe, and Doxygen will even generate documentation for you:
https://micropolisweb.com/doc/classJSCallback.html
And it even works, and it's pretty fast! (Type "9" to go super fast, but for the love of god DO NOT PRESS THE SPACE BAR!!!)
https://micropolisweb.com
This subject is interesting because your typical college-educated developer HATES the DOM with extreme passion, because it's entirely outside their comfort zone. The typical college-educated developer is trained to program in something like Java, C#, or C++, and taught that that is how the world is supposed to work. The DOM doesn't work like that. It's a graph of nodes in the form of a tree model, and many developers find that to be scary shit. That's why we have things like jQuery, Angular, and React.
These college-educated developers also hate JavaScript for the same reasons: it doesn't behave like Java. So for many developers the only value of WASM is as a JavaScript replacement. WASM was never intended or positioned to be a JavaScript replacement, so it doesn't get used very often.
Think about how bloated and slow the web could become if WASM were a JavaScript replacement. Users would have to wait for the whole runtime and its dependencies to download into the WASM sandbox and then open like a desktop application, and then all of that would get wrapped in something like Angular or React anyway, because the DOM is still scary.