Is this Triton's reply to NVIDIA's tilus[1]? Tilus is supposed to be lower level (e.g. you have control over registers). NVIDIA really does not want the CUDA ecosystem to move to Triton, as Triton also supports AMD and other accelerators. So with Gluon you get access to lower-level features and you can stay within the Triton ecosystem.
[1] https://github.com/NVIDIA/tilus
It sounds like they share that goal. Gluon is a thing because the Triton team realized over the last few months that Blackwell is a significant departure from Hopper, and achieving >80% SoL kernels is becoming intractable because the Triton middle-end simply can't keep up.
Some more info in this issue: https://github.com/triton-lang/triton/issues/7392
Also it REALLY jams me up that this is a thing, complicating discussions: https://github.com/triton-inference-server/server
Why is zog so popular these days? Seems really cool but I have yet to get the buzz / learn it.
Is there a big reason why Triton is considered a "failure"?
Not to be confused with the Gluon UI toolkit for Java: https://gluonhq.com/products/javafx/
The fact that the "language" is still Python code which has to be traced in some way is a bit off-putting. It feels a bit hacky. I'd rather have a separate compiler, honestly.
Mojo for Python syntax without the AST-walking decorator, CUDA for C++ syntax with direct control over the machine, ad hoc code generators writing MLIR for data-driven parametric approaches. The design space is filling out over time.
This is pretty common among these ML toolchains, and not a big deal. They use Python's ast lib and the function annotations to implement an AST walker and code generator. It works quite well.
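To make that concrete, here's a minimal sketch of the pattern (hypothetical names, not Triton's actual internals): a decorator grabs the function's source, parses it with Python's ast module, and a NodeVisitor walks the tree to record a toy IR instead of ever running the body as ordinary Python.

    import ast
    import inspect

    class ToyIRBuilder(ast.NodeVisitor):
        """Walks a function's AST and records a toy textual IR."""
        def __init__(self):
            self.ir = []

        def visit_BinOp(self, node):
            # Record the arithmetic op; a real code generator would emit MLIR here.
            self.ir.append(f"op {type(node.op).__name__.lower()}")
            self.generic_visit(node)

    def toy_jit(fn):
        # Parse the decorated function's source instead of executing it.
        tree = ast.parse(inspect.getsource(fn))
        builder = ToyIRBuilder()
        builder.visit(tree)
        fn._ir = builder.ir  # the "compiled" artifact, attached for inspection
        return fn

    @toy_jit
    def kernel(x, y):
        return x * 2 + y

    print(kernel._ir)  # ['op add', 'op mult'] -- outer Add visited first, then the nested Mult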
Yeah that struck me as odd. It's more like a Python library or something.
It’s a DSL, not a library. The kernel launch parameters and the AST walk generate IR from the Python.
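For context, this is roughly what that looks like on the Triton side (the standard vector-add tutorial kernel): the @triton.jit body is never run as plain Python; it's walked to produce IR, and the grid in square brackets supplies the launch parameters.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)                       # which program instance this is
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements                       # guard the ragged tail
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    x = torch.rand(4096, device="cuda")
    y = torch.rand(4096, device="cuda")
    out = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)  # launch parameters in brackets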
Not to be confused with gluon, the embeddable language in Rust: https://github.com/gluon-lang/gluon