Hi! Thanks for this tutorial. I notice you work with Ingo Wald at the fabled Utah graphics program. I just recently linked Ray Tracing Gems right here on HN and am working my way through it. Very insightful ;)
Am curious if your group investigates using WebGPU in contexts other than rendering? Web scale GPU compute clusters, wgpu on native, scientific simulations, ai research ... as just a few possible examples?
Yeah! I've started looking at some WebGPU compute applications with other students in my group, and I think there could be some cool use cases, like the ones you mention. It sounds a bit odd, but yeah, WebGPU on native (by directly using Dawn, or wgpu-rs) is actually pretty compelling as a cross-platform low-level graphics API.
What's really cool is that compute and rendering using WebGPU can get near-native performance. So a lot of scientific applications (which typically rely on FLOPs/parallel processing) can be implemented in WebGPU compute without sacrificing much performance. I'm not sure how many simulations would be ported to WebGPU, since they usually end up targeting large-scale HPC systems and CUDA, but for visualization applications I think the use case is pretty compelling, especially for portability and ease of distribution.

On the compute side, I implemented a data-parallel Marching Cubes example: https://github.com/Twinklebear/webgpu-experiments , and found the performance is on par with my native Vulkan version. You can try it out here: https://www.willusher.io/webgpu-experiments/marching_cubes.h... . There is a pretty high first-run overhead, but try moving the slider around some to see the extraction performance after that.

WebGPU for parallel compute, combined with WebAssembly for serial code (or just easily porting older native libs), will make the browser a lot more capable for compute-heavy applications. You could also combine these more capable browser clients with a remote compute server, where the server does the heavier processing while the client handles medium-scale work to reduce latency, or works on representative subsets of the data.
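To give a feel for what "data-parallel Marching Cubes" means, here's a rough sketch in plain JavaScript of the first two steps (classify active voxels, then an exclusive scan to assign compact output slots). On the GPU each loop body would run as one compute-shader invocation and the scan would be a parallel kernel; the function names and the grid layout here are just illustrative, not taken from the linked repo.

```javascript
// A voxel is "active" if the isosurface crosses it, i.e. its 8 corner
// values straddle the isovalue. Field is stored z-major: index = z*ny*nx + y*nx + x.
function classifyVoxels(field, dims, isovalue) {
  const [nx, ny, nz] = dims;
  const active = new Uint32Array((nx - 1) * (ny - 1) * (nz - 1));
  const at = (x, y, z) => field[z * ny * nx + y * nx + x];
  let i = 0;
  for (let z = 0; z < nz - 1; ++z)
    for (let y = 0; y < ny - 1; ++y)
      for (let x = 0; x < nx - 1; ++x, ++i) {
        let below = 0, above = 0;
        for (let c = 0; c < 8; ++c) {
          // Corner c of the voxel, offsets taken from the bits of c.
          const v = at(x + (c & 1), y + ((c >> 1) & 1), z + ((c >> 2) & 1));
          if (v < isovalue) ++below; else ++above;
        }
        active[i] = (below > 0 && above > 0) ? 1 : 0;
      }
  return active;
}

// Exclusive prefix sum over the active flags gives each active voxel a
// compact output offset, and the total tells you how much vertex memory
// to allocate before the triangle-generation pass.
function exclusiveScan(flags) {
  const offsets = new Uint32Array(flags.length);
  let sum = 0;
  for (let i = 0; i < flags.length; ++i) { offsets[i] = sum; sum += flags[i]; }
  return { offsets, total: sum };
}

// Tiny 2x2x2 grid: a single voxel whose corners straddle isovalue 0.5.
const field = new Float32Array([0, 0, 0, 0, 1, 1, 1, 1]);
const { offsets, total } = exclusiveScan(classifyVoxels(field, [2, 2, 2], 0.5));
console.log(total); // 1 active voxel
```

The nice part of this structure is that every stage is embarrassingly parallel or a standard scan, which is why it maps so cleanly onto WebGPU compute shaders.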