WebGPURenderer Performance Significantly Lower Than WebGLRenderer #30560
Comments
To further explain the major performance gap, this is how the draw calls look in a GPU capture: [screenshot omitted] There are no major state changes between the draw calls (except for some single uniform updates, which are not displayed in the list). Compared to that: [screenshot omitted] As you can see, there is a considerable amount of state changes between each draw call. #30562 fixes the VAO-related issues, but they are unfortunately negligible compared to the UBO-related overhead. I guess we need a different approach in the renderer to minimize these state changes. @sunag @RenaudRohlinger @aardgoose Would a single UBO for all object-scope uniforms be a potential solution?
Nice catch with the VAO! Related (I like the CommonUniformBuffer interface): Unless we implement a pool system, I don't think we can use a single UBO for all object-scope uniforms as a solution, since we'd be very limited in the number of meshes. With a typical 16KB max block size and each mat4 taking 64 bytes in std140, that limits us to about 256 meshes.
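The capacity estimate above can be checked with a quick back-of-the-envelope calculation (a sketch: 16 KB is the minimum `MAX_UNIFORM_BLOCK_SIZE` guaranteed by the WebGL 2 spec, and 64 bytes is the std140 footprint of one mat4):

```javascript
// Minimum uniform block size guaranteed by the WebGL 2 spec (bytes).
const MAX_UNIFORM_BLOCK_SIZE = 16 * 1024;

// A std140 mat4 is stored as 4 vec4 columns of 16 bytes each.
const MAT4_STD140_BYTES = 4 * 16;

// Upper bound on meshes if each one only needs a single mat4.
const meshesPerBlock = Math.floor(MAX_UNIFORM_BLOCK_SIZE / MAT4_STD140_BYTES);
console.log(meshesPerBlock); // 256
```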
Good to know that. I hope we can revisit #27388 soon.
This is the limitation per draw call as guaranteed by the WebGL 2 specification. You can have a larger buffer bound and adjust the offset dynamically. I've shared many words on this and on scheduling in general, but reading this, I'm not sure they've been heard.
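One detail worth noting about the "larger buffer, dynamic offset" approach: offsets passed to `bindBufferRange` must be multiples of `UNIFORM_BUFFER_OFFSET_ALIGNMENT` (implementation-defined; 256 is a common value). A minimal sketch of how per-object slots in one large shared buffer could be laid out (helper names and the 80-byte uniform size are illustrative):

```javascript
// Round `size` up to the next multiple of `alignment`.
function alignTo(size, alignment) {
  return Math.ceil(size / alignment) * alignment;
}

// Example: each object needs 80 bytes of uniforms, and the driver reports
// gl.getParameter(gl.UNIFORM_BUFFER_OFFSET_ALIGNMENT) === 256 (assumed).
const alignment = 256;
const perObjectBytes = alignTo(80, alignment);

// Byte offset of object `i` inside the shared uniform buffer.
const offsetOf = (i) => i * perObjectBytes;

console.log(perObjectBytes); // 256
console.log(offsetOf(3));    // 768
```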
Can this issue be clarified: is it performance with the WebGL fallback backend that the OP has an issue with, the WebGPU backend, or both? Re #27388, I'll revisit it in a few weeks' time. I recall looking at applying a similar mechanism to the WebGL fallback, but found that more complicated because of the different API styles rather than a buffer-size issue, although I'd have to check.
Both backends have the performance issue.
If I understand the spec correctly, you can use https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml to bind a sub-range of a larger buffer per draw call. In WebGPU, the equivalent should be the dynamic offsets of https://www.w3.org/TR/webgpu/#gpubindingcommandsmixin-setbindgroup It seems both APIs are not used in #27388 yet.
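For illustration, a sketch of both mechanisms (assumed usage, not taken from #27388; the function names and the 256-byte slot size are hypothetical):

```javascript
// WebGL 2: bind one slot of a large shared UBO for object `i`.
// `slotBytes` must be a multiple of UNIFORM_BUFFER_OFFSET_ALIGNMENT.
function bindObjectRange(gl, ubo, blockBinding, i, slotBytes) {
  gl.bindBufferRange(gl.UNIFORM_BUFFER, blockBinding, ubo, i * slotBytes, slotBytes);
}

// WebGPU: reuse a single bind group, moving the window via a dynamic offset.
// The bind group layout entry must be created with `hasDynamicOffset: true`.
function setObjectBindGroup(pass, bindGroup, i, slotBytes) {
  pass.setBindGroup(0, bindGroup, [i * slotBytes]);
}
```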
#27388 only applies to WebGPU; the dynamicOffsets parameter isn't really useful in the current renderer AFAICS, since the offset in createBindGroup is all that is required to use a single buffer. The issue with WebGL is that the buffer updates and draw calls are interleaved and executed in a single pass, whereas the WebGPU renderer updates the array buffer and queues the draw calls for later execution; this allows the single buffer update to be inserted before the queued draw calls are executed. For a WebGL solution you need two passes through the render list, which doesn't match the current code structure.
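The two-pass idea described above could look roughly like this (a sketch under an assumed data layout; `writeUniforms` and `draw` are hypothetical per-mesh helpers, not three.js API):

```javascript
// Pass 1: write every object's uniforms into one CPU-side array, then
// upload the whole thing with a single bufferSubData call.
// Pass 2: issue the draws, only moving the UBO window per object.
function renderList(gl, ubo, meshes, slotBytes) {
  const staging = new Float32Array((meshes.length * slotBytes) / 4);
  meshes.forEach((mesh, i) => {
    // Hypothetical helper: packs this mesh's uniforms into its slot.
    mesh.writeUniforms(staging, (i * slotBytes) / 4);
  });
  gl.bindBuffer(gl.UNIFORM_BUFFER, ubo);
  gl.bufferSubData(gl.UNIFORM_BUFFER, 0, staging);

  meshes.forEach((mesh, i) => {
    gl.bindBufferRange(gl.UNIFORM_BUFFER, 0, ubo, i * slotBytes, slotBytes);
    mesh.draw(gl); // hypothetical: binds the VAO and calls drawElements
  });
}
```

The point of the restructuring is that the expensive upload happens once per render list instead of once per object, leaving only the cheap `bindBufferRange` between draws.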
I don't know if this is related to this point or if it is a separate topic. Since r173 I have noticed a frame drop (WebGPURenderer): suddenly the frame rate drops from 120 fps to 30 fps. I haven't changed anything in the app itself, just the three.js release from r172 to r173 and now r174, and I keep noticing this. There is no error message, which makes the analysis more difficult. The app runs at 120 fps and suddenly it drops to 30 fps; sporadically it peaks back to 120 fps. Since I'm not allocating any new buffers or new geometries, that's strange. Since up until r172 it always ran at 120 fps, something must have happened from r172 to r173.
@RenaudRohlinger Your extension seems to make the frame drop less frequent. I was curious and implemented it in my apps, but the frame drop still occurs. An interesting phenomenon: both times I just started the app and did nothing else, but in the first screenshot you can see a constant 120 fps, which was always the normal case. [screenshots omitted] In the second screenshot, three.js seems to fall into something like a safe mode at 30 fps. [screenshot omitted] It's definitely an improvement, because now it jumps from 30 fps to 120 fps instead of from 120 fps to 30 fps. I've tested opening the console several times, and so far, with your extension, it always triggers the jump from the faulty 30 fps back to 120 fps. That's very good, because it proves that your extension is on the right track.
Thanks @Spiri0! Could you try this in a different web browser with good WebGPU support (Chrome Canary, Chrome Beta, Edge), and potentially on a different device, to confirm that it's more on the three.js side rather than in how it interacts with your browser/GPU? Also, knowing your GPU model would help greatly.
Good point. I use SlimJet because Chrome on Linux had limited WebGPU support for some time, but with the current version of Chrome it's working normally again. So SlimJet was the cause. I tested it again with maximum WebGPU limits, and with the latest Chrome it runs at 120 fps again. That's reassuring.
Description
Summary
When switching from WebGLRenderer to WebGPURenderer, I experience a significant drop in performance. The same scene, containing thousands of non-instanced meshes, runs smoothly at 60 FPS on WebGL but drops to 15 FPS on WebGPU, a 4x decrease in performance.
Expected Behavior
WebGPURenderer should provide comparable or better performance than WebGLRenderer, given its modern API and intended improvements over WebGL.
Current Behavior
Rendering 20,000 non-instanced basic cube meshes:
WebGLRenderer: ~60 FPS on Mac (Apple Silicon M1 Pro)
WebGPURenderer: ~15 FPS (4x slower)
No errors or warnings appear in the Chrome console.
Reproduction steps
Code
see live example below
Live example
https://jsfiddle.net/15zfestk/1/
Screenshots
No response
Version
0.173.0
Device
Desktop
Browser
Chrome
OS
MacOS