Description
Should webGL mode be backed by a framebuffer (or p5.Graphics object)?
I was thinking a bit about webGL mode and how we might achieve better parity with the 2d renderer. Currently, webGL mode renders content directly to the canvas. If instead we rendered to a framebuffer, we could treat the main scene as a texture and add a post-processing stage whose output is what actually gets drawn to the screen. In that post-processing stage, it would be trivial to write shaders that perform the filter functions we have set up in 2d mode. There might be other benefits we haven't considered as well.
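To make the idea concrete, here's a rough sketch of what the two-pass setup could look like with raw WebGL calls. None of these names (`setupSceneFramebuffer`, `drawScene`, `drawFullscreenQuad`, `filterProgram`) are actual p5.js internals; this is just the general shape of render-to-texture plus a post-processing pass:

```javascript
// Pass 0 (setup): create an offscreen framebuffer whose color attachment
// is a texture, so the rendered scene can be sampled by a filter shader.
function setupSceneFramebuffer(gl, width, height) {
  // Color attachment: an empty texture the size of the canvas.
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

  // Depth attachment so 3D geometry still depth-tests correctly offscreen.
  const depth = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);

  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                             gl.RENDERBUFFER, depth);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { fbo, tex };
}

function drawFrame(gl, scene, filterProgram) {
  // Pass 1: render the sketch's geometry into the offscreen target
  // instead of directly to the canvas.
  gl.bindFramebuffer(gl.FRAMEBUFFER, scene.fbo);
  drawScene(gl); // whatever webGL mode currently draws to the canvas

  // Pass 2: draw a fullscreen quad to the real canvas, sampling the
  // scene texture through a filter shader (blur, invert, posterize, etc.).
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.useProgram(filterProgram);
  gl.bindTexture(gl.TEXTURE_2D, scene.tex);
  drawFullscreenQuad(gl);
}
```

The filter functions from 2d mode would then just be different fragment shaders bound in pass 2, with the scene texture as their input.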
One drawback is that webGL1 framebuffers do not support multisampling, so geometry would look a bit aliased unless we implement some kind of multisampling or antialiasing ourselves (which could be controlled with noSmooth() / smooth()). webGL2 does support MSAA, but I'm not sure if we're ready to take the plunge into webGL2 yet. If we used an additional p5.Graphics layer instead, it could be slightly less performant, but wouldn't face the anti-aliasing issues.
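For reference, if we did go the webGL2 route, MSAA would mean rendering into a multisampled renderbuffer and then resolving (blitting) it into a plain texture framebuffer before the post-processing pass. A minimal sketch, with illustrative names only:

```javascript
// Create a multisampled framebuffer (WebGL2 only). Its contents cannot be
// sampled directly as a texture, so it must be resolved first.
function setupMSAAFramebuffer(gl2, width, height, samples) {
  const rbo = gl2.createRenderbuffer();
  gl2.bindRenderbuffer(gl2.RENDERBUFFER, rbo);
  gl2.renderbufferStorageMultisample(gl2.RENDERBUFFER, samples,
                                     gl2.RGBA8, width, height);
  const fbo = gl2.createFramebuffer();
  gl2.bindFramebuffer(gl2.FRAMEBUFFER, fbo);
  gl2.framebufferRenderbuffer(gl2.FRAMEBUFFER, gl2.COLOR_ATTACHMENT0,
                              gl2.RENDERBUFFER, rbo);
  gl2.bindFramebuffer(gl2.FRAMEBUFFER, null);
  return fbo;
}

// Resolve the MSAA result into a single-sample framebuffer whose color
// attachment is a texture, so the filter shader can sample it.
function resolveMSAA(gl2, msaaFbo, resolveFbo, width, height) {
  gl2.bindFramebuffer(gl2.READ_FRAMEBUFFER, msaaFbo);
  gl2.bindFramebuffer(gl2.DRAW_FRAMEBUFFER, resolveFbo);
  gl2.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      gl2.COLOR_BUFFER_BIT, gl2.NEAREST);
}
```

That extra blit per frame is the cost of MSAA here, which is part of why the webGL1-only alternatives (manual supersampling, or an extra p5.Graphics layer) are worth weighing.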
I haven't fully thought this through, and it's likely there are other benefits and drawbacks to this strategy as well. Thoughts?