Expose a new function for ViewTarget and MainTargetTextures #18957
Conversation
…o allow rendering without a Camera. This is useful when performing multiple post-processing steps on a single texture!
Welcome, new contributor! Please make sure you've read our contributing guide and we look forward to reviewing your pull request shortly ✨
I'm not sure that I get the point of this. What's the difference between this, and implementing double buffering between two textures yourself?
Hmm... there probably wouldn't be much difference.
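To make the comparison concrete, here is a minimal CPU-side sketch of the double buffering ("ping-pong") pattern being discussed: two buffers stand in for the two textures, and each pass reads from one and writes into the other before swapping. The `PingPong` type and the `Vec<f32>` "textures" are illustrative assumptions, not Bevy or wgpu API.

```rust
// Minimal sketch of double buffering ("ping-pong") between two textures,
// modeled with plain Vec<f32> buffers instead of GPU textures.
// `PingPong` is an illustrative name, not a Bevy type.

struct PingPong {
    buffers: [Vec<f32>; 2],
    /// Index of the buffer holding the latest result.
    current: usize,
}

impl PingPong {
    fn new(initial: Vec<f32>) -> Self {
        let len = initial.len();
        Self {
            buffers: [initial, vec![0.0; len]],
            current: 0,
        }
    }

    /// Run one post-processing pass: read from the current buffer,
    /// write into the other one, then swap roles.
    fn pass(&mut self, f: impl Fn(f32) -> f32) {
        let (src, dst) = (self.current, 1 - self.current);
        let out: Vec<f32> = self.buffers[src].iter().map(|&v| f(v)).collect();
        self.buffers[dst] = out;
        self.current = dst;
    }

    fn result(&self) -> &[f32] {
        &self.buffers[self.current]
    }
}

fn main() {
    let mut pp = PingPong::new(vec![1.0, 2.0, 3.0]);
    pp.pass(|v| v * 2.0); // e.g. a brightness pass
    pp.pass(|v| v + 0.5); // e.g. a tone-mapping offset
    println!("{:?}", pp.result()); // prints [2.5, 4.5, 6.5]
}
```

On the GPU, the same read/write/swap bookkeeping is applied to texture views, which is roughly what Bevy's `ViewTarget` manages internally for its main textures.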
I saw your question on Discord: https://discord.com/channels/691052431525675048/1331533973133725696/1331813878333444240.
I haven't fully solved the problem during our discussions on Discord.
Sorry for my lack of clarity earlier. What I meant was: I’m trying to understand how your code connects to your goal. Specifically, how does exposing the pipeline cache ID field of ViewUpscalingPipeline help enable multiple post-processing steps on a single texture and reduce the stuttering when using many cameras? Maybe a small demo could help illustrate the effect? From the image you showed, I think I now get what you're aiming for. The real goal is not applying multiple post-processing steps to regions of a single texture, but rather rendering multiple textures in real-time and using them as sprites, right? Let me know if I misunderstood. If that’s the case, it might be helpful to state your goal more explicitly so others can better follow what you're trying to achieve.
First, a single shape needs to be constructed from multiple Mesh2d instances (each made from parsed vertex data), because a single visual element may consist of multiple materials.
So the goal of the code is to implement multiple post-processing steps?
However, I'm currently unable to apply this process to only a single texture; I don't need to apply any post-processing to the entire main render target. Would it be possible to achieve this using the stencil buffer?
I'm not sure — maybe. I don’t have much experience with this kind of thing.
Each shape requires a different processing pipeline.
Objective

Make `ViewTarget` public, allowing it to be used without being bound to a `Camera` entity. This enables multiple post-processing steps on a single texture, which is particularly useful when working with Render to Texture. When generating a large number of `Camera` entities, the FPS can drop significantly, causing lag.

Solution

Expose a `new` function for `ViewTarget` and `MainTargetTextures` while ensuring the internal fields' accessibility remains intact, and expose the pipeline cache ID field of `ViewUpscalingPipeline`.

Testing
Showcase
Here is an example showcasing the result:

In the animation, the glowing effect utilizes multiple post-processing steps on a single texture, with different parameters applied to each glowing part.
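The "different parameters applied to each glowing part" idea can be modeled on the CPU as running the same pass with different settings, feeding each result into the next. This is only a rough sketch; the `blur1d` helper and its averaging weights are illustrative assumptions, not the shader used in this PR.

```rust
// Illustrative 1-D box blur standing in for a glow pass; not the PR's shader.
fn blur1d(input: &[f32], radius: usize) -> Vec<f32> {
    let n = input.len() as isize;
    let r = radius as isize;
    (0..n)
        .map(|i| {
            // Average the neighborhood [i - r, i + r], clamped to the buffer.
            let mut sum = 0.0;
            let mut count = 0.0;
            for d in -r..=r {
                let j = i + d;
                if j >= 0 && j < n {
                    sum += input[j as usize];
                    count += 1.0;
                }
            }
            sum / count
        })
        .collect()
}

fn main() {
    // A single bright pixel on a dark background.
    let image = vec![0.0, 0.0, 1.0, 0.0, 0.0];
    // Same pass, different parameters: a tight glow, then a wider one.
    let tight = blur1d(&image, 1);
    let wide = blur1d(&tight, 2);
    println!("tight glow: {:?}", tight);
    println!("wide glow:  {:?}", wide);
}
```

Chaining such passes on one buffer, rather than spawning one `Camera` per effect, is the workflow the PR description aims to enable.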