
Expose a new function for ViewTarget and MainTargetTextures #18957


Open
wants to merge 1 commit into main

Conversation

aojiaoxiaolinlin

Objective

  • This PR makes the creation of ViewTarget public, allowing it to be used without being bound to a Camera entity. This enables multiple post-processing steps on a single texture, which is particularly useful for Render to Texture workflows. The alternative, spawning a large number of Camera entities, can drop the FPS significantly and cause lag.

Solution

  • Added a new function for ViewTarget and for MainTargetTextures, while keeping the internal fields' accessibility intact (see the usage sketch below).
  • Exposed the pipeline cache ID field of ViewUpscalingPipeline.
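
As a rough sketch of the setup this unlocks: the texture creation below uses existing Bevy/wgpu API, while the final hand-off is left as a comment because the exact new constructor signatures are the ones in this PR's diff, not the names guessed here:

```rust
use bevy::render::{
    render_resource::{
        Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
    },
    renderer::RenderDevice,
};

/// Create the two off-screen color textures a camera-less ViewTarget
/// would ping-pong between for post-processing.
fn create_ab_textures(render_device: &RenderDevice, width: u32, height: u32) {
    let descriptor = TextureDescriptor {
        label: Some("offscreen_view_target"),
        size: Extent3d { width, height, depth_or_array_layers: 1 },
        mip_level_count: 1,
        sample_count: 1,
        dimension: TextureDimension::D2,
        // An HDR-friendly intermediate format, similar to what ViewTarget uses.
        format: TextureFormat::Rgba16Float,
        usage: TextureUsages::TEXTURE_BINDING | TextureUsages::RENDER_ATTACHMENT,
        view_formats: &[],
    };
    let texture_a = render_device.create_texture(&descriptor);
    let texture_b = render_device.create_texture(&descriptor);
    // Hypothetical next step: hand texture_a/texture_b to the newly
    // public MainTargetTextures / ViewTarget constructors from this PR.
    let _ = (texture_a, texture_b);
}
```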

Testing

  • The changes are minimal, and the CI should catch any issues. There were no errors when running locally.

Showcase

Here is an example showcasing the result:
[Showcase animation]

In the animation, the glowing effect utilizes multiple post-processing steps on a single texture, with different parameters applied to each glowing part.

Commit: Expose a new function for ViewTarget and MainTargetTextures to allow rendering without a Camera. This is useful when performing multiple post-processing steps on a single texture!
Contributor

Welcome, new contributor!

Please make sure you've read our contributing guide and we look forward to reviewing your pull request shortly ✨

@JMS55
Contributor

JMS55 commented Apr 27, 2025

I'm not sure that I get the point of this. What's the difference between this, and implementing double buffering between two textures yourself?

@aojiaoxiaolinlin
Author

I'm not sure that I get the point of this. What's the difference between this, and implementing double buffering between two textures yourself?

Hmm... there probably wouldn't be much difference.
If I were to implement it myself, I'd basically have to copy ViewTarget anyway, haha.
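
For reference, the hand-rolled version being discussed would be roughly the following sketch. It mirrors what ViewTarget keeps internally (two textures plus an atomic flip index), assuming Bevy's TextureView type:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

use bevy::render::render_resource::TextureView;

/// A hand-rolled stand-in for ViewTarget's internals: two textures and
/// an atomic index that flips between them on every post-process pass.
struct PingPongTarget {
    a: TextureView,
    b: TextureView,
    /// 0 => `a` is the current main texture, 1 => `b` is.
    current: AtomicUsize,
}

impl PingPongTarget {
    /// Flip the buffers and return (source, destination), in the same
    /// spirit as ViewTarget::post_process_write().
    fn post_process_write(&self) -> (&TextureView, &TextureView) {
        let old = self.current.fetch_xor(1, Ordering::SeqCst);
        if old == 0 {
            (&self.a, &self.b)
        } else {
            (&self.b, &self.a)
        }
    }
}
```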

@alice-i-cecile added labels on Apr 29, 2025: A-Rendering (Drawing game state to the screen), C-Usability (A targeted quality-of-life change that makes Bevy easier to use), D-Straightforward (Simple bug fixes and API improvements, docs, test and examples), S-Needs-Review (Needs reviewer attention (from anyone!) to move forward)
@Touma-Kazusa2

I saw your question on Discord: https://discord.com/channels/691052431525675048/1331533973133725696/1331813878333444240.
For multiple post-processing effects, you just need to add nodes like in the Bevy example.
As for applying post-processing to specific objects, I looked it up online — maybe you can use the stencil buffer for that?

@aojiaoxiaolinlin
Author

I saw your question on Discord: https://discord.com/channels/691052431525675048/1331533973133725696/1331813878333444240.
For multiple post-processing effects, you just need to add nodes like in the Bevy example.

The discussions on Discord didn't fully solve the problem.
The current implementation of Flash filter rendering is based on how it's done in the Ruffle project, as shown in the RenderDoc captures below.
If it's also possible to achieve this using the Stencil Buffer, I'd greatly appreciate any insights or guidance you could share.
[RenderDoc captures: render-01 through render-04]

@Touma-Kazusa2

Sorry for my lack of clarity earlier. What I meant was: I’m trying to understand how your code connects to your goal. Specifically, how does exposing the pipeline cache ID field of ViewUpscalingPipeline help enable multiple post-processing steps on a single texture and reduce the stuttering when using many cameras?

Maybe a small demo could help illustrate the effect?

From the image you showed, I think I now get what you're aiming for. The real goal is not applying multiple post-processing steps to regions of a single texture, but rather rendering multiple textures in real-time and using them as sprites, right? Let me know if I misunderstood.

If that’s the case, it might be helpful to state your goal more explicitly so others can better follow what you're trying to achieve.

@aojiaoxiaolinlin
Author

First, a single shape needs to be constructed from multiple Mesh2d instances (each made from parsed vertex data), because a single visual element may consist of multiple materials.
Then, the required texture size for applying filter effects to that shape is calculated.
Since I need to perform multiple post-processing steps on the shape (using intermediate textures and not relying on a camera), I use ViewTarget to enable double-buffered rendering for that purpose.
Finally, the resulting texture of this structure is composited onto the main render texture.
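
As an illustration of the second step, computing the filter texture size amounts to padding the shape's bounds so blur-style filters have room to bleed past the geometry. The helper name and parameters here are illustrative, not from my actual code:

```rust
use bevy::math::{Rect, Vec2};

/// Illustrative only: grow a shape's bounds so blur-style filters have
/// room to bleed past the original geometry. A blur of radius r needs
/// roughly r extra pixels on every side (Flash has separate X/Y radii).
fn filter_texture_size(shape_bounds: Rect, blur_radius: Vec2) -> Vec2 {
    (shape_bounds.size() + 2.0 * blur_radius).ceil()
}
```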

@Touma-Kazusa2

So the goal of the code is to implement multiple post-processing steps?
If that's the case, like I mentioned in my initial comment, you can simply add more nodes — view_target.post_process_write() already takes care of double buffering for you.
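
For reference, a trimmed sketch in the style of Bevy's post_processing example (pipeline and bind-group setup omitted; exact signatures vary slightly across Bevy versions):

```rust
use bevy::{
    ecs::query::QueryItem,
    prelude::*,
    render::{
        render_graph::{NodeRunError, RenderGraphContext, ViewNode},
        renderer::RenderContext,
        view::ViewTarget,
    },
};

#[derive(Default)]
struct MyEffectNode;

impl ViewNode for MyEffectNode {
    type ViewQuery = &'static ViewTarget;

    fn run(
        &self,
        _graph: &mut RenderGraphContext,
        _render_context: &mut RenderContext,
        view_target: QueryItem<Self::ViewQuery>,
        _world: &World,
    ) -> Result<(), NodeRunError> {
        // Each call flips the internal A/B textures: `source` is what
        // has been rendered so far, `destination` is where this pass
        // writes. A later node's call flips them back, so chaining
        // nodes gives double buffering with no extra bookkeeping.
        let post_process = view_target.post_process_write();
        let _source = post_process.source;
        let _destination = post_process.destination;
        // ...bind `_source`, draw a fullscreen pass into `_destination`...
        Ok(())
    }
}
```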

@aojiaoxiaolinlin
Author

However, I'm currently unable to apply this process to just a single texture; I don't need to apply any post-processing to the entire main render target. Would it be possible to achieve this using the Stencil Buffer?
The goal is to render many graphical effects within a single frame.

@Touma-Kazusa2

Touma-Kazusa2 commented May 29, 2025

I'm not sure — maybe. I don’t have much experience with this kind of thing.
Also, are the different positions possibly using the same post-processing steps but with different parameters?
If so, that sounds like it could get a bit complicated...

@aojiaoxiaolinlin
Author

Each shape requires a different processing pipeline.
For example, the glow filter in Flash involves first rendering a blur effect, then compositing the result back onto the texture.
In contrast, a color filter only needs to compute and apply a corrected color.
Even when the processing steps are the same, the parameters can vary.
What’s more, in Flash, these filter parameters can change every frame, making the system highly dynamic.
Currently, I'm using a custom render graph to handle this functionality.
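
To make that concrete, the per-shape data could look something like the sketch below (the names are illustrative, not from this PR or from Ruffle):

```rust
/// Illustrative data model mirroring Flash's filters; parameters may
/// change on every frame.
enum FlashFilter {
    /// Glow: blur the shape, tint the result, then composite the
    /// original shape back on top.
    Glow { blur_radius: f32, color: [f32; 4], strength: f32 },
    /// Color filter: a single pass that remaps each channel.
    ColorTransform { multiply: [f32; 4], add: [f32; 4] },
}

/// Each shape carries its own filter chain, so two shapes can share
/// the same steps with different parameters, or use different steps.
struct ShapeFilters {
    filters: Vec<FlashFilter>,
}
```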
