Speed up project selector matching #1381
Merged
When a request comes in to the language server for a document, we have to determine which project that document belongs to. This is because your workspace can have many Tailwind config files, potentially across different Tailwind CSS versions.
The process for this happens in two stages:

1. Each project builds a list of document selectors with priorities: glob matches from the `content` array in v3 or detected sources in v4, approximate path matches based on folder, etc…
2. The document is matched against each project's selectors. The lowest priority match wins. If multiple projects match a document at the same priority then the first match wins. If a project contains a selector that discards a particular document then that project is removed from consideration before continuing.
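
For context, here's a rough sketch of how that priority-based resolution behaves. The type names and priority values are illustrative only, not the language server's actual ones:

```ts
// Illustrative only; the real types and priority values in the language server differ.
enum SelectorPriority {
  ConfigFile = 0, // an exact path tied to the project, e.g. its config file
  ContentGlob = 1, // a glob from the `content` array (v3) or detected sources (v4)
  Directory = 2, // an approximate match based on the project folder
}

interface Project {
  name: string
  // Returns the priority of the best-matching selector for this document,
  // or null if no selector matches (or a selector discards the document).
  matchPriority(docPath: string): SelectorPriority | null
}

// Resolve which project owns a document: the lowest priority value wins,
// and ties are broken by project order (first match wins).
function projectForDocument(projects: Project[], docPath: string): Project | null {
  let best: { project: Project; priority: SelectorPriority } | null = null

  for (let project of projects) {
    let priority = project.matchPriority(docPath)
    if (priority === null) continue

    if (best === null || priority < best.priority) {
      best = { project, priority }
    }
  }

  return best?.project ?? null
}
```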
Now for the problem: we were re-compiling globs over and over and over again. Normally, for a small project, this isn't an issue, but automatic content detection means that a large number of paths can be returned. And if you have lots of projects? Welp… this adds up. What's more, when VS Code needed to compute document symbols it made requests to our language server (…I don't know why, actually), which incurred a perf hit from these globs being repeatedly recompiled. And since this is effectively a single-threaded operation, it delays any waiting promises.
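
To make the cost concrete, here's a minimal sketch of caching compiled glob matchers instead of recompiling them on every check. picomatch stands in for whatever matcher library the server actually uses, and the cache shape is an assumption, not the PR's exact code:

```ts
import picomatch from 'picomatch'

// Compiling a glob into a matcher is the expensive part, so do it once per
// pattern and reuse the result instead of recompiling inside the hot path.
// (picomatch and this cache shape are assumptions, not the PR's exact code.)
const matcherCache = new Map<string, (path: string) => boolean>()

function matcherFor(pattern: string): (path: string) => boolean {
  let matcher = matcherCache.get(pattern)
  if (!matcher) {
    let isMatch = picomatch(pattern)
    matcher = (p) => isMatch(p)
    matcherCache.set(pattern, matcher)
  }
  return matcher
}

// Before: something like `patterns.some((p) => picomatch(p)(path))` recompiles
// every glob on every call. With automatic content detection returning lots of
// paths and many projects, that work repeats thousands of times per request.
function pathMatchesAny(path: string, patterns: string[]): boolean {
  return patterns.some((pattern) => matcherFor(pattern)(path))
}
```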
So this PR does two things:
In a real-world project this lowered the time to show suggestions from Emmet from 4 SECONDS (omg) to about 15–20ms on an M3 Max machine.
Aside: there was also sometimes a delay before completion requests even reached us, due to the single-threaded nature of JS. That could also end up being multiple seconds, so in reality the time could range from 4–8s depending on when you made the request.