Issue Description
During busy periods (around 100 concurrent users), our Heroku application started to slow down dramatically, reaching 1-2 GB of RAM on the server (an extreme amount for our small scale). Under heavy query load it would either become very slow or eventually crash the Heroku dyno with an H10 error. When we analyzed the memory heap, it was apparent that, instead of allocating memory and returning to a normal baseline, memory grew linearly with each query. Something was not being garbage collected, so memory accumulated every time we hit an API path. This was consistent and easily reproducible.

We found other GitHub issues describing how certain combinations of parse-server and Node versions leak memory because a per-query cache is never garbage collected; Parse Server essentially creates a new cache entry for each individual query. One fix we found (also mentioned in those issues) was to switch our cacheAdapter to Redis. This moves all the caching logic out of the Node process, and Redis does not exhibit this memory issue. Others reported specific parse-server and Node version combinations where the leak did not occur. We upgraded to the latest Node version (12.16.0, recommended for most users on the Node website) and the latest parse-server version (3.10.0), but the memory issue persisted on both.

So our current fix is to use Redis. But this seems like a major issue: the latest parse-server version (3.10.0) on the latest Node version (12.16.0) leaks memory on every query. Something like this should work right out of the box, and I'm wondering if others are having this issue too.
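For reference, the Redis workaround is just a one-line change to the server config. This is a minimal sketch of a Parse Server initialization using the `RedisCacheAdapter` that parse-server exports; the environment variable names are placeholders from our setup, not anything parse-server requires.

```javascript
const { ParseServer, RedisCacheAdapter } = require('parse-server');

const api = new ParseServer({
  databaseURI: process.env.MONGODB_URI,
  appId: process.env.APP_ID,
  masterKey: process.env.MASTER_KEY,
  serverURL: process.env.SERVER_URL,
  // Moving the query cache out of the Node process and into Redis
  // sidesteps the in-process cache that never gets collected.
  cacheAdapter: new RedisCacheAdapter({ url: process.env.REDIS_URL }),
});
```

With this in place, the per-query cache lives in Redis (with its own eviction) instead of growing inside the Node heap.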
Steps to reproduce
- Set your parse-server version to 3.10.0
- Set your Node version to 12.16.0
- Run a local Heroku server (`heroku local`)
- Make a simple cloud call that runs a very simple query
- Print a heap dump before and after the query is run
- Repeat the cloud call about 20 times and watch how the memory changes
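The before/after heap printout in the steps above can be sketched with Node's built-in `process.memoryUsage()`; the function name and labels here are illustrative, not from our actual code.

```javascript
// Log the current heap usage in MB and return it for comparison.
function logHeap(label) {
  const heapMB = process.memoryUsage().heapUsed / 1024 / 1024;
  console.log(`${label}: ${heapMB.toFixed(1)} MB`);
  return heapMB;
}

const before = logHeap('before query');
// ... run the suspect cloud call / query here ...
const after = logHeap('after query');
```

Running this around each repeated cloud call is enough to see whether the heap returns to baseline or keeps climbing.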
Expected Results
You would expect the heap dump to start at some baseline (~30 MB), jump up a bit while a query runs (maybe ~35 MB), and then return to baseline (~30 MB) once the API call finishes, because Node should garbage collect. That way, each query uses some memory temporarily and the process returns to normal until the next query.
Actual Outcome
It's not garbage collecting. Each run of the query moves the heap from 30 MB to 35 MB, then 40 MB, then 45 MB. It keeps growing linearly until it reaches around 2 GB, at which point everything crashes because that exceeds Heroku's default RAM limit. My theory is that the server is holding onto each query's cache and never garbage collecting it.
Environment Setup
Server
- parse-server version (Be specific! Don't say 'latest'.) : 3.10.0
- Operating System: MacOS
- Hardware: Macbook 16 In Pro
- Localhost or remote server? (AWS, Heroku, Azure, Digital Ocean, etc): Local Heroku server (the memory also builds up on the remote Heroku server, which is harder to debug)
Database
- MongoDB version: 3.6.12
- Storage engine: Not sure
- Hardware: Not sure
- Localhost or remote server? (AWS, mLab, ObjectRocket, Digital Ocean, etc): mLab