
# Infrastructure Performance Testing and Expectations


This document compares theoretical performance expectations against real-life performance testing results.

Based on basic observation, we expect a worker to complete a scan of a live SSH service in roughly 3 seconds. So, naively ignoring all other components, we might expect service throughput to look something like this (a quick sketch of the arithmetic follows the table):

| # of Workers | Requests per second | Requests per minute |
|--------------|---------------------|---------------------|
| 1            | 0.33                | 20                  |
| 2            | 0.67                | 40                  |
| 3            | 0.99                | 60                  |
| 4            | 1.33                | 80                  |
| 5            | 1.67                | 100                 |
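For reference, here is a minimal sketch (Python, not part of the project code) of the arithmetic behind the table, assuming each worker runs one scan at a time with a fixed per-scan duration; small differences from the table are rounding.

```python
# Naive throughput model: each worker runs one scan at a time,
# and every scan takes the same fixed number of seconds.

def throughput(scan_seconds, workers):
    """Return (requests per second, requests per minute)."""
    per_second = workers / scan_seconds
    return per_second, per_second * 60

# Theoretical case: ~3 seconds per SSH scan.
for workers in range(1, 6):
    per_sec, per_min = throughput(3.0, workers)
    print(f"{workers} workers: {per_sec:.2f} req/s, {per_min:.0f} req/min")
```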

Next, we gathered data from real targets and established an empirical average per-worker scan time of 2.11 seconds:

| # of Workers | Requests per second | Requests per minute |
|--------------|---------------------|---------------------|
| 1            | 0.47                | 28.46               |
| 2            | 0.95                | 56.91               |
| 3            | 1.42                | 85.37               |
| 4            | 1.90                | 113.82              |
| 5            | 2.37                | 142.28              |
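The same arithmetic with the measured 2.11-second average approximately reproduces this table (again, small differences are rounding):

```python
# Empirical case: measured average of ~2.11 seconds per scan.
SCAN_SECONDS = 2.11

for workers in range(1, 6):
    per_sec = workers / SCAN_SECONDS
    print(f"{workers} workers: {per_sec:.2f} req/s, {per_sec * 60:.2f} req/min")
```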

After this, we will run actual scans against real-life targets and monitor end-to-end performance, including the non-worker functions, so we can see what our administrative overhead is for tasking, queuing, and so on. This will also account for operations that happen in parallel, such as inbound queuing, so there may be gains (or minimal time loss) from positive or negative queue pressure.

| # of Workers | Requests per second | Requests per minute |
|--------------|---------------------|---------------------|
| 1            | ?                   | ?                   |
| 2            | ?                   | ?                   |
| 3            | ?                   | ?                   |
| 4            | ?                   | ?                   |
| 5            | ?                   | ?                   |
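Once end-to-end numbers exist, one simple way to fold the measured overhead back into the model is to add it to the per-scan time. This is a sketch only: `OVERHEAD_SECONDS` is a hypothetical placeholder, not a measurement, and it assumes the overhead is paid serially per scan (parallel inbound queuing would make the real cost smaller).

```python
# Effective throughput once per-scan admin overhead (tasking, queuing,
# result handling) is measured.
SCAN_SECONDS = 2.11
OVERHEAD_SECONDS = 0.5  # hypothetical; replace with the measured figure

for workers in range(1, 6):
    per_min = workers * 60 / (SCAN_SECONDS + OVERHEAD_SECONDS)
    print(f"{workers} workers: {per_min:.1f} req/min")
```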

In talking with April, the Observatory's numbers are in the hundreds of scans per minute. We likely won't be enabled by default, so our scale will be much smaller, but to plan for the worst case let's shoot for 500/min as our maximum threshold; for phase 1 we can target 100/min.
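As a rough capacity check, the 2.11-second empirical average suggests how many workers those targets would need, ignoring admin overhead (a back-of-envelope sketch, not a measured result):

```python
import math

# Workers needed to hit a target rate, assuming the 2.11 s empirical
# per-scan average and no admin overhead.
SCAN_SECONDS = 2.11

for target_per_min in (100, 500):
    workers_needed = math.ceil(target_per_min * SCAN_SECONDS / 60)
    print(f"{target_per_min}/min needs ~{workers_needed} workers")
```

That works out to roughly 4 workers for 100/min and 18 workers for 500/min, before accounting for the overhead measured above.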
