## 🚀 Feature Proposal
It should be possible to configure the client with a maximum number of open sockets and a maximum number of idle sockets across all nodes. In addition, the client should expose diagnostic information about all open/idle sockets for observability.
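As a rough illustration, the configuration and diagnostics might look something like the sketch below. The option names (`maxTotalSockets`, `maxIdleSockets`) and the `SocketDiagnostics` shape are invented for this proposal and do not exist in the client today:

```ts
import { Client } from '@elastic/elasticsearch'

// Hypothetical options sketching the proposal; they do not exist in
// the current client API, hence the cast below.
const client = new Client({
  nodes: ['http://es1:9200', 'http://es2:9200'],
  maxTotalSockets: 800, // invented: cap on open sockets across ALL nodes
  maxIdleSockets: 100,  // invented: cap on idle sockets across ALL nodes
} as any)

// An invented diagnostics shape the client could expose per node
// for observability of open/idle sockets.
interface SocketDiagnostics {
  node: string
  openSockets: number
  idleSockets: number
}
```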
## Motivation
We want more control over the number of open sockets Kibana creates when using elasticsearch-js.
It seems that, by default, elasticsearch-js creates a separate agent for each node (see the sketch after this list):
- The client passes the configured nodes to the connection pool: https://github.com/elastic/elasticsearch-js/blob/8.2/src/client.ts#L231
- The pool creates a connection for each node: https://github.com/elastic/elastic-transport-js/blob/8.2/src/pool/BaseConnectionPool.ts#L154
- Each connection creates its own http agent or undici pool instance: https://github.com/elastic/elastic-transport-js/blob/8.2/src/connection/HttpConnection.ts#L83 and https://github.com/elastic/elastic-transport-js/blob/8.2/src/connection/UndiciConnection.ts#L113
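A minimal sketch of the resulting per-node behavior, assuming `HttpConnection` and placeholder node URLs: the `agent` options are applied to each connection separately, so the caps multiply with the number of nodes.

```ts
import { Client, HttpConnection } from '@elastic/elasticsearch'

// Three nodes -> three connections, each building its own agent, so
// maxSockets is a per-node cap: this client may open up to
// 3 * 256 = 768 sockets in total.
const client = new Client({
  nodes: ['http://es1:9200', 'http://es2:9200', 'http://es3:9200'],
  Connection: HttpConnection, // the http.Agent-based connection
  agent: { keepAlive: true, maxSockets: 256, maxFreeSockets: 32 },
})
```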
While it's possible to specify the option `agent: () => http.Agent`, which can return a single agent instance for use with `HttpConnection` (a sketch of this workaround follows), it doesn't seem possible to use a single undici pool for all nodes. As a result, there is no way to configure the maximum number of open and idle sockets across all connections/nodes that is compatible with both `HttpConnection` and `UndiciConnection`.
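For example, a sketch of that `agent` callback workaround with a single shared `http.Agent` (placeholder URLs; as noted above, this only helps when using `HttpConnection`):

```ts
import http from 'node:http'
import { Client, HttpConnection } from '@elastic/elasticsearch'

// One agent shared by every node, so maxSockets/maxFreeSockets now
// bound open and idle sockets across the whole client.
const sharedAgent = new http.Agent({
  keepAlive: true,
  maxSockets: 512,
  maxFreeSockets: 64,
})

const client = new Client({
  nodes: ['http://es1:9200', 'http://es2:9200'],
  Connection: HttpConnection,
  agent: () => sharedAgent, // same instance returned for every connection
})
```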
We seem to be doing round-robin load balancing across all nodes: https://github.com/elastic/elastic-transport-js/blob/81316b1e0d01fadf7ada678bb440af56c6f74f4d/src/Transport.ts#L236

But because the nodes don't share a connection pool, this seems to diminish the value of the `WeightedPool`: if a node goes down, the client will still choose that node in round-robin fashion, sending 1/N of requests to the dead node. `WeightedPool` ignores the `selector` parameter to `getConnection`.
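A toy sketch of that failure mode, with invented node names and a bare round-robin selector standing in for the pool's `getConnection`:

```ts
// Toy model, not client code: round-robin keeps handing out the dead
// node, so roughly 1/N of requests are sent to it.
const nodes = ['es1', 'es2', 'es3']
const alive = new Set(['es1', 'es3']) // es2 is down

let next = 0
function getConnection(): string {
  return nodes[next++ % nodes.length]
}

let hitDeadNode = 0
for (let i = 0; i < 9; i++) {
  if (!alive.has(getConnection())) hitDeadNode++
}
console.log(`${hitDeadNode}/9 requests went to the dead node`) // 3/9
```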