Commit cc02dcd: Merge pull request #2 from nginxinc/akker (Akker)
2 parents 8237101 + d6ef699

6 files changed: +112 / -34 lines
1 binary file (3.96 MB) not shown

docs/NginxKubernetesLoadbalancer.md (+42 / -25)

```diff
@@ -3,23 +3,24 @@
 <br/>

 - Build an Nginx Kubernetes Loadbalancer Controller for MVP
-- Provide a functional replacement for the "Loadbalancer Service Type" external to an On Premise K8s cluster.
+- Provide a functional replacement for the "Loadbalancer Service Type" external to an On Premises K8s cluster.
 - Chris Akker / Jan 2023 / Initial draft
+- Steve Wagner / Jan 2023 / Initial code

 <br/>

 ## Abstract:

-Create a new K8s Controller, that will monitor specified k8s Service Endpoints, and then send API calls to an external NginxPlus server to manage Nginx Upstream server blocks.
-This is will synchronize the K8s Service Endpoint list, with the Nginx LB server's Upstream block server list.
-The primary use case is for tracking the NodePort IP:Port definitions for the Nginx Ingress Controller's `nginx-ingress Service`.
-With the NginxPlus Server located external to the K8s cluster, this new controller LB function would provide an alternative TCP "Load Balancer Service" for On Premises k8s clusters, which do not have access to a Cloud providers "Service Type LoadBalancer".
+- Create a new K8s Controller that will monitor specified k8s Service Endpoints, then send API calls to an external NginxPlus server to manage Nginx Upstream server blocks.
+- This will synchronize the K8s Service Endpoint list with the Nginx LB server's Upstream block server list.
+- The primary use case is tracking the NodePort IP:Port definitions for the Nginx Ingress Controller's `nginx-ingress Service`.
+- With the NginxPlus Server located external to the K8s cluster, this new controller LB function would provide an alternative TCP "Load Balancer Service" for On Premises k8s clusters, which do not have access to a Cloud provider's "Service Type LoadBalancer".

 <br/>

 ## Solution Description:

-When running a k8s Cluster On Premise, there is no equivalent to a Cloud Provider's Loadbalancer Service Type. This solution and new software is the TCP load balancer functional replacement.
+When running a k8s Cluster On Premises, there is no equivalent to a Cloud Provider's `Loadbalancer` Service Type. This solution and new controller software is the TCP load balancer functional replacement.

 When using a Cloud Provider's Loadbalancer Service Type, it provides 3 basic functions for External access to the k8s pods/services running inside the cluster:
```
```diff
@@ -29,9 +30,13 @@ When using a Cloud Provider's Loadbalancer Service Type, it provides 3 basic fun

 This is often called "NLB", a term used in AWS for Network Load Balancer, but it functions nearly identically in all Public Cloud Provider networks. It is not actually a component of K8s; rather, it is a service provided by the Cloud Provider's SDN (Software Defined Network), but it is managed by the user with K8s Service Type LoadBalancer definitions/declarations.

-**This Solution uses NGINX to provide an alternative to #3, the TCP loadbalancing from PublicIP to k8s NodePort.**
+<br/>
+
+>**This Solution uses NGINX to provide an alternative to #3, the TCP loadbalancing from PublicIP to k8s NodePort.**

-Note: This solution is not for Cloud-based K8s clusters, only On-Premise K8s clusters.
+Note: This solution is not for Cloud-based K8s clusters, only On Premises K8s clusters.
+
+<br/>

 ## Reference Diagram:

```
```diff
@@ -43,7 +48,7 @@

 ## Business Case

-- Every On Premise Kubernetes cluster needs this Solution, for external clients to access pods/services running inside the cluster.
+- Every On Premises Kubernetes cluster needs this Solution, for external clients to access pods/services running inside the cluster.
 - Market opportunity is at least one NginxPlus license for every k8s cluster. Two licenses if you agree that High Availability is a requirement.
 - Exposing Pods and Services with NodePort requires the use of high-numbered TCP ports (greater than 30000 by default). Lower, well-known TCP port numbers less than 1024 are NOT allowed to bind to the k8s Nodes' IP address. This contradicts the ephemeral, dynamic nature of k8s itself, and mandates that all HTTP URLs contain port numbers unfamiliar to everyone.
 - There is a finite number of NodePorts available, as 30000-32767 is the default range, leaving ~2768 usable ports.
```
```diff
@@ -74,6 +79,7 @@ Why not Nginx OpenSource? Nginx Open Source does not have the API endpoint and
 - Nginx-lb-https - the Nginx LB Server Upstream block that represents the mapped Nginx Ingress Controller(s) `Host:NodePort` Endpoints for https
 - NodePort nginx-ingress Service - exposes the Nginx Ingress Controller(s) on Host:Port
 - Plus API - the standard Nginx Plus API service that is running on the Nginx LB Server
+- Nginx Plus Go Client - software that communicates with the Nginx LB Server
 - Upstream - the IP:Port list of servers that Nginx will Load Balance traffic to at Layer 4 TCP using the stream configuration

 <br/>
```
```diff
@@ -91,19 +97,19 @@ Preface - Define access parameters for NKL Controller to communicate with Nginx

 1. Initialization:
    - Define the name of the target Upstream Server Block
-   - "nginx-lb-http" or "nginx-lb-https" should be the default server block names, returns an error if this does not exist
-   - API query to NginxPlus LB server for current Upstream list
-   - API query to K8s apiserver of list of Ingress Controller Endpoints
+   - "nginx-lb-http" or "nginx-lb-https" should be the default server block names, returns an error if these do not exist
+   - Using the Nginx Plus Go Client library, make an API query to the NginxPlus LB server for the current Upstream list
+   - API query to K8s apiserver for the list of Ingress Controller Endpoints
    - Reconcile the two lists, making changes to Nginx Upstreams to match the Ingress Endpoints ( add / delete Upstreams as needed to converge the two lists )

 2. Runtime:
-   - Periodic check - API query for the list of Servers in the Upstream block, using the NginxPlus API ( query time TBD )
+   - Periodic check - API query for the list of Servers in the Upstream block, using the NginxPlus API ( query interval TBD )
    - IP:port definition
    - other possible metadata: status, connections, response_time, etc
    - Keep a copy of this list in memory, if state is required

-3. Modify Upstream server entries, based on K8s NodePort Service endpoint "Notification" changes
-   - Register the LB Controller with the K8s watcher Service, subscribe to Notifications for changes to the nginx-ingress Service Endpoints.
+3. Register the LB Controller with the K8s watcher Service, subscribe to Notifications for changes to the nginx-ingress Service Endpoints.
+   - Using the Nginx Plus Go Client libraries, modify Upstream server entries, based on K8s NodePort Service endpoint "Notification" changes
    - Add new Endpoint to Upstream Server list on k8s Notify
    - Remove deleted Endpoints from the Upstream list, using the Nginx Plus "Drain" function, leaving existing TCP connections to close gracefully on K8s Notify delete.
    - Create and Set Drain_wait timer on Draining Upstream servers
```
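The reconcile step described above (converging the Ingress Endpoint list and the Nginx Upstream server list) amounts to a set difference. A minimal sketch, assuming the lists are plain `IP:Port` strings; the function and variable names here are hypothetical, not taken from the NKL code:

```go
package main

import "fmt"

// reconcile compares the desired K8s Endpoint list (Node IP:NodePort) against
// the current Nginx Upstream server list, and returns the servers to add and
// the servers to remove (the latter being candidates for the Nginx Plus
// "drain" call so existing TCP connections close gracefully).
func reconcile(endpoints, upstreams []string) (toAdd, toRemove []string) {
	want := make(map[string]bool)
	for _, e := range endpoints {
		want[e] = true
	}
	have := make(map[string]bool)
	for _, u := range upstreams {
		have[u] = true
	}
	// In the desired list but not yet in the upstream block: add.
	for _, e := range endpoints {
		if !have[e] {
			toAdd = append(toAdd, e)
		}
	}
	// In the upstream block but no longer a live endpoint: drain/remove.
	for _, u := range upstreams {
		if !want[u] {
			toRemove = append(toRemove, u)
		}
	}
	return toAdd, toRemove
}

func main() {
	endpoints := []string{"10.1.1.8:31269", "10.1.1.10:31269"}
	upstreams := []string{"10.1.1.8:31269", "10.1.1.9:31269"}
	add, remove := reconcile(endpoints, upstreams)
	fmt.Println("add:", add)      // add: [10.1.1.10:31269]
	fmt.Println("drain:", remove) // drain: [10.1.1.9:31269]
}
```

The same diff can drive both the initialization pass (step 1) and the notification handler (step 3), so one code path covers both.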
```diff
@@ -118,15 +124,15 @@ Preface - Define access parameters for NKL Controller to communicate with Nginx
    - Calculate the difference in the lists, and create new Nginx API calls to update the Upstream list, adding or removing the changes needed to mirror the nginx-ingress Service Endpoints list
    - Log these changes

-6. Optional: Make Nginx API calls to update the entire Upstream list, regardless of what the existing list contains. *Not sure how NginxPlus responds when you try to add a duplicate server entry via the API - I believe it just fails with no effect to the existing server entry and established connections - needs to be tested*
+6. Optional: Make Nginx API calls to update the entire Upstream list, regardless of what the existing list contains. *Nginx will allow the addition of duplicate servers to the upstream block using the API, so at some point a process to "clean up and verify" the upstream list should be considered. It is possible that the Nginx-Plus-Go-Client already performs this function.*

 <br/>

-## PM/PD Suggestion - to build this new Controller, use the existing Nginx Ingress Controller framework/code, to create this new k8s Controller, leveraging the Enterprise class, supportable code Nginx already has on hand.
+## PM/PD Suggestion - to build this new Controller, use the existing Nginx Ingress Controller framework/code, to create this new k8s LB Controller, leveraging the Enterprise class, supportable code Nginx already has on hand. Or perhaps, add this Loadbalancer solution as a new Feature to the existing Ingress Controller ( NIC, after all, is already watching the nginx-ingress namespace and services ).

 <br/>

-## Example Nginx Plus API request for Upstream block changes
+## Example Nginx Plus API requests for Upstream block changes

 <br/>

```
```diff
@@ -211,6 +217,8 @@ Nginx API: http://nginx.org/en/docs/http/ngx_http_api_module.html

 Example: http://nginx.org/en/docs/http/ngx_http_api_module.html#example

+Nginx Plus Go Client: https://github.com/nginxinc/nginx-plus-go-client
+
 Nginx Upstream API examples: http://nginx.org/en/docs/http/ngx_http_api_module.html#stream_upstreams_stream_upstream_name_servers_stream_upstream_server_id

 <br/>
```
```diff
@@ -223,31 +231,40 @@ Nginx Upstream API examples: http://nginx.org/en/docs/http/ngx_http_api_module.
 # TCP Proxy and load balancing block
 # Nginx Kubernetes Loadbalancer
 # backup servers allow Nginx to start
+# State file used to preserve config across restarts
 #
 #### nginxlb.conf

 upstream nginx-lb-http {
-   zone nginx_lb_http 256k;
+   zone nginx-lb-http 256k;
    #placeholder
-   server 1.1.1.1:32080 backup;
+   #server 1.1.1.1:32080 backup;
+   state /var/lib/nginx/state/nginx-lb-http.state;
 }

 upstream nginx-lb-https {
-   zone nginx_lb_https 256k;
+   zone nginx-lb-https 256k;
    #placeholder
-   server 1.1.1.1:32443 backup;
+   #server 1.1.1.1:32443 backup;
+   state /var/lib/nginx/state/nginx-lb-https.state;
 }

 server {
    listen 80;
-   status_zone nginx_lb_http;
+   status_zone nginx-lb-http;
    proxy_pass nginx-lb-http;
 }

 server {
    listen 443;
-   status_zone nginx_lb_https;
+   status_zone nginx-lb-https;
    proxy_pass nginx-lb-https;
 }

-```
+
+#Sample Nginx State for Upstreams
+# configuration file /var/lib/nginx/state/nginx-lb-http.state:
+server 1.1.1.1:32080 backup down;
+
+# configuration file /var/lib/nginx/state/nginx-lb-https.state:
+server 1.1.1.1:30443 backup down;
```

docs/nginxlb.conf (+12 / -2)

```diff
@@ -3,19 +3,22 @@
 # TCP Proxy and load balancing block
 # Nginx Kubernetes Loadbalancer
 ### backup servers allow Nginx to start
+# State file used to preserve config across restarts
 #
 #### nginxlb.conf

 upstream nginx-lb-http {
    zone nginx-lb-http 256k;
    #placeholder
-   server 1.1.1.1:32080 backup;
+   #server 1.1.1.1:32080 backup;
+   state /var/lib/nginx/state/nginx-lb-http.state;
 }

 upstream nginx-lb-https {
    zone nginx-lb-https 256k;
    #placeholder
-   server 1.1.1.1:32443 backup;
+   #server 1.1.1.1:32443 backup;
+   state /var/lib/nginx/state/nginx-lb-https.state;
 }

 server {
@@ -30,3 +33,10 @@
    proxy_pass nginx-lb-https;
 }

+
+#Sample Nginx State for Upstreams
+# configuration file /var/lib/nginx/state/nginx-lb-http.state:
+server 1.1.1.1:32080 backup down;
+
+# configuration file /var/lib/nginx/state/nginx-lb-https.state:
+server 1.1.1.1:30443 backup down;
```

docs/nodeport-nkl.yaml (+24)

```diff
@@ -0,0 +1,24 @@
+# NKL Nodeport Service file
+# NodePort name must be in the format of
+# nkl-<upstream-block-name>
+# Chris Akker, Jan 2023
+#
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-ingress
+  namespace: nginx-ingress
+spec:
+  type: NodePort
+  ports:
+  - port: 80
+    targetPort: 80
+    protocol: TCP
+    name: nkl-nginx-lb-http
+  - port: 443
+    targetPort: 443
+    protocol: TCP
+    name: nkl-nginx-lb-https
+  selector:
+    app: nginx-ingress
+
```
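A controller consuming this Service would map each port name back to the Nginx upstream block it targets, per the `nkl-<upstream-block-name>` convention in the file header. A minimal sketch (the helper name is hypothetical, not from the NKL code):

```go
package main

import (
	"fmt"
	"strings"
)

// upstreamForPort maps a NodePort port name such as "nkl-nginx-lb-http" to
// the upstream block name "nginx-lb-http". It returns false for names that
// do not follow the nkl- convention, which the controller would skip.
func upstreamForPort(portName string) (string, bool) {
	const prefix = "nkl-"
	if !strings.HasPrefix(portName, prefix) || len(portName) == len(prefix) {
		return "", false
	}
	return strings.TrimPrefix(portName, prefix), true
}

func main() {
	for _, name := range []string{"nkl-nginx-lb-http", "nkl-nginx-lb-https", "http"} {
		up, ok := upstreamForPort(name)
		fmt.Println(name, "->", up, ok)
	}
	// Prints:
	// nkl-nginx-lb-http -> nginx-lb-http true
	// nkl-nginx-lb-https -> nginx-lb-https true
	// http ->  false
}
```

This also explains the difference from the plain `docs/nodeport.yaml` below, whose ports are named `http`/`https` and would not be picked up under this convention.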

docs/nodeport.yaml (+18)

```diff
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-ingress
+  namespace: nginx-ingress
+spec:
+  type: NodePort
+  ports:
+  - port: 80
+    targetPort: 80
+    protocol: TCP
+    name: http
+  - port: 443
+    targetPort: 443
+    protocol: TCP
+    name: https
+  selector:
+    app: nginx-ingress
```

docs/udf-loadtests.md (+16 / -7)

```diff
@@ -1,21 +1,25 @@
-## WRK load tests from Ubuntu Jumphost
+## Quick WRK load tests from Ubuntu Jumphost
 ## to Nginx LB server
-## and direct to each k8s nodeport
+## and direct to each k8s node
 ## using WRK in a container

 ### 10.1.1.4 is the Nginx LB Server's IP addr

+<br/>
+
 docker run --rm williamyeh/wrk -t4 -c50 -d2m -H 'Host: cafe.example.com' --timeout 2s https://10.1.1.4/coffee
 Running 2m test @ https://10.1.1.4/coffee
   4 threads and 50 connections
   Thread Stats   Avg      Stdev     Max   +/- Stdev
     Latency    19.73ms   11.26ms  172.76ms   81.04%
     Req/Sec   626.50    103.68     1.03k    75.60%
   299460 requests in 2.00m, 481.54MB read
-Requests/sec: 2493.52
+`Requests/sec: 2493.52`
 Transfer/sec: 4.01MB

-## To knode1
+<br/>
+
+## Direct to knode1

 ubuntu@k8-jumphost:~$ docker run --rm williamyeh/wrk -t4 -c50 -d2m -H 'Host: cafe.example.com' --timeout 2s https://10.1.1.8:31269/coffee
 Running 2m test @ https://10.1.1.8:31269/coffee
@@ -24,10 +28,12 @@ Running 2m test @ https://10.1.1.8:31269/coffee
     Latency    17.87ms   10.63ms  151.45ms   80.16%
     Req/Sec   698.98    113.22     1.05k    75.67%
   334080 requests in 2.00m, 537.22MB read
-Requests/sec: 2782.35
+`Requests/sec: 2782.35`
 Transfer/sec: 4.47MB

-## t0 knode2
+<br/>
+
+## Direct to knode2

 ubuntu@k8-jumphost:~$ docker run --rm williamyeh/wrk -t4 -c50 -d2m -H 'Host: cafe.example.com' --timeout 2s https://10.1.1.10:31269/coffee
 Running 2m test @ https://10.1.1.10:31269/coffee
@@ -36,6 +42,9 @@ Running 2m test @ https://10.1.1.10:31269/coffee
     Latency    17.62ms   10.01ms  170.99ms   80.32%
     Req/Sec   703.96    115.07     1.09k    74.17%
   336484 requests in 2.00m, 541.41MB read
-Requests/sec: 2801.89
+`Requests/sec: 2801.89`
 Transfer/sec: 4.51MB

+<br/>
+
+Note: Slight decrease in Proxy vs Direct.
```
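Quantifying that note from the wrk numbers above: the proxied path serves roughly 10-11% fewer requests/sec than hitting a NodePort directly. A quick back-of-the-envelope check:

```go
package main

import "fmt"

// dropPercent is the relative drop in requests/sec when going through the
// Nginx LB proxy versus hitting a k8s NodePort directly.
func dropPercent(direct, proxied float64) float64 {
	return (direct - proxied) / direct * 100
}

func main() {
	// Requests/sec figures from the wrk runs above:
	// proxy 2493.52, knode1 2782.35, knode2 2801.89.
	fmt.Printf("vs knode1: %.1f%%\n", dropPercent(2782.35, 2493.52)) // vs knode1: 10.4%
	fmt.Printf("vs knode2: %.1f%%\n", dropPercent(2801.89, 2493.52)) // vs knode2: 11.0%
}
```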
