Commit d55554b

Akker authored (#41)

* added icons
* http upgrade
* add grafana dashboard
* updated diagram

1 parent eedb6de commit d55554b

22 files changed: +1861 −10 lines changed
docs/InstallationGuide.md (+55 −10)
@@ -6,10 +6,15 @@
<br/>

![Kubernetes](media/kubernetes-icon.png) | ![Nginx Plus](media/nginx-plus-icon.png) | ![NIC](media/nginx-ingress-icon.png)
--- | --- | ---

<br/>

## Pre-Requisites

- Working kubernetes cluster, with admin privileges
- Running nginx-ingress controller, either OSS or Plus. This install guide follows the instructions for deploying an Nginx Ingress Controller here: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
- Demo application; this install guide uses the Nginx Cafe example, found here: https://github.com/nginxinc/kubernetes-ingress/tree/main/examples/ingress-resources/complete-example
- A bare-metal Linux server or VM for the external LB Server, connected to a network external to the cluster. Two of these are required if High Availability is needed, as shown here.
- Nginx Plus software loaded on the LB Server(s). This install guide follows the instructions for installing Nginx Plus on CentOS 7, located here: https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-plus/
@@ -19,15 +24,29 @@

## Kubernetes Cluster

<br/>

![Kubernetes](media/kubernetes-icon.png)

<br/>

A standard K8s cluster is all that is required. There must be enough resources available to run the Nginx Ingress Controller and the Nginx Kubernetes Loadbalancer Controller. You must have administrative access to create the namespace, services, and deployments for this Solution. This Solution was tested on Kubernetes version 1.23; most recent versions (>= v1.21) should work just fine.

<br/>

## Nginx Ingress Controller

<br/>

![NIC](media/nginx-ingress-icon.png)

<br/>

The Nginx Ingress Controller in this Solution is the destination target for traffic (north-south) that is being sent to the cluster. The installation of the actual Ingress Controller is outside the scope of this installation guide, but we include the links to the docs for your reference. `The NIC installation must follow the documents exactly as written,` as this Solution refers to the `nginx-ingress` namespace and service objects. **Only the very last step is changed.**

NOTE: This Solution only works with nginx-ingress from Nginx. It will `not` work with the Community version of Ingress, called ingress-nginx.

If you are unsure which Ingress Controller you are running, check out the blog on Nginx.com:
https://www.nginx.com/blog/guide-to-choosing-ingress-controller-part-4-nginx-ingress-controller-options

@@ -62,23 +81,35 @@ spec:

```

Apply the updated nodeport-nkl.yaml Manifest:

```bash
kubectl apply -f nodeport-nkl.yaml
```

<br/>

## Demo Application

<br/>

![Cafe Dashboard](media/cafe-dashboard.png)

<br/>

This is not part of the actual Solution, but it is useful to have a well-known application running in the cluster as a known-good target for test commands. The example provided here is used by the Solution to demonstrate proper traffic flows, as well as application health check monitoring, to determine if the application is running in the cluster.

Note: If you choose a different Application to test with, `the Nginx health checks provided here will NOT work,` and will need to be modified to work correctly.

- Deploy the Nginx Cafe Demo application, found here:

https://github.com/nginxinc/kubernetes-ingress/tree/main/examples/ingress-resources/complete-example

- The Cafe Demo Docker image used is an upgraded one, with graphics and additional Request and Response variables added:

https://hub.docker.com/r/nginxinc/ingress-demo

You can use the `cafe.yaml` manifest included.

- Do not use the `cafe-ingress.yaml` file. Rather, use the `cafe-virtualserver.yaml` file that is provided here. It uses the Nginx CRDs to define a VirtualServer, and the related Routes and Redirects needed. The `redirects are required` for the LB Server's health checks to work correctly!

```yaml
@@ -143,21 +174,30 @@ spec:

## Linux VM or bare-metal LB Server

![Linux](media/linux-icon.png)

This is any standard Linux OS system, based on the Linux Distro and Technical Specs required for Nginx Plus, which can be found here: https://docs.nginx.com/nginx/technical-specs/

This Solution followed the "Installation of Nginx Plus on Centos/Redhat/Oracle" steps for installing Nginx Plus.

>NOTE: This Solution will not work with Nginx OpenSource, as OpenSource does not have the API that is used in this Solution. Installation on unsupported Distros is not recommended or supported.

<br/>

## Nginx Plus LB Server

<br/>

![Nginx Red Plus](media/nginxredplus.png)

<br/>

This is the configuration required for the LB Server, external to the cluster. It must be configured for the following:

- Move the Nginx default Welcome page from port 80 to port 8080. Port 80 will be used by the stream context instead of the http context.
- API write access enabled on port 9000.
- Plus Dashboard enabled, used for testing, monitoring, and visualization of the Solution working.
- The `Stream` context is enabled, for TCP loadbalancing.
- Stream context is configured.
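The API and dashboard bullets above correspond to a small `http`-context server block. A minimal sketch, assuming the guide's port 9000 and the default Nginx Plus dashboard location (adjust paths for your install):

```bash
# Sketch only: Plus API + live dashboard on port 9000 (http context).
# "write=on" is what lets the NKL Controller POST upstream changes.
server {
    listen 9000;

    location /api {
        api write=on;                 # read-write Plus API
    }

    location = /dashboard.html {
        root /usr/share/nginx/html;   # Plus live activity dashboard page
    }
}
```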

@@ -202,13 +242,13 @@ server {

![NGINX Dashboard](media/nginxlb-dashboard.png)

- Create a new folder for the stream config .conf files. /etc/nginx/stream is used in this Solution.

```bash
mkdir /etc/nginx/stream
```

- Create 2 new `STATE` files for Nginx. These are used to back up the Upstream configuration in case Nginx is restarted/reloaded.

Nginx State Files Required for Upstreams:
- state file /var/lib/nginx/state/nginx-lb-http.state
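A state file is plain text holding one `server` directive per line, in the same syntax as the body of an `upstream` block, which Nginx rewrites as the API changes the upstream list. A small sketch of reading one back (the sample contents are hypothetical):

```python
# Sketch: parse an Nginx upstream state file. Each line is a "server"
# directive exactly as it would appear inside an upstream{} block.
def parse_state_file(text: str) -> list[str]:
    servers = []
    for line in text.splitlines():
        line = line.strip().rstrip(";")
        if line.startswith("server "):
            # token after "server" is the address:port
            servers.append(line.split()[1])
    return servers

sample = """server 10.1.1.10:31317;
server 10.1.1.8:31317;
"""
print(parse_state_file(sample))  # ['10.1.1.10:31317', '10.1.1.8:31317']
```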
@@ -289,7 +329,7 @@ stream {

`Notice that it uses Ports 80 and 443.`

Place this file in the /etc/nginx/stream folder, and reload Nginx. Notice the match block and health check directives are for the cafe.example.com Demo application from Nginx.

```bash
# NginxK8sLB Stream configuration, for L4 load balancing
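#
# Sketch, hypothetical names and pattern: the cafe.example.com health check
# mentioned above pairs a stream-context `match` block with a
# `health_check ... match=` directive, along these lines:
#
#   match cafe {
#       send "GET / HTTP/1.0\r\nHost: cafe.example.com\r\n\r\n";
#       expect ~ "30";    # the VirtualServer redirects answer with a 30x
#   }
#   health_check interval=10 passes=2 fails=2 match=cafe;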
@@ -338,6 +378,11 @@ stream {

<br/>

![NIC](media/nginx-ingress-icon.png)

<br/>

This is the new Controller, which is configured to watch the k8s environment, the nginx-ingress Service object, and send API updates to the Nginx LB Server when there are changes. It only requires three things:

- New kubernetes namespace and RBAC
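As a thumbnail of what "send API updates" means: the Controller effectively maps each node address plus the Service's NodePort to one upstream `server` entry, and POSTs it to the Plus API. A hedged sketch that only builds the requests (the host, upstream name, and addresses are illustrative; nothing here contacts a real server):

```python
# Sketch: the Nginx Plus API requests an NKL-style controller would send
# when the nginx-ingress Service's NodePort endpoints change.
# Host, upstream name, and node IPs below are hypothetical examples.
def upstream_updates(lb_host: str, upstream: str, nodes: list[str], node_port: int):
    """Return (url, payload) pairs: one POST body per node:nodeport server."""
    base = f"http://{lb_host}:9000/api/8/stream/upstreams/{upstream}/servers"
    return [(base, {"server": f"{node}:{node_port}"}) for node in nodes]

updates = upstream_updates("10.1.1.4", "nginx-lb-https", ["10.1.1.10", "10.1.1.8"], 31317)
for url, payload in updates:
    print(url, payload)
```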

docs/cafe.yaml (+70, new file)
1+
apiVersion: apps/v1
2+
kind: Deployment
3+
metadata:
4+
name: coffee
5+
spec:
6+
replicas: 3
7+
selector:
8+
matchLabels:
9+
app: coffee
10+
template:
11+
metadata:
12+
labels:
13+
app: coffee
14+
spec:
15+
containers:
16+
- name: coffee
17+
image: nginxinc/ingress-demo # upgraded Cafe Docker image
18+
ports:
19+
- containerPort: 80
20+
---
21+
apiVersion: v1
22+
kind: Service
23+
metadata:
24+
name: coffee-svc
25+
spec:
26+
type: ClusterIP
27+
clusterIP: None
28+
ports:
29+
- port: 80
30+
targetPort: 80
31+
protocol: TCP
32+
name: http
33+
selector:
34+
app: coffee
35+
---
36+
apiVersion: apps/v1
37+
kind: Deployment
38+
metadata:
39+
name: tea
40+
spec:
41+
replicas: 3
42+
selector:
43+
matchLabels:
44+
app: tea
45+
template:
46+
metadata:
47+
labels:
48+
app: tea
49+
spec:
50+
containers:
51+
- name: tea
52+
image: nginxinc/ingress-demo
53+
ports:
54+
- containerPort: 80
55+
---
56+
apiVersion: v1
57+
kind: Service
58+
metadata:
59+
name: tea-svc
60+
labels:
61+
spec:
62+
type: ClusterIP
63+
clusterIP: None
64+
ports:
65+
- port: 80
66+
targetPort: 80
67+
protocol: TCP
68+
name: http
69+
selector:
70+
app: tea

docs/http/clusters.conf (+129, new file)
1+
# NginxK8sLB HTTP configuration, for L7 load balancing
2+
# Chris Akker, Apr 2023
3+
# HTTP Proxy and load balancing
4+
# 2 k8s Clusters LB with http split clients
5+
# Nginx Kubernetes Loadbalancer
6+
# Upstream servers managed by NKL Controller
7+
# Nginx Key Value store for Split ratios
8+
#
9+
#### clusters.conf
10+
11+
# Define Key Value store, backup state file, timeout, and enable sync
12+
13+
keyval_zone zone=split:1m state=/var/lib/nginx/state/split.keyval timeout=30d sync;
14+
keyval $host $split_level zone=split;
15+
16+
# Main Nginx Server Block for cafe.example.com, with TLS
17+
18+
server {
19+
listen 443 ssl;
20+
status_zone https://cafe.example.com;
21+
server_name cafe.example.com;
22+
23+
ssl_certificate /etc/ssl/nginx/default.crt;
24+
ssl_certificate_key /etc/ssl/nginx/default.key;
25+
26+
location / {
27+
status_zone /;
28+
29+
proxy_set_header Host $host;
30+
proxy_http_version 1.1;
31+
proxy_set_header "Connection" "";
32+
proxy_pass https://$upstream;
33+
34+
}
35+
36+
}
37+
38+
# Cluster1 upstreams
39+
40+
upstream cluster1-https {
41+
zone cluster1-https 256k;
42+
least_time last_byte;
43+
server 10.1.1.10:31317;
44+
server 10.1.1.8:31317;
45+
keepalive 16;
46+
#servers managed by NKL
47+
#state /var/lib/nginx/state/cluster1-https.state;
48+
}
49+
50+
# Cluster2 upstreams
51+
52+
upstream cluster2-https {
53+
zone cluster2-https 256k;
54+
least_time last_byte;
55+
server 10.1.1.11:31390;
56+
server 10.1.1.12:31390;
57+
#servers managed by NKL
58+
#state /var/lib/nginx/state/cluster2-https.state;
59+
}
60+
61+
# HTTP Split Clients Configuration for Cluster1/Cluster2 ratios
62+
63+
split_clients $request_id $split0 {
64+
* cluster2-https;
65+
}
66+
67+
split_clients $request_id $split1 {
68+
1.0% cluster1-https;
69+
* cluster2-https;
70+
}
71+
72+
split_clients $request_id $split5 {
73+
5.0% cluster1-https;
74+
* cluster2-https;
75+
}
76+
77+
split_clients $request_id $split10 {
78+
10% cluster1-https;
79+
* cluster2-https;
80+
}
81+
82+
split_clients $request_id $split25 {
83+
25% cluster1-https;
84+
* cluster2-https;
85+
}
86+
87+
split_clients $request_id $split50 {
88+
50% cluster1-https;
89+
* cluster2-https;
90+
}
91+
92+
split_clients $request_id $split75 {
93+
75% cluster1-https;
94+
* cluster2-https;
95+
}
96+
97+
split_clients $request_id $split90 {
98+
90% cluster1-https;
99+
* cluster2-https;
100+
}
101+
102+
split_clients $request_id $split95 {
103+
95% cluster1-https;
104+
* cluster2-https;
105+
}
106+
107+
split_clients $request_id $split99 {
108+
99% cluster1-https;
109+
* cluster2-https;
110+
}
111+
112+
split_clients $request_id $split100 {
113+
* cluster1-https;
114+
}
115+
116+
map $split_level $upstream {
117+
0 $split0;
118+
1.0 $split1;
119+
5.0 $split5;
120+
10 $split10;
121+
25 $split25;
122+
50 $split50;
123+
75 $split75;
124+
90 $split90;
125+
95 $split95;
126+
99 $split99;
127+
100 $split100;
128+
default $split50;
129+
}
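The `split_clients` blocks above hash `$request_id` and send a fixed percentage of requests to cluster1, with the remainder (`*`) going to cluster2; the `map` then selects which ratio applies from the keyval-driven `$split_level`, defaulting to 50/50. A small simulation of that selection logic, where CRC32 stands in for Nginx's internal hash (the ratios and upstream names come from the config above):

```python
import zlib

# Percent of traffic sent to cluster1 for each $split_level, mirroring the
# split_clients blocks; unknown levels fall through to the map's default.
SPLIT_LEVELS = {"0": 0.0, "1.0": 1.0, "5.0": 5.0, "10": 10.0, "25": 25.0,
                "50": 50.0, "75": 75.0, "90": 90.0, "95": 95.0,
                "99": 99.0, "100": 100.0}

def pick_upstream(request_id: str, split_level: str) -> str:
    """Simulate split_clients + map: hash the key into a [0, 100) bucket."""
    pct_cluster1 = SPLIT_LEVELS.get(split_level, 50.0)   # map ... default $split50
    bucket = zlib.crc32(request_id.encode()) % 10000 / 100.0
    return "cluster1-https" if bucket < pct_cluster1 else "cluster2-https"

# At the extremes the choice is deterministic regardless of the hash value:
print(pick_upstream("abc123", "100"))  # cluster1-https
print(pick_upstream("abc123", "0"))    # cluster2-https
```

Changing the keyval `$split_level` for a host therefore shifts the live traffic ratio without a reload, which is what the Solution's API-driven split control relies on.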
