Description
/kind bug
1. What kops version are you running? The command kops version will display this information.
Client version: 1.30.4 (git-v1.30.4)
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
Client Version: v1.29.10
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.11
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops update cluster --yes
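
For context, a minimal sketch of the upgrade sequence that led to this; the cluster name and state store below are placeholders, not my actual setup:

```sh
# Placeholder cluster name and state store
kops edit cluster --name my.cluster.example.com --state s3://my-kops-state-store
# set spec.kubernetesVersion to a 1.30.x release, then apply:
kops update cluster --name my.cluster.example.com --state s3://my-kops-state-store --yes
kops rolling-update cluster --name my.cluster.example.com --state s3://my-kops-state-store --yes
```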
5. What happened after the commands executed?
Access to the cluster went down because the health check on the AWS Classic Load Balancer in front of the API started failing after upgrading the cluster to 1.30.
The SSL health check worked fine up through 1.29; after upgrading the cluster to 1.30, this check no longer passes.
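
The failing check can be seen with the AWS CLI; the load balancer name below is a placeholder for the API CLB:

```sh
# Placeholder CLB name; shows the configured health check and the resulting instance health
aws elb describe-load-balancers --load-balancer-names api-my-cluster-example-com \
  --query 'LoadBalancerDescriptions[0].HealthCheck'
aws elb describe-instance-health --load-balancer-name api-my-cluster-example-com
```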
6. What did you expect to happen?
I expected the health check to keep working after the upgrade. Instead, I had to manually change the load balancer health check from SSL back to TCP to restore access to the cluster.
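
The manual workaround looked roughly like this; the load balancer name, port, and thresholds are placeholders:

```sh
# Placeholder name and values; switches the CLB health check from SSL back to TCP on the API port
aws elb configure-health-check \
  --load-balancer-name api-my-cluster-example-com \
  --health-check Target=TCP:443,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
```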
9. Anything else do we need to know?
I'm also not sure how I could switch the API load balancer to an NLB without losing the whole cluster. Is there a safe way to do that? A sketch of what I think the change would look like is below.
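
My understanding is that the switch would be a cluster spec change along these lines (cluster name and state store are placeholders), but I don't know whether applying it in place is safe:

```sh
# Placeholder cluster name and state store
kops edit cluster --name my.cluster.example.com --state s3://my-kops-state-store
# under spec:
#   api:
#     loadBalancer:
#       class: Network
kops update cluster --name my.cluster.example.com --state s3://my-kops-state-store --yes
```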