How to solve the "unexpected error storing fake SSL Cert" error with the nginx ingress controller in Kubernetes

Problem

When we set up Kubernetes with an ingress controller, we may encounter the following error in the nginx ingress controller logs:

I1102 02:10:41.508038       6 flags.go:204] Watching for Ingress class: nginx
W1102 02:10:41.509204       6 flags.go:249] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W1102 02:10:41.509323       6 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1102 02:10:41.509724       6 main.go:220] Creating API client for https://1.1.0.1:443
I1102 02:10:41.521473       6 main.go:264] Running in Kubernetes cluster version v1.18 (v1.18.3) - git (clean) commit aaa - platform linux/amd64
I1102 02:10:41.536133       6 main.go:94] Validated ingress-nginx/default-http-backend as the default backend.
F1102 02:10:41.969535       6 ssl.go:389] unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied

You can see that the error is:

unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied

The nginx ingress controller runs as a DaemonSet in Kubernetes, so there is one pod per host. The pods on some hosts are fine, while the pods on other hosts keep crashing with this error. Why?
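To see which hosts are affected, list the DaemonSet's pods and check their status, then read the logs of a failing one. The `app=ingress-nginx` label comes from the DaemonSet selector shown later in this post; the pod name in the second command is a placeholder:

```shell
# List the ingress controller pods with the node each one runs on;
# failing pods will show CrashLoopBackOff or Error in the STATUS column.
kubectl get pods -n ingress-nginx -l app=ingress-nginx -o wide

# Inspect the logs of a failing pod (replace the placeholder pod name)
kubectl logs nginx-ingress-controller-xxxxx -n ingress-nginx
```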

Environment

  • Rancher 2.4

Debug

Let's open a shell in one of the healthy ingress controller containers (`k` is an alias for `kubectl`):

k exec -it nginx-ingress-controller-dv657 -n ingress-nginx -- sh

And browse the files of the ingress controller:

/etc/nginx $ ls -l /etc/
drwxr-xr-x    1 www-data www-data      4096 May 12 17:18 ingress-controller
/etc/nginx $ ls -l /etc/ingress-controller/ssl
-rwx------    1 www-data www-data      2933 Oct 21 01:41 default-fake-certificate.pem

You can see that default-fake-certificate.pem is owned by www-data and is readable and writable only by that user.
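It also helps to check which user the controller process actually runs as inside the container: if that UID does not match the owner of the ssl directory on a given host, writes there fail with exactly the "permission denied" error above. A sketch, using the same healthy pod:

```shell
# Show the UID/GID of the user inside the container
kubectl exec -it nginx-ingress-controller-dv657 -n ingress-nginx -- id

# Show numeric ownership of the ssl directory; compare against the UID above
kubectl exec -it nginx-ingress-controller-dv657 -n ingress-nginx -- ls -ln /etc/ingress-controller/ssl
```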

Now let's look at the details of the nginx ingress controller DaemonSet:

➜  ~ k describe daemonset nginx-ingress-controller -n ingress-nginx

We get the following output:

Name:           nginx-ingress-controller
Selector:       app=ingress-nginx
Node-Selector:  <none>
Labels:         <none>
Annotations:    deprecated.daemonset.template.generation: 1
                field.cattle.io/publicEndpoints:
                  [{"nodeName":"c-8vrb9:m-1","addresses":["10.21.1.12"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-cont...
                kubectl.kubernetes.io/last-applied-configuration:
                  {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"name":"nginx-ingress-controller","namespace":"ingress-nginx"},"sp...
Desired Number of Nodes Scheduled: 5
Current Number of Nodes Scheduled: 5
Number of Nodes Scheduled with Up-to-date Pods: 5
Number of Nodes Scheduled with Available Pods: 5
Number of Nodes Misscheduled: 0
Pods Status:  5 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=ingress-nginx
  Service Account:  nginx-ingress-serviceaccount
  Containers:
   nginx-ingress-controller:
    Image:       rancher/nginx-ingress-controller:nginx-0.32.0-rancher1
    Ports:       80/TCP, 443/TCP
    Host Ports:  80/TCP, 443/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      --configmap=$(POD_NAMESPACE)/nginx-configuration
      --election-id=ingress-controller-leader
      --ingress-class=nginx
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      --annotations-prefix=nginx.ingress.kubernetes.io
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:        (v1:metadata.name)
      POD_NAMESPACE:   (v1:metadata.namespace)
    Mounts:           <none>
  Volumes:            <none>
Events:               <none>

We should compare the failing ingress controller pod with a healthy one.
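One quick way to compare pods is to look at the image ID each one is actually running: if the same image tag resolves to different image IDs on different hosts, a stale cached image on one host is the likely culprit. This sketch uses standard Pod status fields:

```shell
# Show, for every ingress controller pod, its node and the image ID it runs.
# A healthy and a failing pod with the same tag but different image IDs
# points to a stale image cache on the failing host.
kubectl get pods -n ingress-nginx -l app=ingress-nginx \
  -o custom-columns=POD:.metadata.name,NODE:.spec.nodeName,IMAGEID:.status.containerStatuses[0].imageID
```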

After some googling, we found two possible fixes:

Solution #1

  • Delete the docker image on the affected host to force the image to be pulled again
  • Or change the ingress controller's image pull policy to Always
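The two options above could look like this; the image name is the one from the DaemonSet output above, and the JSON patch path assumes the controller is the first container in the pod spec:

```shell
# Option A: on the affected host, remove the cached image so the next
# pod start pulls a fresh copy from the registry
docker rmi rancher/nginx-ingress-controller:nginx-0.32.0-rancher1

# Option B: patch the DaemonSet so the image is re-pulled on every pod start
kubectl patch daemonset nginx-ingress-controller -n ingress-nginx \
  --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"Always"}]'
```

In either case, delete the failing pod afterwards so the DaemonSet recreates it with the freshly pulled image.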

Solution #2

You can see that we are using version 0.32.0 of the nginx ingress controller. Some posts report that this version introduced security-related changes, so you can try downgrading your nginx ingress controller to version 0.30.0.
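A downgrade can be done by updating the DaemonSet's container image in place. The 0.30.0 tag below is an assumption — check which tags are actually available in your registry first:

```shell
# Point the DaemonSet at the older controller image
# (container name matches the one shown in the describe output above)
kubectl set image daemonset/nginx-ingress-controller \
  nginx-ingress-controller=rancher/nginx-ingress-controller:nginx-0.30.0-rancher1 \
  -n ingress-nginx

# Watch the pods roll over to the new image
kubectl rollout status daemonset/nginx-ingress-controller -n ingress-nginx
```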