Ensure that the --make-iptables-util-chains argument is set to true
Profile Applicability: Level 1
Allow Kubelet to manage iptables.
Kubelets can automatically manage the required changes to iptables based on the networking options you choose for the pods. It is recommended to let kubelets manage the changes to iptables, as this ensures that the iptables configuration remains in sync with the pod networking configuration. Manually configuring iptables while the pod network configuration changes dynamically can break communication between pods/containers and with the outside world, and can leave you with iptables rules that are too restrictive or too open.
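For context, this behavior is controlled by the makeIPTablesUtilChains setting in the kubelet configuration file (or the --make-iptables-util-chains command-line flag). An illustrative minimal excerpt of a kubelet config file with the setting enabled might look like the following (a sketch for orientation, not the AKS default file):
    {
      "kind": "KubeletConfiguration",
      "apiVersion": "kubelet.config.k8s.io/v1beta1",
      "makeIPTablesUtilChains": true
    }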
Note
See the Azure AKS documentation for the default value.

Impact

Kubelet will manage the iptables rules on the system and keep them in sync. If you are using any other iptables management solution, conflicts may arise.

Audit

Audit Method 1:
If using a Kubelet configuration file, check that there is an entry for makeIPTablesUtilChains set to true.
  1. SSH to each relevant node and run the following command to find the appropriate Kubelet config file:
    ps -ef | grep kubelet
    The output of the above command should include something similar to --config /etc/kubernetes/kubelet/kubelet-config.json, which is the location of the Kubelet config file.
  2. Open the Kubelet config file:
    cat /etc/kubernetes/kubelet/kubelet-config.json
  3. Verify that, if the makeIPTablesUtilChains argument exists, it is set to true.
  4. If the --make-iptables-util-chains argument does not exist, and there is a Kubelet config file specified by --config, verify that the file does not set makeIPTablesUtilChains to false. (A combined one-line check is sketched after this list.)
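As a convenience, steps 2-4 can be collapsed into a single command. This is a sketch that assumes the config file path found in step 1 and that jq is installed on the node:
    sudo jq '.makeIPTablesUtilChains' /etc/kubernetes/kubelet/kubelet-config.json
    # prints true when the check passes, false when it fails,
    # and null when the key is absent (the documented default then applies)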
Audit Method 2:
If using the API configz endpoint, check for "makeIPTablesUtilChains": true by extracting the live configuration from the nodes running Kubelet.
  1. Set the local proxy port and the following variables, and provide the proxy port number and node name:
    • HOSTNAME_PORT="localhost-and-port-number"
    • NODE_NAME="The-Name-Of-Node-To-Extract-Configuration" (from the output of "kubectl get nodes")

    kubectl proxy --port=8001 &

    # example host and port number
    export HOSTNAME_PORT=localhost:8001
    # example node name from "kubectl get nodes"
    export NODE_NAME=ip-192.168.31.226.aks.internal

    curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
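The configz response is JSON with the live kubelet configuration nested under a kubeletconfig key, so the relevant field can be extracted directly. A sketch, assuming jq is available on the machine running the proxy:

    curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz" \
      | jq '.kubeletconfig.makeIPTablesUtilChains'
    # expected output: true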

Remediation

Remediation Method 1:
Edit /etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to true:
"makeIPTablesUtilChains": true
Remediation Method 2:
If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each worker node and add the below parameter at the end of the KUBELET_ARGS variable string:
--make-iptables-util-chains=true
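For illustration, the resulting drop-in file might look like the excerpt below. This is a sketch: the file and variable names follow the text above, and "--existing-flags" stands in for whatever arguments your node image already passes:
     # /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf (excerpt)
     [Service]
     Environment="KUBELET_ARGS=--existing-flags --make-iptables-util-chains=true"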
Remediation Method 3:
If using the API configz endpoint, check for "makeIPTablesUtilChains": true by extracting the live configuration from the nodes running kubelet.
See the step-by-step configmap procedures in the Kubernetes documentation, then rerun the curl command from the audit process to verify the kubelet configuration changes:
     kubectl proxy --port=8001 &
     
     # example host and port number
     export HOSTNAME_PORT=localhost:8001
     # example node name from "kubectl get nodes"
     export NODE_NAME=ip-192.168.31.226.aks.internal
     
     curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
    
For all three remediations:
Restart the kubelet service and check the status:
     systemctl daemon-reload
     systemctl restart kubelet.service
     systemctl status kubelet -l
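After the restart, rerun the audit steps above to confirm the setting took effect. As a quick sketch, the command-line flag from Remediation Method 2 can be checked directly on the process (Methods 1 and 3 set the value in the config file instead, so use the audit commands for those):
     systemctl is-active kubelet.service
     ps -ef | grep kubelet | grep -o -- '--make-iptables-util-chains=[a-z]*'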