How Different Types of Kubernetes Services Route Traffic to Pods

You may already know the major Service types in Kubernetes and their headline differences: “ClusterIP is only reachable from inside the cluster”, “NodePort opens a port on every node”, and “LoadBalancer builds on top of the other two”.

But you may still wonder what cluster-internal access “really” means, and how routing happens all the way to a Pod. Let’s take a deeper look at what’s under the hood.

ClusterIP

ClusterIP is the default and most basic Service type. It exposes the Service on a cluster-internal IP address that is only reachable from within the cluster. This means:

  • A stable virtual IP (VIP) is assigned to the Service
  • The Service is only accessible within the Kubernetes cluster
  • Pods in the cluster can access the Service using the cluster IP or DNS name (see the quick check after this list)
  • External traffic cannot directly reach the Service
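
A quick way to check the in-cluster access (a minimal sketch with hypothetical Service and namespace names; we create a real Service later in this post) is a throwaway curl Pod:

# Call a ClusterIP Service by its DNS name
# <service>.<namespace>.svc.cluster.local from inside the cluster
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://my-service.my-namespace.svc.cluster.local:8000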

Under the hood:

  1. When a ClusterIP Service is created, kube-proxy creates iptables rules on every node
  2. These rules redirect traffic destined for the ClusterIP to the actual Pod IPs
  3. Concrete examples of these iptables rules are examined in the walkthrough below

When you create a ClusterIP Service, Kubernetes assigns a virtual IP to it, which acts as a front for the actual backend Pods. kube-proxy watches the API server for Services and Endpoints, then sets up DNAT (Destination Network Address Translation) rules in the iptables nat table on every node, starting from the KUBE-SERVICES chain.

When an application in a Pod accesses the ClusterIP address, the packet goes through the iptables rules of the node it runs on, and NAT transparently rewrites the destination to a local or remote Pod IP. This is why a ClusterIP Service is only “accessible within the cluster”: the translation only exists in the iptables rules that kube-proxy programs on the cluster’s nodes.

Setting Up a Test Cluster

We are using iptables, the default kube-proxy mode, here. Deep dives into other modes, such as IPVS, are TBD.
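
Once you have a cluster (we set one up next), you can double-check the mode yourself. A quick sketch, assuming a shell on any node; note that the ConfigMap name can vary across distributions:

# On a node: ask kube-proxy which proxy mode is active (metrics port defaults to 10249)
curl -s http://localhost:10249/proxyMode

# Or inspect the kube-proxy configuration; on EKS the ConfigMap is typically
# named kube-proxy-config in the kube-system namespace
kubectl -n kube-system get configmap kube-proxy-config -o yaml | grep mode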

Set up an EKS cluster using eksctl:

# Create an EKS cluster
eksctl create cluster --name my-test --region us-west-2

# A node group is created automatically by eksctl; scale it out to 10 nodes
aws eks update-nodegroup-config --cluster-name my-test --region us-west-2 \
--nodegroup-name ng-c6f91160 --scaling-config maxSize=10,desiredSize=10

# Verify all 10 nodes are ready
kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-168-27-1.us-west-2.compute.internal Ready <none> 29s v1.32.9-eks-113cf36 192.168.27.1 34.221.46.208 Amazon Linux 2023.9.20250929 6.1.153-175.280.amzn2023.x86_64 containerd://1.7.27
ip-192-168-32-206.us-west-2.compute.internal Ready <none> 28s v1.32.9-eks-113cf36 192.168.32.206 35.92.77.14 Amazon Linux 2023.9.20250929 6.1.153-175.280.amzn2023.x86_64 containerd://1.7.27
ip-192-168-4-158.us-west-2.compute.internal Ready <none> 3m51s v1.32.9-eks-113cf36 192.168.4.158 34.216.196.182 Amazon Linux 2023.9.20250929 6.1.153-175.280.amzn2023.x86_64 containerd://1.7.27
ip-192-168-40-246.us-west-2.compute.internal Ready <none> 31s v1.32.9-eks-113cf36 192.168.40.246 54.212.77.44 Amazon Linux 2023.9.20250929 6.1.153-175.280.amzn2023.x86_64 containerd://1.7.27
ip-192-168-47-36.us-west-2.compute.internal Ready <none> 29s v1.32.9-eks-113cf36 192.168.47.36 34.220.91.6 Amazon Linux 2023.9.20250929 6.1.153-175.280.amzn2023.x86_64 containerd://1.7.27
ip-192-168-50-162.us-west-2.compute.internal Ready <none> 28s v1.32.9-eks-113cf36 192.168.50.162 35.161.30.28 Amazon Linux 2023.9.20250929 6.1.153-175.280.amzn2023.x86_64 containerd://1.7.27
ip-192-168-6-106.us-west-2.compute.internal Ready <none> 29s v1.32.9-eks-113cf36 192.168.6.106 54.191.113.135 Amazon Linux 2023.9.20250929 6.1.153-175.280.amzn2023.x86_64 containerd://1.7.27
ip-192-168-66-111.us-west-2.compute.internal Ready <none> 29s v1.32.9-eks-113cf36 192.168.66.111 54.191.69.64 Amazon Linux 2023.9.20250929 6.1.153-175.280.amzn2023.x86_64 containerd://1.7.27
ip-192-168-86-44.us-west-2.compute.internal Ready <none> 3m51s v1.32.9-eks-113cf36 192.168.86.44 35.91.244.226 Amazon Linux 2023.9.20250929 6.1.153-175.280.amzn2023.x86_64 containerd://1.7.27
ip-192-168-94-254.us-west-2.compute.internal Ready <none> 28s v1.32.9-eks-113cf36 192.168.94.254 52.12.179.28 Amazon Linux 2023.9.20250929 6.1.153-175.280.amzn2023.x86_64 containerd://1.7.27

What happens when your applications within the cluster access the ClusterIP Service

Create a file nginx-deployment.yaml. We use a Pod anti-affinity spec to schedule the Pods on different nodes.

apiVersion: v1
kind: Namespace
metadata:
  name: routing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: routing
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 6
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: "kubernetes.io/hostname"

And a service-clusterip.yaml:

apiVersion: v1
kind: Service
metadata:
  name: routing-service
  namespace: routing
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 80

Deploy the YAML files and examine the Pod status:

# Create the Deployment
kubectl apply -f nginx-deployment.yaml
kubectl apply -f service-clusterip.yaml

# Let's see the Pod IP addresses and verify that they are placed on different nodes
kubectl get pod -n routing -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5c8d8d657f-5pbx4 1/1 Running 0 33s 192.168.63.216 ip-192-168-47-36.us-west-2.compute.internal <none> <none>
nginx-deployment-5c8d8d657f-8g7m9 1/1 Running 0 33s 192.168.70.199 ip-192-168-94-254.us-west-2.compute.internal <none> <none>
nginx-deployment-5c8d8d657f-8sqsv 1/1 Running 0 33s 192.168.22.9 ip-192-168-27-1.us-west-2.compute.internal <none> <none>
nginx-deployment-5c8d8d657f-92vp6 1/1 Running 0 33s 192.168.66.148 ip-192-168-66-111.us-west-2.compute.internal <none> <none>
nginx-deployment-5c8d8d657f-qpjbm 1/1 Running 0 33s 192.168.53.235 ip-192-168-50-162.us-west-2.compute.internal <none> <none>
nginx-deployment-5c8d8d657f-sjb6j 1/1 Running 0 33s 192.168.11.88 ip-192-168-6-106.us-west-2.compute.internal <none> <none>

# Verify the Service is created. Its ClusterIP is 10.100.5.54.
kubectl get service -n routing
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
routing-service ClusterIP 10.100.5.54 <none> 8000/TCP 13s
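
Before looking at iptables, it is worth checking the Endpoints (and EndpointSlice) objects behind the Service, since this is exactly what kube-proxy watches and turns into per-Pod rules. Both commands below should list the six Pod IPs on port 80:

# Endpoints object maintained for the Service
kubectl get endpoints routing-service -n routing

# The same backends, exposed as EndpointSlices
kubectl get endpointslices -n routing -l kubernetes.io/service-name=routing-service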

Now let’s check the iptables rules on a node. Note that we have 6 Pods and 10 nodes. I am intentionally picking a node where no nginx Pod is running, such as ip-192-168-40-246.us-west-2.compute.internal, so that we can see how a request is still routed to the Service’s backends.
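
To get a shell on that node, one option is AWS Systems Manager Session Manager (this assumes the SSM agent is running and the node’s instance role allows it; SSH with the node key pair works just as well):

# Look up the instance ID of the chosen node
aws ec2 describe-instances --region us-west-2 \
  --filters "Name=private-dns-name,Values=ip-192-168-40-246.us-west-2.compute.internal" \
  --query "Reservations[].Instances[].InstanceId" --output text

# Start an interactive shell on the node
aws ssm start-session --region us-west-2 --target <instance-id>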

We can use sudo iptables -t nat -L -n -v to list everything in the nat table. There’s a KUBE-SERVICES chain created by kube-proxy. Let’s take a look at it.

sh-5.2$ sudo iptables -t nat -L KUBE-SERVICES -n -v --line-numbers | column -t
Chain KUBE-SERVICES (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.100.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
2 0 0 KUBE-SVC-I7SKRZYQ7PWYV5X7 tcp -- * * 0.0.0.0/0 10.100.22.105 /* kube-system/eks-extension-metrics-api:metrics-api cluster IP */ tcp dpt:443
3 0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
4 0 0 KUBE-SVC-JD5MR3NA4I4DYORP tcp -- * * 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
5 0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
6 0 0 KUBE-SVC-Z4ANX4WAEWEBLCTM tcp -- * * 0.0.0.0/0 10.100.150.131 /* kube-system/metrics-server:https cluster IP */ tcp dpt:443
7 0 0 KUBE-SVC-SKUSM527VLBNHFCG tcp -- * * 0.0.0.0/0 10.100.5.54 /* routing/routing-service cluster IP */ tcp dpt:8000
8 40 2400 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

We can see that there is a target KUBE-SVC-SKUSM527VLBNHFCG pointing to our newly created service routing/routing-service, with the ClusterIP address 10.100.5.54.

Let’s further check the specific chain KUBE-SVC-SKUSM527VLBNHFCG:

sh-5.2$ sudo iptables -t nat -L KUBE-SVC-SKUSM527VLBNHFCG -n -v --line-numbers | column -t
Chain KUBE-SVC-SKUSM527VLBNHFCG (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-SEP-HSURKP7XCHIWJWR7 all -- * * 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service -> 192.168.11.88:80 */ statistic mode random probability 0.16666666651
2 0 0 KUBE-SEP-M7YLCNF5MYCSL4LE all -- * * 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service -> 192.168.22.9:80 */ statistic mode random probability 0.20000000019
3 0 0 KUBE-SEP-6P2Y5UELPXZYKNBO all -- * * 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service -> 192.168.53.235:80 */ statistic mode random probability 0.25000000000
4 0 0 KUBE-SEP-674KRGVPYHFZCI7M all -- * * 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service -> 192.168.63.216:80 */ statistic mode random probability 0.33333333349
5 0 0 KUBE-SEP-VMMEN3TWJZ5SZSSE all -- * * 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service -> 192.168.66.148:80 */ statistic mode random probability 0.50000000000
6 0 0 KUBE-SEP-I7YDRQL2RK5LXY3Z all -- * * 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service -> 192.168.70.199:80 */

We see endpoint chains named KUBE-SEP-*, each representing a Pod/endpoint. Let’s look at one specific endpoint chain:

sh-5.2$ sudo iptables -t nat -L KUBE-SEP-HSURKP7XCHIWJWR7 -n -v --line-numbers | column -t
Chain KUBE-SEP-HSURKP7XCHIWJWR7 (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.11.88 0.0.0.0/0 /* routing/routing-service */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service */ tcp to:192.168.11.88:80

Here we see the DNAT rule rewriting the destination to the Pod IP 192.168.11.88. Similarly, each of the other KUBE-SEP-* endpoint chains contains a DNAT rule for its associated Pod IP.
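
You can also watch this translation happen in the kernel’s connection-tracking table. A quick sketch, assuming the conntrack tool is installed on the node (it may not be in the stock AMI, e.g. install it with dnf install conntrack-tools):

# Send one request to the ClusterIP from the node ...
curl -s --max-time 3 http://10.100.5.54:8000 > /dev/null

# ... then find its conntrack entry; the reply side shows the Pod IP that
# the KUBE-SEP-* DNAT rule selected
sudo conntrack -L -p tcp -d 10.100.5.54 --dport 8000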

NodePort

NodePort builds on top of ClusterIP by exposing the Service on each Node’s IP at a static port (the NodePort). This means:

  • The Service is accessible from outside the cluster using <NodeIP>:<NodePort>
  • A port is allocated in the range 30000-32767 (configurable)
  • The Service is still accessible within the cluster using ClusterIP
  • Traffic can reach the Service through any Node’s IP address

Under the hood:

  1. kube-proxy creates additional iptables rules for the NodePort
  2. External traffic hitting any Node’s IP on the NodePort is forwarded to the Service’s ClusterIP
  3. The traffic is then distributed to pods using the existing ClusterIP rules

What happens when traffic from inside or outside the cluster reaches the NodePort Service

Create another Service, service-nodeport.yaml.

apiVersion: v1
kind: Service
metadata:
  name: routing-service-np
  namespace: routing
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 80

After applying it, we can see that the NodePort Service has both a ClusterIP and an assigned node port, 30502 in this case.

kubectl apply -f service-nodeport.yaml

# List the Services
kubectl get service -n routing
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
routing-service ClusterIP 10.100.5.54 <none> 8000/TCP 7m43s
routing-service-np NodePort 10.100.161.147 <none> 8000:30502/TCP 6s

Since we have created another Service with service-nodeport.yaml, we should see the new Service under KUBE-SERVICES. There is also a KUBE-NODEPORTS chain as the last target in the KUBE-SERVICES chain, as we saw in the results above. Let’s list the rules in the KUBE-NODEPORTS chain.

sh-5.2$ sudo iptables -t nat -L KUBE-NODEPORTS -n | column -t
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
KUBE-EXT-GA6B6UKIFOFCSJYD tcp -- 0.0.0.0/0 127.0.0.0/8 /* routing/routing-service-np */ tcp dpt:30502 nfacct-name localhost_nps_accepted_pkts
KUBE-EXT-GA6B6UKIFOFCSJYD tcp -- 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service-np */ tcp dpt:30502

sh-5.2$ sudo iptables -t nat -L KUBE-EXT-GA6B6UKIFOFCSJYD -n | column -t
Chain KUBE-EXT-GA6B6UKIFOFCSJYD (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 0.0.0.0/0 0.0.0.0/0 /* masquerade traffic for routing/routing-service-np external destinations */
KUBE-SVC-GA6B6UKIFOFCSJYD all -- 0.0.0.0/0 0.0.0.0/0

Let’s follow the KUBE-SVC-GA6B6UKIFOFCSJYD chain to further examine our service.

sh-5.2$ sudo iptables -t nat -L KUBE-SVC-GA6B6UKIFOFCSJYD -n -v --line-numbers | column -t
Chain KUBE-SVC-GA6B6UKIFOFCSJYD (2 references)
target prot opt source destination
KUBE-SEP-O7ZL65Q6F2I7XFSO all -- 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service-np -> 192.168.11.88:80 */ statistic mode random probability 0.16666666651
KUBE-SEP-K3OZODAFPOBVG7YA all -- 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service-np -> 192.168.22.9:80 */ statistic mode random probability 0.20000000019
KUBE-SEP-YBPM2SFBXPS3RMN3 all -- 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service-np -> 192.168.53.235:80 */ statistic mode random probability 0.25000000000
KUBE-SEP-LQPQ7JRZTE3RDSAG all -- 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service-np -> 192.168.63.216:80 */ statistic mode random probability 0.33333333349
KUBE-SEP-4B2IF36DOXHSN2CE all -- 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service-np -> 192.168.66.148:80 */ statistic mode random probability 0.50000000000
KUBE-SEP-CZ4WPNGGY2EUE5EL all -- 0.0.0.0/0 0.0.0.0/0 /* routing/routing-service-np -> 192.168.70.199:80 */

We can see that the KUBE-SEP-* chains under the KUBE-SVC-* entry map to their corresponding Pod IPs, similar to what we saw for ClusterIP above.

For packets destined for port 30502, whether they originate inside or outside the cluster:

  1. the KUBE-MARK-MASQ rule marks the packet to be altered later in the POSTROUTING chain, where SNAT (Source Network Address Translation) rewrites the source IP to the node IP, so that hosts outside the Pod network can reply
  2. the DNAT (Destination Network Address Translation) target rewrites the destination to a Pod IP; this is how the request is routed to an actual Pod

It is worth calling out that even if you send a request to a specific <NodeIP>:<NodePort>, the destination Pod may not be running on that node; DNAT can pick any of the endpoints listed in the iptables rules.
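
To see this from outside the cluster, send a request to the node port on any node’s public IP (an assumption about your setup: the node security group must allow inbound traffic on 30502 from your address, which you may need to add yourself):

# ip-192-168-40-246 hosts no nginx Pod (external IP 54.212.77.44 in the node
# listing above), yet DNAT still forwards the request to one of the Pods
curl -s http://54.212.77.44:30502

# From inside the cluster or the VPC, the node's private IP behaves the same way
curl -s http://192.168.40.246:30502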

LoadBalancer

LoadBalancer builds on top of NodePort by exposing the Service externally through a cloud provider’s load balancer. This means:

  • The Service is accessible through a cloud provider’s load balancer IP
  • The load balancer distributes traffic across all nodes
  • The Service is still accessible through NodePort and ClusterIP
  • Kubernetes automatically provisions and configures the cloud load balancer

Under the hood:

  1. The cloud provider’s controller creates a load balancer
  2. The load balancer forwards traffic to NodePort
  3. NodePort forwards to ClusterIP
  4. ClusterIP distributes to pods

Create a service-loadbalancer.yaml:

apiVersion: v1
kind: Service
metadata:
  name: routing-service-lb
  namespace: routing
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 80

We can see that the LoadBalancer Service has a ClusterIP, an assigned node port, and an external address. Since this is a bare-minimum cluster in AWS, the EXTERNAL-IP is backed by an AWS Classic Load Balancer. You can go to the AWS console -> EC2 -> Load Balancers to find it. The targets behind the load balancer are the 10 instances, listening on port 32697. Requests are routed to <instance-id>:32697, which is exactly how NodePort works.

kubectl apply -f service-loadbalancer.yaml

kubectl get service -n routing
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
routing-service ClusterIP 10.100.5.54 <none> 8000/TCP 17m
routing-service-lb LoadBalancer 10.100.6.86 ab57de7fef43f4321bd4ab0fc836c7c3-1354178293.us-west-2.elb.amazonaws.com 8000:32697/TCP 5s
routing-service-np NodePort 10.100.161.147 <none> 8000:30502/TCP 9m43s
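
Once the load balancer is provisioned and its targets pass health checks (this can take a minute or two), the Service is reachable from outside the cluster via the load balancer’s DNS name on the Service port:

# The classic ELB listens on the Service port (8000) and forwards to the
# node port (32697) on the registered instances
curl -s http://ab57de7fef43f4321bd4ab0fc836c7c3-1354178293.us-west-2.elb.amazonaws.com:8000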

If you use the AWS Load Balancer Controller, a Network Load Balancer (NLB) is created for LoadBalancer Services instead. The controller provides the flexibility to choose between instance or ip as the target type (this flexibility also applies to the Application Load Balancers it creates for Ingress resources).

  1. For the instance target type, the NLB registers instances/nodes as targets and routes traffic the NodePort way
  2. For the ip target type, the NLB registers Pod IPs as targets. It does not use NodePort under the hood; instead, a request is routed directly to a Pod, based on the routing algorithm configured in the NLB (see the sketch after this list)
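
Here is a sketch of the ip target type (the Service name is made up, and the annotation names assume a recent AWS Load Balancer Controller release; check the controller documentation for your version):

apiVersion: v1
kind: Service
metadata:
  name: routing-service-nlb
  namespace: routing
  annotations:
    # Hand this Service to the AWS Load Balancer Controller instead of the legacy controller
    service.beta.kubernetes.io/aws-load-balancer-type: external
    # Register Pod IPs directly as NLB targets, skipping the NodePort hop
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 80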

Note: LoadBalancer service type requires cloud provider integration. For bare metal or on-premises clusters, you’ll need additional solutions like MetalLB to provide load balancer functionality.

Service Types Comparison

| Feature | ClusterIP | NodePort | LoadBalancer |
| --- | --- | --- | --- |
| Accessibility | Internal cluster only | Internal + External via NodeIP:NodePort | Internal + External via Load Balancer IP |
| IP Assignment | Virtual ClusterIP only | ClusterIP + NodePort on all nodes | ClusterIP + NodePort + External LB IP |
| Port Range | Any port | 30000-32767 (default) | Any port (LB) + NodePort |
| Use Case | Internal microservices | Development/testing, direct node access | Production external services |
| Load Balancing | kube-proxy (iptables/IPVS) | kube-proxy + external routing | Cloud LB + kube-proxy |
| Cloud Provider Required | No | No | Yes (or MetalLB for bare metal) |
| Routing Flow | Client → ClusterIP → Pod | Client → NodeIP:Port → ClusterIP → Pod | Client → LB → NodePort → ClusterIP → Pod |
| iptables Rules | KUBE-SERVICES → KUBE-SVC-* → KUBE-SEP-* (DNAT to Pod) | KUBE-NODEPORTS → KUBE-EXT-* → KUBE-SVC-* → KUBE-SEP-* (MASQ + DNAT) | Same as NodePort (LB forwards to NodePort) |

Key Takeaways:

  • ClusterIP provides internal service discovery and load balancing within the cluster using iptables DNAT rules
  • NodePort builds on ClusterIP by adding external accessibility through a static port on every node
  • LoadBalancer builds on NodePort by provisioning a cloud load balancer that distributes traffic across nodes
  • All three types ultimately rely on kube-proxy’s iptables rules to route traffic to the actual Pods
  • Traffic can land on any node and be routed to any Pod, regardless of Pod location

See Also

  1. https://kubernetes.io/docs/concepts/services-networking/service/
  2. https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/
  3. https://dustinspecker.com/posts/iptables-how-kubernetes-services-direct-traffic-to-pods/
  4. https://dustinspecker.com/posts/resolving-kubernetes-services-from-host-when-using-kind/
  5. https://dustinspecker.com/series/container-networking/
  6. https://ronaknathani.com/blog/2020/07/kubernetes-nodeport-and-iptables-rules/
  7. https://stackoverflow.com/questions/77034250/how-does-a-service-choose-which-pod-to-send-request-to
  8. https://www.reddit.com/r/kubernetes/comments/16a2us4/how_does_a_service_choose_which_pod_to_send/
  9. https://www.youtube.com/watch?v=uGm_A9qRCsk
  10. https://medium.com/@amroessameldin/kube-proxy-what-is-it-and-how-it-works-6def85d9bc8f