🧪 Hands-On Practice Lab

ICA Practice Questions

46 comprehensive hands-on lab exercises covering all exam domains. Master the Istio service mesh through practical scenarios.

📘 About This Practice Lab

This practice lab contains 46 hands-on questions designed to prepare you for the Istio Certified Associate (ICA) exam. Each question presents a realistic scenario you might encounter in the actual exam or in production environments.

Questions are organized by exam domains with accurate weight distribution. All commands have been verified for Istio 1.28.x and follow current best practices.

🎯 Exam-Realistic Scenarios: Task-based questions matching actual exam format
🔄 Independent Questions: Complete any question in any order with full cleanup
✅ Verification Steps: Expected results to confirm your solution is correct
📚 Official References: Direct links to Istio documentation for deeper learning

🖥️ Lab Environment Requirements

These exercises can be completed on any Kubernetes environment with Istio installed. Below are some recommended options:

☁️ Cloud Playgrounds: KodeKloud, Killercoda, or Play with Kubernetes
💻 Local Clusters: Minikube, kind, k3d, or Docker Desktop
🏢 Managed Kubernetes: GKE, EKS, AKS with Istio addon or manual install
⚙️ Required Tools: kubectl, istioctl (1.28.x), and cluster admin access

Question distribution across domains:
  • Installation (20%): 12 questions
  • Traffic Mgmt (35%): 12 questions
  • Security (25%): 12 questions
  • Troubleshooting (20%): 10 questions
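
If istioctl is not already available on your workstation, the Istio download script is a common way to fetch it. The snippet below is a sketch: the version shown is only an example, so substitute the release that matches your cluster.

bash
# Download and extract Istio (adjust ISTIO_VERSION to match your cluster)
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.28.0 sh -

# Put istioctl on the PATH for this shell session
export PATH="$PWD/istio-1.28.0/bin:$PATH"

# Confirm the client version
istioctl version --remote=false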
Domain 1 • 20% of Exam • 12 Questions

Installation, Upgrade & Configuration

Master Istio installation methods, configuration profiles, sidecar injection, mesh configuration, and upgrade strategies.

1
Verify Istio Installation and Component Health
You have joined a new team and need to verify the current Istio installation. Check the installed version, verify all control plane components are healthy, and confirm the data plane proxies are in sync.
📋 Prerequisites

Istio should be pre-installed on the cluster. Verify you have access to istioctl and kubectl.

bash
# Verify CLI tools are available
which istioctl
which kubectl

# Verify cluster access
kubectl cluster-info
🎯 Task
  • Task 1: Check the Istio version for client, control plane, and data plane
  • Task 2: Verify all Istio control plane pods are running in istio-system namespace
  • Task 3: Use istioctl proxy-status to verify all proxies are in SYNCED state
  • Task 4: Run istioctl analyze to check for configuration issues in the mesh
✅ Solution
bash
# Task 1: Check Istio version (client, control plane, data plane)
istioctl version

# Task 2: Verify control plane pods are running
kubectl get pods -n istio-system

# Check that istiod is ready
kubectl get deployment istiod -n istio-system

# Task 3: Check proxy sync status
istioctl proxy-status

# Task 4: Analyze mesh configuration for issues
istioctl analyze --all-namespaces

💡 Exam Tip

istioctl proxy-status columns: SYNCED means proxy config is current. STALE means proxy hasn't received updates. NOT SENT means istiod hasn't pushed config yet.

istioctl analyze finds misconfigurations like missing destinations, conflicting policies, or deprecated settings.

📊 Expected Result

✓ Verification Checklist

  • istioctl version shows matching versions for client, control plane, and data plane
  • All pods in istio-system are in Running state with all containers ready
  • istioctl proxy-status shows all proxies as SYNCED for CDS, LDS, EDS, RDS
  • istioctl analyze reports no errors (warnings are acceptable)
🧹 Cleanup

This exercise is read-only verification. No resources were created, so no cleanup is required.

2
Enable Envoy Access Logging for Debugging
Your team is troubleshooting intermittent 503 errors in production. Enable Envoy access logging to capture request details. Configure JSON-formatted logging to stdout for all sidecar proxies and verify logs are being generated.
📋 Prerequisites

Create a test namespace with sample applications to generate traffic for log verification.

bash
# Create namespace for testing
kubectl create namespace logging-test

# Enable sidecar injection
kubectl label namespace logging-test istio-injection=enabled

# Deploy httpbin (server) and sleep (client) applications
kubectl apply -n logging-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n logging-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

# Wait for pods to be ready
kubectl wait --for=condition=ready pod -l app=httpbin -n logging-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n logging-test --timeout=120s
🎯 Task
  • Task 1: Enable access logging by updating the Istio mesh configuration to log to /dev/stdout with JSON encoding
  • Task 2: Restart the test pods to pick up the new logging configuration
  • Task 3: Generate HTTP traffic from the sleep pod to httpbin service
  • Task 4: View the access logs from the httpbin pod's istio-proxy container and confirm JSON format
✅ Solution
bash
# Task 1: Enable access logging via istioctl
istioctl install --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.accessLogEncoding=JSON -y

# Verify the configuration was applied
kubectl get configmap istio -n istio-system -o jsonpath='{.data.mesh}' | grep accessLog

# Task 2: Restart pods to pick up new config
kubectl rollout restart deployment/httpbin -n logging-test
kubectl rollout restart deployment/sleep -n logging-test

# Wait for pods to be ready again
kubectl wait --for=condition=ready pod -l app=httpbin -n logging-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n logging-test --timeout=120s

# Task 3: Generate traffic
kubectl exec -n logging-test deploy/sleep -- curl -s httpbin:8000/ip
kubectl exec -n logging-test deploy/sleep -- curl -s httpbin:8000/headers
kubectl exec -n logging-test deploy/sleep -- curl -s httpbin:8000/status/200

# Task 4: View access logs (JSON format)
kubectl logs -n logging-test deploy/httpbin -c istio-proxy --tail=10

💡 Exam Tip

Access logs are essential for debugging. Key fields in JSON logs:
• response_code - HTTP status (200, 503, etc.)
• upstream_cluster - Where traffic was routed
• duration - Request time in milliseconds
• response_flags - Envoy flags such as UF (upstream connection failure) and UC (upstream connection terminated)
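
With JSON logging enabled, failing requests can be filtered straight from the proxy logs. A quick sketch; the grep pattern assumes the default compact JSON field names, so adjust it if you use a custom accessLogFormat:

bash
# Show only access-log entries that recorded a 503
kubectl logs -n logging-test deploy/httpbin -c istio-proxy --tail=200 | grep '"response_code":503'

# Count which Envoy response flags appear alongside 503s (requires jq locally)
kubectl logs -n logging-test deploy/httpbin -c istio-proxy --tail=200 \
  | jq -rR 'fromjson? | select(.response_code == 503) | .response_flags' | sort | uniq -c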

📊 Expected Result

✓ Verification Checklist

  • ConfigMap istio shows accessLogFile: /dev/stdout and accessLogEncoding: JSON
  • Both pods show 2/2 ready after restart
  • curl commands return successful responses (IP address, headers, etc.)
  • Logs from istio-proxy show JSON entries with fields: authority, method, path, response_code, upstream_cluster
🧹 Cleanup

Remove all resources created and optionally disable access logging:

bash
# Delete the test namespace and all resources
kubectl delete namespace logging-test

# Verify namespace is deleted
kubectl get namespace logging-test 2>/dev/null || echo "Namespace deleted successfully"

# (Optional) Disable access logging - reinstall without accessLog settings
# istioctl install --set profile=default -y
3
Onboard Application Namespace to Service Mesh
A new microservice called "orders-api" needs to be onboarded to the Istio service mesh. Create the namespace, enable automatic sidecar injection, deploy the application, and verify it can communicate with other mesh services.
📋 Prerequisites

Verify Istio's sidecar injector webhook is available and the control plane is healthy.

bash
# Verify sidecar injector webhook exists
kubectl get mutatingwebhookconfiguration | grep istio

# Verify istiod is running
kubectl get pods -n istio-system -l app=istiod
🎯 Task
  • Task 1: Create a namespace called orders
  • Task 2: Enable automatic sidecar injection for the namespace using the appropriate label
  • Task 3: Deploy a simple nginx deployment named orders-api with 1 replica in the namespace
  • Task 4: Verify the pod has 2 containers (application + istio-proxy sidecar)
  • Task 5: Confirm the sidecar is connected to the control plane using istioctl proxy-status
✅ Solution
bash
# Task 1: Create namespace
kubectl create namespace orders

# Task 2: Enable sidecar injection
kubectl label namespace orders istio-injection=enabled

# Verify label was applied
kubectl get namespace orders --show-labels

# Task 3: Deploy orders-api application
kubectl create deployment orders-api --image=nginx:1.24 -n orders

# Wait for pod to be ready
kubectl wait --for=condition=ready pod -l app=orders-api -n orders --timeout=120s

# Task 4: Verify pod has 2 containers
kubectl get pods -n orders

# List container names in the pod
kubectl get pod -l app=orders-api -n orders -o jsonpath='{.items[0].spec.containers[*].name}'
echo ""

# Verify istio-proxy container is present
kubectl get pod -l app=orders-api -n orders -o jsonpath='{range .items[0].spec.containers[*]}{.name}{": "}{.image}{"\n"}{end}'

# Task 5: Check proxy is synced with control plane
istioctl proxy-status | grep orders

💡 Exam Tip

Two ways to enable injection:
• istio-injection=enabled - Uses default Istio revision
• istio.io/rev=<revision> - Uses specific revision (canary upgrades)

The sidecar injector adds: istio-init (init container for iptables) and istio-proxy (Envoy sidecar)
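
A short sketch of both labeling approaches (the revision name below is only an example; check what your cluster actually has with istioctl tag list):

bash
# Default-revision injection, as used in this exercise
kubectl label namespace orders istio-injection=enabled

# Revision-based injection for canary control planes
# (remove the default label and point the namespace at an example revision)
kubectl label namespace orders istio-injection- istio.io/rev=1-28-0 --overwrite

# List revisions/tags available in the cluster
istioctl tag list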

📊 Expected Result

✓ Verification Checklist

  • Namespace orders exists with label istio-injection=enabled
  • Pod shows 2/2 in READY column
  • Container names output: orders-api istio-proxy
  • istioctl proxy-status shows the orders-api pod as SYNCED
🧹 Cleanup

Remove all resources created during this exercise:

bash
# Delete the namespace (removes deployment, pods, and namespace)
kubectl delete namespace orders

# Verify cleanup
kubectl get namespace orders 2>/dev/null || echo "Namespace deleted successfully"

# Verify no pods remain
kubectl get pods -n orders 2>/dev/null || echo "All pods cleaned up"
4
Exclude a Pod from Sidecar Injection
Your team is deploying a legacy monitoring agent that is incompatible with the Istio sidecar. Deploy this pod in an injection-enabled namespace but prevent the sidecar from being injected using pod annotations.
📋 Prerequisites

Create a namespace with sidecar injection enabled to test selective exclusion.

bash
# Create namespace with injection enabled
kubectl create namespace monitoring
kubectl label namespace monitoring istio-injection=enabled
kubectl get namespace monitoring --show-labels
🎯 Task
  • Task 1: Create a pod named legacy-agent with annotation sidecar.istio.io/inject: "false"
  • Task 2: Create another pod named metrics-collector WITHOUT the annotation (should get sidecar)
  • Task 3: Verify legacy-agent has only 1 container (no sidecar)
  • Task 4: Verify metrics-collector has 2 containers (with sidecar)
✅ Solution
bash
# Task 1: Create pod WITH injection disabled
kubectl run legacy-agent --image=nginx:1.24 -n monitoring \
  --overrides='{"apiVersion":"v1","metadata":{"annotations":{"sidecar.istio.io/inject":"false"}}}'

# Task 2: Create pod WITHOUT annotation (will get sidecar)
kubectl run metrics-collector --image=nginx:1.24 -n monitoring

# Wait for pods
kubectl wait --for=condition=ready pod/legacy-agent -n monitoring --timeout=60s
kubectl wait --for=condition=ready pod/metrics-collector -n monitoring --timeout=60s

# Task 3 & 4: Verify container counts
kubectl get pods -n monitoring

# Check legacy-agent containers (should be 1)
kubectl get pod legacy-agent -n monitoring -o jsonpath='{.spec.containers[*].name}'
echo ""

# Check metrics-collector containers (should be 2)
kubectl get pod metrics-collector -n monitoring -o jsonpath='{.spec.containers[*].name}'
echo ""

💡 Exam Tip

The annotation sidecar.istio.io/inject: "false" overrides namespace-level injection. Use for legacy apps, jobs that need clean termination, or infrastructure pods.
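
When the workload ships as a manifest instead of a kubectl run command, the same annotation goes on the pod template. A minimal sketch (names are illustrative):

bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-agent-deploy
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-agent-deploy
  template:
    metadata:
      labels:
        app: legacy-agent-deploy
      annotations:
        sidecar.istio.io/inject: "false"   # opt this pod out of injection
    spec:
      containers:
      - name: agent
        image: nginx:1.24
EOF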

📊 Expected Result

✓ Verification Checklist

  • legacy-agent shows 1/1 READY (no sidecar)
  • metrics-collector shows 2/2 READY (has sidecar)
  • legacy-agent containers: legacy-agent only
  • metrics-collector containers: metrics-collector istio-proxy
🧹 Cleanup
bash
kubectl delete namespace monitoring
kubectl get namespace monitoring 2>/dev/null || echo "Namespace deleted"
5
Restrict Outbound Traffic to Registered Services Only
For security compliance, configure the mesh to block traffic to external services unless explicitly registered via ServiceEntry. Set outbound traffic policy to REGISTRY_ONLY.
📋 Prerequisites

Deploy a test application to verify outbound traffic behavior.

bash
# Create test namespace
kubectl create namespace egress-test
kubectl label namespace egress-test istio-injection=enabled

# Deploy sleep app
kubectl apply -n egress-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=sleep -n egress-test --timeout=120s

# Test current access (should work with default ALLOW_ANY)
kubectl exec -n egress-test deploy/sleep -- curl -sI https://httpbin.org/get --max-time 5 | head -1
🎯 Task
  • Task 1: Check the current outbound traffic policy setting
  • Task 2: Configure mesh to use REGISTRY_ONLY outbound policy
  • Task 3: Restart test pod and verify external traffic is blocked
✅ Solution
bash
# Task 1: Check current policy
kubectl get configmap istio -n istio-system -o jsonpath='{.data.mesh}' | grep outboundTrafficPolicy || echo "Using default (ALLOW_ANY)"

# Task 2: Set REGISTRY_ONLY policy
istioctl install --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY -y

# Verify setting
kubectl get configmap istio -n istio-system -o jsonpath='{.data.mesh}' | grep -A1 outboundTrafficPolicy

# Task 3: Restart pod and test
kubectl rollout restart deployment/sleep -n egress-test
kubectl wait --for=condition=ready pod -l app=sleep -n egress-test --timeout=120s

# Test external access (should fail)
kubectl exec -n egress-test deploy/sleep -- curl -sI https://httpbin.org/get --max-time 5 2>&1 | head -3 || echo "Blocked as expected"

💡 Exam Tip

ALLOW_ANY: Access any external service (default)
REGISTRY_ONLY: Only registered services (ServiceEntry) accessible
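
The same setting can be kept declaratively in an IstioOperator file instead of repeated --set flags, which is easier to review and re-apply. A sketch, assuming you drive installs from files:

bash
cat <<EOF > outbound-policy.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
EOF

istioctl install -f outbound-policy.yaml -y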

📊 Expected Result

✓ Verification Checklist

  • ConfigMap shows outboundTrafficPolicy: mode: REGISTRY_ONLY
  • External curl to httpbin.org fails or times out
🧹 Cleanup
bash
kubectl delete namespace egress-test
istioctl install --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY -y
6
Manually Inject Sidecar Using istioctl
Add a workload to the mesh in a namespace where automatic injection is not enabled. Use istioctl kube-inject to manually inject the sidecar into the deployment manifest.
📋 Prerequisites
bash
# Create namespace WITHOUT injection label
kubectl create namespace manual-inject
kubectl get namespace manual-inject --show-labels
🎯 Task
  • Task 1: Create a deployment YAML file for nginx
  • Task 2: Use istioctl kube-inject to inject the sidecar
  • Task 3: Apply the injected manifest
  • Task 4: Verify pod has 2 containers despite no namespace injection
✅ Solution
bash
# Task 1: Create deployment manifest
cat <<EOF > nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-manual
  namespace: manual-inject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-manual
  template:
    metadata:
      labels:
        app: nginx-manual
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80
EOF

# Task 2: Inject sidecar
istioctl kube-inject -f nginx-deploy.yaml > nginx-deploy-injected.yaml

# Task 3: Apply injected manifest
kubectl apply -f nginx-deploy-injected.yaml
kubectl wait --for=condition=ready pod -l app=nginx-manual -n manual-inject --timeout=120s

# Task 4: Verify sidecar
kubectl get pods -n manual-inject
kubectl get pod -l app=nginx-manual -n manual-inject -o jsonpath='{.items[0].spec.containers[*].name}'
echo ""

# Verify proxy sync
istioctl proxy-status | grep nginx-manual

💡 Exam Tip

One-liner alternative: kubectl apply -f <(istioctl kube-inject -f deploy.yaml)

📊 Expected Result

✓ Verification Checklist

  • Namespace has NO istio-injection label
  • Pod shows 2/2 READY
  • Containers: nginx istio-proxy
  • proxy-status shows SYNCED
🧹 Cleanup
bash
kubectl delete namespace manual-inject
rm -f nginx-deploy.yaml nginx-deploy-injected.yaml
7
Configure Sidecar Proxy Resource Limits
Configure default CPU and memory limits for sidecar proxies: requests 100m CPU/128Mi memory, limits 500m CPU/256Mi memory.
📋 Prerequisites
bash
kubectl create namespace resource-test
kubectl label namespace resource-test istio-injection=enabled
🎯 Task
  • Task 1: Configure proxy resource requests: 100m CPU, 128Mi memory
  • Task 2: Configure proxy resource limits: 500m CPU, 256Mi memory
  • Task 3: Deploy test pod and verify resources
✅ Solution
bash
# Task 1 & 2: Configure proxy resources
istioctl install \
  --set values.global.proxy.resources.requests.cpu=100m \
  --set values.global.proxy.resources.requests.memory=128Mi \
  --set values.global.proxy.resources.limits.cpu=500m \
  --set values.global.proxy.resources.limits.memory=256Mi \
  -y

# Task 3: Deploy test pod
kubectl run resource-check --image=nginx:1.24 -n resource-test
kubectl wait --for=condition=ready pod/resource-check -n resource-test --timeout=120s

# Verify resources
kubectl get pod resource-check -n resource-test -o jsonpath='{.spec.containers[?(@.name=="istio-proxy")].resources}' | jq .

💡 Exam Tip

Per-pod override via annotations: sidecar.istio.io/proxyCPU, sidecar.istio.io/proxyMemory
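
A sketch of the per-pod override using those annotations (the sizes below are examples only):

bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: resource-override-demo
  namespace: resource-test
  annotations:
    sidecar.istio.io/proxyCPU: 200m          # proxy CPU request for this pod only
    sidecar.istio.io/proxyMemory: 256Mi      # proxy memory request
    sidecar.istio.io/proxyCPULimit: 500m     # proxy CPU limit
    sidecar.istio.io/proxyMemoryLimit: 512Mi # proxy memory limit
spec:
  containers:
  - name: app
    image: nginx:1.24
EOF

# Inspect the injected proxy's resources
kubectl get pod resource-override-demo -n resource-test \
  -o jsonpath='{.spec.containers[?(@.name=="istio-proxy")].resources}'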

📊 Expected Result

✓ Verification Checklist

  • istio-proxy requests: cpu=100m, memory=128Mi
  • istio-proxy limits: cpu=500m, memory=256Mi
🧹 Cleanup
bash
kubectl delete namespace resource-test
istioctl install --set profile=default -y
8
Register External Service Using ServiceEntry
With REGISTRY_ONLY mode enabled, create a ServiceEntry to allow mesh workloads to access httpbin.org external API.
📋 Prerequisites
bash
# Set REGISTRY_ONLY and deploy test client
istioctl install --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY -y

kubectl create namespace external-test
kubectl label namespace external-test istio-injection=enabled
kubectl apply -n external-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=sleep -n external-test --timeout=120s

# Verify blocked
kubectl exec -n external-test deploy/sleep -- curl -sI https://httpbin.org/get --max-time 5 2>&1 | head -2 || echo "Blocked"
🎯 Task
  • Task 1: Create ServiceEntry httpbin-ext for httpbin.org (HTTPS/443)
  • Task 2: Verify external access now works
✅ Solution
bash
# Task 1: Create ServiceEntry
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: httpbin-ext
  namespace: external-test
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
EOF

# Verify ServiceEntry
kubectl get serviceentry -n external-test

# Task 2: Test access
kubectl exec -n external-test deploy/sleep -- curl -sI https://httpbin.org/get --max-time 10 | head -2

💡 Exam Tip

ServiceEntry key fields: hosts, ports, resolution: DNS, location: MESH_EXTERNAL

📊 Expected Result

✓ Verification Checklist

  • ServiceEntry httpbin-ext created
  • curl returns HTTP/2 200
🧹 Cleanup
bash
kubectl delete namespace external-test
istioctl install --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY -y
9
Configure Application to Wait for Sidecar Proxy
Enable holdApplicationUntilProxyStarts to ensure application containers wait for the Envoy sidecar to be ready before starting.
📋 Prerequisites
bash
kubectl create namespace proxy-wait
kubectl label namespace proxy-wait istio-injection=enabled
🎯 Task
  • Task 1: Enable holdApplicationUntilProxyStarts: true globally
  • Task 2: Deploy a test pod and verify the setting
  • Task 3: (Alternative) Show per-pod annotation method
✅ Solution
bash
# Task 1: Enable globally
istioctl install --set meshConfig.defaultConfig.holdApplicationUntilProxyStarts=true -y

# Verify
kubectl get configmap istio -n istio-system -o jsonpath='{.data.mesh}' | grep holdApplication

# Task 2: Deploy test pod
kubectl run wait-test --image=nginx:1.24 -n proxy-wait
kubectl wait --for=condition=ready pod/wait-test -n proxy-wait --timeout=120s
kubectl get pods -n proxy-wait

# Task 3: Per-pod annotation alternative
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: wait-annotation
  namespace: proxy-wait
  annotations:
    proxy.istio.io/config: '{"holdApplicationUntilProxyStarts": true}'
spec:
  containers:
  - name: app
    image: nginx:1.24
EOF

💡 Exam Tip

Global: meshConfig.defaultConfig.holdApplicationUntilProxyStarts
Per-pod: proxy.istio.io/config annotation

📊 Expected Result

✓ Verification Checklist

  • ConfigMap shows holdApplicationUntilProxyStarts: true
  • Pods start successfully with 2/2 READY
🧹 Cleanup
bash
kubectl delete namespace proxy-wait
istioctl install --set profile=default -y
10
Limit Sidecar Proxy Scope with Sidecar Resource
Create a Sidecar resource to limit a namespace's proxy configuration to only the services it needs, reducing memory usage.
📋 Prerequisites
bash
# Create frontend and backend namespaces
kubectl create namespace frontend
kubectl label namespace frontend istio-injection=enabled

kubectl create namespace backend
kubectl label namespace backend istio-injection=enabled

# Deploy backend service
kubectl create deployment backend-api --image=nginx:1.24 -n backend
kubectl expose deployment backend-api --port=80 -n backend
kubectl wait --for=condition=ready pod -l app=backend-api -n backend --timeout=120s

# Deploy frontend client
kubectl apply -n frontend -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=sleep -n frontend --timeout=120s
🎯 Task
  • Task 1: Check current proxy cluster configuration
  • Task 2: Create Sidecar resource limiting egress to backend and istio-system only
  • Task 3: Verify reduced proxy config scope
✅ Solution
bash
# Task 1: Check current clusters
SLEEP_POD=$(kubectl get pod -l app=sleep -n frontend -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config clusters $SLEEP_POD -n frontend | head -15

# Task 2: Create Sidecar resource
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: frontend-sidecar
  namespace: frontend
spec:
  egress:
  - hosts:
    - "./*"
    - "backend/*"
    - "istio-system/*"
EOF

sleep 5

# Task 3: Verify reduced scope
istioctl proxy-config clusters $SLEEP_POD -n frontend | head -15

# Test connectivity still works
kubectl exec -n frontend deploy/sleep -- curl -s backend-api.backend:80 | head -3

💡 Exam Tip

Sidecar egress hosts: ./* (same ns), namespace/* (specific ns). Always include istio-system/*.
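
The same idea scales to the whole mesh: a Sidecar named default in the root namespace (istio-system by default) acts as the fallback for every namespace without its own Sidecar. A sketch; apply with care, since it narrows egress everywhere at once:

bash
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: default
  namespace: istio-system
spec:
  egress:
  - hosts:
    - "./*"              # services in the workload's own namespace
    - "istio-system/*"   # control plane endpoints
EOF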

📊 Expected Result

✓ Verification Checklist

  • After Sidecar: proxy-config shows only frontend, backend, istio-system clusters
  • Connectivity to backend-api still works
🧹 Cleanup
bash
kubectl delete namespace frontend backend
11
Install Istio with Demo Profile
Install Istio using the demo profile for a training environment. Verify all components including egress gateway are deployed.
📋 Prerequisites
bash
kubectl cluster-info
kubectl get pods -n istio-system
🎯 Task
  • Task 1: Install Istio with demo profile
  • Task 2: Verify istiod, ingressgateway, and egressgateway are running
✅ Solution
bash
# Task 1: Install demo profile
istioctl install --set profile=demo -y

# Task 2: Verify components
kubectl get pods -n istio-system
kubectl get deployments -n istio-system

# Verify specific components
kubectl get deployment istiod -n istio-system
kubectl get deployment istio-ingressgateway -n istio-system
kubectl get deployment istio-egressgateway -n istio-system

istioctl version

💡 Exam Tip

demo: istiod + ingress + egress + high trace sampling
default: istiod + ingress only (production)
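
To compare profiles before committing to one, istioctl can list and dump them:

bash
# List the built-in configuration profiles
istioctl profile list

# Show everything the demo profile would install
istioctl profile dump demo

# Diff two profiles to see exactly what changes
istioctl profile diff default demo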

📊 Expected Result

✓ Verification Checklist

  • istiod: READY 1/1
  • istio-ingressgateway: READY 1/1
  • istio-egressgateway: READY 1/1
🧹 Cleanup
bash
istioctl install --set profile=default -y
12
Uninstall Istio Completely
Remove Istio from the cluster completely, including all components, CRDs, and configuration.
📋 Prerequisites
bash
# Verify Istio is installed
kubectl get pods -n istio-system
istioctl version
🎯 Task
  • Task 1: Uninstall Istio using istioctl uninstall --purge
  • Task 2: Delete istio-system namespace
  • Task 3: Remove Istio CRDs
  • Task 4: Verify complete removal
✅ Solution
bash
# Task 1: Uninstall Istio
istioctl uninstall --purge -y

# Task 2: Delete namespace
kubectl delete namespace istio-system

# Task 3: Remove CRDs
kubectl get crd | grep istio.io | awk '{print $1}' | xargs kubectl delete crd

# Task 4: Verify removal
kubectl get namespace istio-system 2>/dev/null || echo "Namespace: REMOVED"
kubectl get crd | grep istio || echo "CRDs: REMOVED"
kubectl get mutatingwebhookconfiguration | grep istio || echo "Webhooks: REMOVED"

โš ๏ธ Important

Before production uninstall: remove injection labels, restart workloads to remove sidecars, backup configurations.
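
A sketch of those pre-uninstall steps for one application namespace (the namespace name is illustrative):

bash
# Stop injecting sidecars into new pods
kubectl label namespace orders istio-injection-

# Recreate existing pods without the sidecar
kubectl rollout restart deployment -n orders

# Back up mesh configuration before removing the CRDs
kubectl get gateways.networking.istio.io,virtualservices.networking.istio.io,destinationrules.networking.istio.io,peerauthentications.security.istio.io,authorizationpolicies.security.istio.io \
  -A -o yaml > istio-config-backup.yaml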

📊 Expected Result

✓ Verification Checklist

  • istio-system namespace: does not exist
  • Istio CRDs: none remaining
  • Istio webhooks: none remaining
🧹 Cleanup

Re-install Istio for subsequent exercises:

bash
istioctl install --set profile=default -y
kubectl get pods -n istio-system
⚙️ Domain 1 Complete: Installation, Upgrade & Configuration
  • Q1: Verify Installation & Component Health
  • Q2: Enable Access Logging (meshConfig)
  • Q3: Namespace Sidecar Injection
  • Q4: Exclude Pod from Injection
  • Q5: Outbound Traffic Policy (REGISTRY_ONLY)
  • Q6: Manual Injection (kube-inject)
  • Q7: Proxy Resource Limits
  • Q8: ServiceEntry for External Services
  • Q9: holdApplicationUntilProxyStarts
  • Q10: Sidecar Resource Scope
  • Q11: Demo Profile Installation
  • Q12: Uninstall Istio
Domain 2 • 35% of Exam • 12 Questions

Traffic Management

Gateway, VirtualService, DestinationRule, traffic shifting, fault injection, and more.

13
Expose Service via Istio Ingress Gateway
Your team has deployed a web application that needs to be accessible from outside the cluster. Configure a Gateway and VirtualService to route external traffic to the application.
📋 Prerequisites

Deploy a sample application to expose via the ingress gateway.

bash
# Create namespace
kubectl create namespace webapp
kubectl label namespace webapp istio-injection=enabled

# Deploy httpbin as sample app
kubectl apply -n webapp -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n webapp --timeout=120s

# Get ingress gateway IP
kubectl get svc istio-ingressgateway -n istio-system
🎯 Task
  • Task 1: Create a Gateway named webapp-gateway listening on port 80 for host webapp.example.com
  • Task 2: Create a VirtualService named webapp-vs that routes traffic from the gateway to the httpbin service
  • Task 3: Test the configuration using curl with Host header
✅ Solution
bash
# Task 1: Create Gateway
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: webapp-gateway
  namespace: webapp
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "webapp.example.com"
EOF

# Task 2: Create VirtualService
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: webapp-vs
  namespace: webapp
spec:
  hosts:
  - "webapp.example.com"
  gateways:
  - webapp-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Verify resources
kubectl get gateway,virtualservice -n webapp

# Task 3: Get ingress IP and test
INGRESS_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# If no external IP (e.g., minikube), use NodePort:
[ -z "$INGRESS_IP" ] && INGRESS_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
INGRESS_PORT=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

# Test with Host header
curl -s -H "Host: webapp.example.com" "http://${INGRESS_IP}:${INGRESS_PORT:-80}/headers" | head -20

💡 Exam Tip

Gateway: Configures the load balancer (ports, hosts, TLS)
VirtualService: Defines routing rules (must reference the gateway)
The selector: istio: ingressgateway binds to the default ingress gateway.
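
For HTTPS, the same Gateway pattern adds a TLS server that references a Kubernetes TLS secret living in the ingress gateway's namespace (istio-system for the default gateway). A sketch, assuming a secret named webapp-credential already exists there:

bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: webapp-gateway-tls
  namespace: webapp
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: webapp-credential   # TLS secret in istio-system (assumed to exist)
    hosts:
    - "webapp.example.com"
EOF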

📊 Expected Result

✓ Verification Checklist

  • Gateway webapp-gateway created
  • VirtualService webapp-vs created
  • curl returns httpbin response with headers
🧹 Cleanup
bash
kubectl delete namespace webapp
14
Configure Traffic Splitting for Canary Deployment
You're rolling out a new version of a service. Configure traffic splitting to send 90% of traffic to v1 and 10% to v2 for canary testing.
📋 Prerequisites

Deploy two versions of a service with version labels.

bash
# Create namespace
kubectl create namespace canary-test
kubectl label namespace canary-test istio-injection=enabled

# Deploy v1
kubectl apply -n canary-test -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: hashicorp/http-echo
        args: ["-text=v1"]
        ports:
        - containerPort: 5678
EOF

# Deploy v2
kubectl apply -n canary-test -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: hashicorp/http-echo
        args: ["-text=v2"]
        ports:
        - containerPort: 5678
EOF

# Create service (selects both versions)
kubectl apply -n canary-test -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 5678
EOF

# Deploy test client
kubectl apply -n canary-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=sleep -n canary-test --timeout=120s
🎯 Task
  • Task 1: Create a DestinationRule with subsets for v1 and v2
  • Task 2: Create a VirtualService with 90/10 traffic split
  • Task 3: Test and verify the traffic distribution
✅ Solution
bash
# Task 1: Create DestinationRule with subsets
kubectl apply -n canary-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: myapp-dr
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

# Task 2: Create VirtualService with 90/10 split
kubectl apply -n canary-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp-vs
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 90
    - destination:
        host: myapp
        subset: v2
      weight: 10
EOF

# Verify resources
kubectl get destinationrule,virtualservice -n canary-test

# Task 3: Test traffic distribution (run 20 requests)
echo "Testing traffic split (20 requests)..."
for i in {1..20}; do
  kubectl exec -n canary-test deploy/sleep -- curl -s myapp:80
done | sort | uniq -c

💡 Exam Tip

DestinationRule subsets: Define named groups of pods using labels
VirtualService weight: Percentage of traffic (must sum to 100)
Traffic split requires BOTH DestinationRule (subsets) AND VirtualService (weights).

📊 Expected Result

✓ Verification Checklist

  • DestinationRule with v1 and v2 subsets created
  • VirtualService with 90/10 weights created
  • ~18 requests return "v1", ~2 requests return "v2"
🧹 Cleanup
bash
kubectl delete namespace canary-test
15
Inject Fault Delay for Resilience Testing
To test how your application handles slow dependencies, inject a 5-second delay into 50% of requests to a specific service.
📋 Prerequisites
bash
# Create namespace and deploy httpbin
kubectl create namespace fault-test
kubectl label namespace fault-test istio-injection=enabled

kubectl apply -n fault-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n fault-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n fault-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n fault-test --timeout=120s

# Test baseline response time
echo "Baseline response time:"
kubectl exec -n fault-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "Status: %{http_code}, Time: %{time_total}s\n"
🎯 Task
  • Task 1: Create a VirtualService that injects a 5s delay for 50% of requests to httpbin
  • Task 2: Test multiple requests and observe the delayed responses
✅ Solution
bash
# Task 1: Create VirtualService with fault delay
kubectl apply -n fault-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-delay
spec:
  hosts:
  - httpbin
  http:
  - fault:
      delay:
        percentage:
          value: 50
        fixedDelay: 5s
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Verify
kubectl get virtualservice -n fault-test

# Task 2: Test multiple requests (observe ~50% with 5s delay)
echo "Testing fault injection (5 requests)..."
for i in {1..5}; do
  echo "Request $i:"
  kubectl exec -n fault-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "Time: %{time_total}s\n"
done

💡 Exam Tip

Fault delay simulates network latency or slow services.
fixedDelay: Exact delay duration
percentage.value: % of requests affected (0-100)

📊 Expected Result

✓ Verification Checklist

  • VirtualService with fault.delay created
  • ~50% of requests take ~5 seconds
  • ~50% of requests respond quickly (normal)
🧹 Cleanup
bash
kubectl delete namespace fault-test
16
Inject HTTP Abort Fault for Error Handling Testing
Test your application's error handling by injecting HTTP 503 errors for 100% of requests to a specific endpoint.
📋 Prerequisites
bash
kubectl create namespace abort-test
kubectl label namespace abort-test istio-injection=enabled

kubectl apply -n abort-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n abort-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n abort-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n abort-test --timeout=120s

# Verify baseline works
kubectl exec -n abort-test deploy/sleep -- curl -sI httpbin:8000/status/200 | head -1
🎯 Task
  • Task 1: Create a VirtualService that returns HTTP 503 for 100% of requests
  • Task 2: Test and verify all requests receive 503 errors
✅ Solution
bash
# Task 1: Create VirtualService with abort fault
kubectl apply -n abort-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-abort
spec:
  hosts:
  - httpbin
  http:
  - fault:
      abort:
        percentage:
          value: 100
        httpStatus: 503
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Task 2: Test (all requests should return 503)
echo "Testing abort fault injection..."
for i in {1..3}; do
  kubectl exec -n abort-test deploy/sleep -- curl -sI httpbin:8000/get | head -1
done

💡 Exam Tip

fault.abort returns an error without calling the service.
Common test codes: 400, 403, 404, 500, 502, 503
Use for testing circuit breakers and retry logic.

📊 Expected Result

✓ Verification Checklist

  • VirtualService with fault.abort created
  • All requests return HTTP/1.1 503 Service Unavailable
🧹 Cleanup
bash
kubectl delete namespace abort-test
17
Configure Request Timeout
Prevent requests from hanging indefinitely by configuring a 3-second timeout for a service. Requests exceeding this duration should be terminated.
📋 Prerequisites
bash
kubectl create namespace timeout-test
kubectl label namespace timeout-test istio-injection=enabled

kubectl apply -n timeout-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n timeout-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n timeout-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n timeout-test --timeout=120s

# Test slow endpoint (5s delay) - should work without timeout
echo "Testing 5s delay endpoint (no timeout configured):"
kubectl exec -n timeout-test deploy/sleep -- curl -s httpbin:8000/delay/5 -o /dev/null -w "Status: %{http_code}, Time: %{time_total}s\n"
🎯 Task
  • Task 1: Create a VirtualService with a 3-second timeout for httpbin
  • Task 2: Test with a request that takes 5 seconds (should timeout)
  • Task 3: Test with a fast request (should succeed)
✅ Solution
bash
# Task 1: Create VirtualService with 3s timeout
kubectl apply -n timeout-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-timeout
spec:
  hosts:
  - httpbin
  http:
  - timeout: 3s
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Task 2: Test slow endpoint (5s) - should timeout after 3s
echo "Testing 5s delay with 3s timeout (should fail):"
kubectl exec -n timeout-test deploy/sleep -- curl -s httpbin:8000/delay/5 -o /dev/null -w "Status: %{http_code}, Time: %{time_total}s\n"

# Task 3: Test fast endpoint - should succeed
echo "Testing fast endpoint (should succeed):"
kubectl exec -n timeout-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "Status: %{http_code}, Time: %{time_total}s\n"

💡 Exam Tip

timeout applies to the entire request duration.
Timed-out requests return 504 Gateway Timeout.
Always set timeouts in production to prevent resource exhaustion.

📊 Expected Result

✓ Verification Checklist

  • 5s delay request returns 504 after ~3 seconds
  • Fast request returns 200 quickly
🧹 Cleanup
bash
kubectl delete namespace timeout-test
18
Configure Automatic Retries
Improve service reliability by configuring automatic retries. Set up 3 retry attempts for 5xx errors with a per-retry timeout.
📋 Prerequisites
bash
kubectl create namespace retry-test
kubectl label namespace retry-test istio-injection=enabled

kubectl apply -n retry-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n retry-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n retry-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n retry-test --timeout=120s
🎯 Task
  • Task 1: Create a VirtualService with retry policy: 3 attempts, 2s per-try timeout, retry on 5xx errors
  • Task 2: Verify retry configuration
✅ Solution
bash
# Task 1: Create VirtualService with retries
kubectl apply -n retry-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-retry
spec:
  hosts:
  - httpbin
  http:
  - retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,reset,connect-failure,retriable-4xx
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Verify configuration
kubectl get virtualservice httpbin-retry -n retry-test -o yaml | grep -A5 retries

# Task 2: Test with normal request
kubectl exec -n retry-test deploy/sleep -- curl -s httpbin:8000/get | head -5

# Test with 500 error endpoint (Istio will retry)
echo "Testing 500 endpoint (retries will occur but still fail):"
kubectl exec -n retry-test deploy/sleep -- curl -sI httpbin:8000/status/500 | head -1

💡 Exam Tip

attempts: Max retry count (not including original request)
perTryTimeout: Timeout for each attempt
retryOn: Conditions to trigger retry (5xx, reset, connect-failure, etc.)

📊 Expected Result

✓ Verification Checklist

  • VirtualService shows retry config with attempts: 3
  • Normal requests succeed
  • 500 errors still fail (but retries were attempted)
🧹 Cleanup
bash
kubectl delete namespace retry-test
19
Configure Circuit Breaker with Connection Pool
Protect your service from being overwhelmed by configuring circuit breaker settings. Limit concurrent connections and pending requests.
📋 Prerequisites
bash
kubectl create namespace circuit-test
kubectl label namespace circuit-test istio-injection=enabled

kubectl apply -n circuit-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n circuit-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n circuit-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n circuit-test --timeout=120s
🎯 Task
  • Task 1: Create a DestinationRule with circuit breaker: max 1 connection, max 1 pending request
  • Task 2: Test by sending concurrent requests to trigger the circuit breaker
✅ Solution
bash
# Task 1: Create DestinationRule with circuit breaker
kubectl apply -n circuit-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-cb
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        http2MaxRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 3
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 100
EOF

# Verify configuration
kubectl get destinationrule httpbin-cb -n circuit-test -o yaml | grep -A10 connectionPool

# Task 2: Test with concurrent requests (using fortio if available, or simple loop)
echo "Testing circuit breaker with rapid requests..."
for i in {1..10}; do
  kubectl exec -n circuit-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n" &
done
wait

💡 Exam Tip

connectionPool: Limits connections and requests
outlierDetection: Ejects unhealthy hosts (true circuit breaker)
When limits exceeded: 503 with flag UO (upstream overflow)
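
Rapid sequential curls rarely trip the breaker; concurrent connections do. The Istio docs use fortio for this, and a sample client ships alongside the httpbin sample. A sketch:

bash
# Deploy the fortio sample client
kubectl apply -n circuit-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/sample-client/fortio-deploy.yaml
kubectl wait --for=condition=ready pod -l app=fortio -n circuit-test --timeout=120s

# 3 concurrent connections, 30 requests: expect a mix of 200s and 503s once limits are exceeded
kubectl exec -n circuit-test deploy/fortio-deploy -c fortio -- \
  /usr/bin/fortio load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get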

📊 Expected Result

✓ Verification Checklist

  • DestinationRule with connectionPool settings created
  • Some requests return 503 when circuit breaker trips
🧹 Cleanup
bash
kubectl delete namespace circuit-test
20
Configure Traffic Mirroring
Test a new version of your service with production traffic without impacting users. Mirror 100% of traffic from v1 to v2 while users only see v1 responses.
📋 Prerequisites
bash
kubectl create namespace mirror-test
kubectl label namespace mirror-test istio-injection=enabled

# Deploy v1 and v2 httpbin instances
kubectl apply -n mirror-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

# Create v2 deployment
kubectl apply -n mirror-test -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v2
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - name: httpbin
        image: docker.io/kong/httpbin
        ports:
        - containerPort: 80
EOF

# Label original deployment as v1
kubectl patch deployment httpbin -n mirror-test --type merge -p '{"spec":{"template":{"metadata":{"labels":{"version":"v1"}}}}}'

kubectl apply -n mirror-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n mirror-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n mirror-test --timeout=120s
🎯 Task
  • Task 1: Create DestinationRule with v1 and v2 subsets
  • Task 2: Create VirtualService that routes to v1 and mirrors to v2
  • Task 3: Generate traffic and verify mirroring in v2 logs
✅ Solution
bash
# Task 1: Create DestinationRule
kubectl apply -n mirror-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-dr
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

# Task 2: Create VirtualService with mirroring
kubectl apply -n mirror-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-mirror
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
    mirror:
      host: httpbin
      subset: v2
    mirrorPercentage:
      value: 100
EOF

# Task 3: Generate traffic
echo "Generating traffic..."
for i in {1..5}; do
  kubectl exec -n mirror-test deploy/sleep -- curl -s httpbin:8000/headers -o /dev/null
done

# Check v2 logs for mirrored requests
echo "Checking v2 logs for mirrored traffic:"
kubectl logs -n mirror-test deploy/httpbin-v2 --tail=10

💡 Exam Tip

mirror: Destination for mirrored traffic
mirrorPercentage: % of traffic to mirror (default 100%)
Mirrored requests are fire-and-forget (responses ignored).

📊 Expected Result

✓ Verification Checklist

  • VirtualService with mirror configuration created
  • Users get responses from v1 only
  • v2 logs show incoming mirrored requests
🧹 Cleanup
bash
kubectl delete namespace mirror-test
21
Route Traffic Based on HTTP Headers
Implement header-based routing to direct requests with a specific header to a different version. Route requests with header x-user-type: premium to v2, all others to v1.
📋 Prerequisites
bash
kubectl create namespace header-test
kubectl label namespace header-test istio-injection=enabled

# Deploy two versions
kubectl apply -n header-test -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: app
        image: hashicorp/http-echo
        args: ["-text=v1-standard"]
        ports:
        - containerPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: app
        image: hashicorp/http-echo
        args: ["-text=v2-premium"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 5678
EOF

kubectl apply -n header-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=myapp -n header-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n header-test --timeout=120s
🎯 Task
  • Task 1: Create DestinationRule with v1 and v2 subsets
  • Task 2: Create VirtualService with header-based match: x-user-type: premium → v2, default → v1
  • Task 3: Test with and without the header
✅ Solution
bash
# Task 1: Create DestinationRule
kubectl apply -n header-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: myapp-dr
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

# Task 2: Create VirtualService with header match
kubectl apply -n header-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp-vs
spec:
  hosts:
  - myapp
  http:
  - match:
    - headers:
        x-user-type:
          exact: premium
    route:
    - destination:
        host: myapp
        subset: v2
  - route:
    - destination:
        host: myapp
        subset: v1
EOF

# Task 3: Test routing
echo "Request WITHOUT header (should get v1):"
kubectl exec -n header-test deploy/sleep -- curl -s myapp:80

echo ""
echo "Request WITH premium header (should get v2):"
kubectl exec -n header-test deploy/sleep -- curl -s -H "x-user-type: premium" myapp:80

💡 Exam Tip

Header match types: exact, prefix, regex
Match rules are evaluated in order - first match wins.
Always have a default route at the end (no match condition).

📊 Expected Result

✓ Verification Checklist

  • Request without header returns: v1-standard
  • Request with x-user-type: premium returns: v2-premium
🧹 Cleanup
bash
kubectl delete namespace header-test
22
Route Traffic Based on URI Path
Route requests to different services based on the URI path. Send /api/v1/* to service-v1 and /api/v2/* to service-v2.
📋 Prerequisites
bash
kubectl create namespace uri-test
kubectl label namespace uri-test istio-injection=enabled

# Deploy httpbin as backend
kubectl apply -n uri-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n uri-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n uri-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n uri-test --timeout=120s
🎯 Task
  • Task 1: Create a VirtualService with URI prefix matching
  • Task 2: Configure path rewrite so /api/v1/status becomes /status
  • Task 3: Test the URI-based routing
✅ Solution
bash
# Task 1 & 2: Create VirtualService with URI match and rewrite
kubectl apply -n uri-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-uri
spec:
  hosts:
  - httpbin
  http:
  - match:
    - uri:
        prefix: /api/v1/
    rewrite:
      uri: /
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
  - match:
    - uri:
        prefix: /api/v2/
    rewrite:
      uri: /
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Task 3: Test URI routing
echo "Testing /api/v1/get (rewrites to /get):"
kubectl exec -n uri-test deploy/sleep -- curl -s httpbin:8000/api/v1/get | head -5

echo ""
echo "Testing /api/v2/headers (rewrites to /headers):"
kubectl exec -n uri-test deploy/sleep -- curl -s httpbin:8000/api/v2/headers | head -5

echo ""
echo "Testing /get directly:"
kubectl exec -n uri-test deploy/sleep -- curl -s httpbin:8000/get | head -5

💡 Exam Tip

URI match types: exact, prefix, regex
rewrite.uri: Replace matched path before sending to destination
Prefix /api/v1/ with rewrite / maps /api/v1/get → /get

📊 Expected Result

✓ Verification Checklist

  • /api/v1/get returns httpbin /get response
  • /api/v2/headers returns httpbin /headers response
  • Path rewriting works correctly
🧹 Cleanup
bash
kubectl delete namespace uri-test
23
Configure Load Balancing Algorithm
Change the default load balancing algorithm from round-robin to least connections for a service that has varying request processing times.
📋 Prerequisites
bash
kubectl create namespace lb-test
kubectl label namespace lb-test istio-injection=enabled

kubectl apply -n lb-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n lb-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n lb-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n lb-test --timeout=120s
🎯 Task
  • Task 1: Create a DestinationRule with LEAST_REQUEST load balancing
  • Task 2: Verify the configuration
✅ Solution
bash
# Task 1: Create DestinationRule with load balancing
kubectl apply -n lb-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-lb
spec:
  host: httpbin
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
EOF

# Task 2: Verify configuration
kubectl get destinationrule httpbin-lb -n lb-test -o yaml | grep -A3 loadBalancer

# Test requests
for i in {1..5}; do
  kubectl exec -n lb-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "Request $i: %{http_code}\n"
done

💡 Exam Tip

Load balancing algorithms:
• ROUND_ROBIN (default): Rotate through endpoints
• LEAST_REQUEST: Send to endpoint with fewest active requests
• RANDOM: Random endpoint selection
• PASSTHROUGH: Direct connection (no load balancing)

📊 Expected Result

✓ Verification Checklist

  • DestinationRule shows simple: LEAST_REQUEST
  • All requests return 200
🧹 Cleanup
bash
kubectl delete namespace lb-test
24
Configure Sticky Sessions with Consistent Hashing
For a stateful application, configure session affinity so requests from the same user always go to the same backend pod using consistent hash based on a header.
📋 Prerequisites
bash
kubectl create namespace sticky-test
kubectl label namespace sticky-test istio-injection=enabled

# Deploy multiple replicas
kubectl apply -n sticky-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl scale deployment httpbin -n sticky-test --replicas=3

kubectl apply -n sticky-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n sticky-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n sticky-test --timeout=120s
🎯 Task
  • Task 1: Create DestinationRule with consistent hash based on x-user-id header
  • Task 2: Test that same user-id always routes to same pod
✅ Solution
bash
# Task 1: Create DestinationRule with consistent hash
kubectl apply -n sticky-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-sticky
spec:
  host: httpbin
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-user-id
EOF

# Task 2: Test sticky sessions
echo "Requests with x-user-id: user-123 (should hit same pod):"
for i in {1..3}; do
  kubectl exec -n sticky-test deploy/sleep -- curl -s -H "x-user-id: user-123" httpbin:8000/headers | grep -i "pod\|host"
done

echo ""
echo "Requests with x-user-id: user-456 (may hit different pod):"
for i in {1..3}; do
  kubectl exec -n sticky-test deploy/sleep -- curl -s -H "x-user-id: user-456" httpbin:8000/headers | grep -i "pod\|host"
done

💡 Exam Tip

Consistent hash options:
• httpHeaderName: Hash based on header value
• httpCookie: Hash based on cookie
• useSourceIp: Hash based on client IP
• httpQueryParameterName: Hash based on query param
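
A sketch of the cookie-based variant, where Envoy mints the affinity cookie itself (cookie name and TTL are illustrative). Keep only one DestinationRule per host in practice, so treat this as a replacement for the header-based rule above rather than an addition:

bash
kubectl apply -n sticky-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-sticky
spec:
  host: httpbin
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: session-affinity   # cookie Envoy sets and hashes on
          ttl: 3600s
EOF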

📊 Expected Result

✓ Verification Checklist

  • Same x-user-id consistently routes to same backend
  • Different user-ids may route to different backends
🧹 Cleanup
bash
kubectl delete namespace sticky-test
🔀 Domain 2 Complete: Traffic Management
  • Q13: Gateway & VirtualService for Ingress
  • Q14: Traffic Splitting (Canary)
  • Q15: Fault Injection - Delay
  • Q16: Fault Injection - Abort
  • Q17: Request Timeout
  • Q18: Automatic Retries
  • Q19: Circuit Breaker
  • Q20: Traffic Mirroring
  • Q21: Header-Based Routing
  • Q22: URI Path Routing & Rewrite
  • Q23: Load Balancing Algorithm
  • Q24: Sticky Sessions (Consistent Hash)
Domain 3 • 25% of Exam • 12 Questions

Securing Workloads

mTLS, PeerAuthentication, AuthorizationPolicy, JWT, Gateway TLS.

25
Enable Strict mTLS for a Namespace
Your security team requires all traffic within the payments namespace to be encrypted. Configure strict mutual TLS to reject any plaintext traffic.
📋 Prerequisites
bash
# Create namespace with sidecar injection
kubectl create namespace payments
kubectl label namespace payments istio-injection=enabled

# Deploy test services
kubectl apply -n payments -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n payments -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n payments --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n payments --timeout=120s

# Create a non-mesh client (no sidecar) in default namespace
kubectl run curl-no-mesh --image=curlimages/curl --command -- sleep 3600
kubectl wait --for=condition=ready pod/curl-no-mesh --timeout=60s
๐ŸŽฏ Task
  • Task 1: Create a PeerAuthentication policy named payments-strict that enforces STRICT mTLS for the payments namespace
  • Task 2: Verify mesh clients (with sidecar) can still communicate
  • Task 3: Verify non-mesh clients (without sidecar) are rejected
โœ… Solution
bash
# Task 1: Create PeerAuthentication with STRICT mTLS
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: payments-strict
  namespace: payments
spec:
  mtls:
    mode: STRICT
EOF

# Verify PeerAuthentication
kubectl get peerauthentication -n payments

# Task 2: Test from mesh client (should succeed)
echo "Testing from mesh client (with sidecar):"
kubectl exec -n payments deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

# Task 3: Test from non-mesh client (should fail)
echo "Testing from non-mesh client (no sidecar):"
kubectl exec curl-no-mesh -- curl -s httpbin.payments:8000/get --max-time 5 -o /dev/null -w "%{http_code}\n" 2>&1 || echo "Connection rejected (expected)"

๐Ÿ’ก Exam Tip

mTLS modes:
โ€ข PERMISSIVE (default): Accept both mTLS and plaintext
โ€ข STRICT: Only accept mTLS connections
โ€ข DISABLE: Only accept plaintext
โ€ข UNSET: Inherit from parent scope

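A quick way to confirm the traffic really used mTLS: the receiving sidecar adds an X-Forwarded-Client-Cert header carrying the caller's SPIFFE identity, and httpbin echoes request headers back. The header only appears when the request arrived over mTLS:
bash
kubectl exec -n payments deploy/sleep -- curl -s httpbin:8000/headers | grep -i x-forwarded-client-cert
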
๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • PeerAuthentication payments-strict created with mode: STRICT
  • Mesh client returns 200
  • Non-mesh client connection fails/times out
๐Ÿงน Cleanup
bash
kubectl delete namespace payments
kubectl delete pod curl-no-mesh
26
Enable Mesh-Wide Strict mTLS
Implement a zero-trust security model by enabling strict mTLS across the entire service mesh. All services must communicate using mutual TLS.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
# Create test namespaces
kubectl create namespace app-a
kubectl create namespace app-b
kubectl label namespace app-a istio-injection=enabled
kubectl label namespace app-b istio-injection=enabled

# Deploy test services
kubectl apply -n app-a -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl apply -n app-b -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

kubectl wait --for=condition=ready pod -l app=sleep -n app-a --timeout=120s
kubectl wait --for=condition=ready pod -l app=httpbin -n app-b --timeout=120s
๐ŸŽฏ Task
  • Task 1: Create a mesh-wide PeerAuthentication policy in istio-system namespace
  • Task 2: Verify cross-namespace communication still works with mTLS
  • Task 3: Check the mTLS status using istioctl
โœ… Solution
bash
# Task 1: Create mesh-wide PeerAuthentication
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF

# Verify
kubectl get peerauthentication -n istio-system

# Task 2: Test cross-namespace communication
echo "Testing cross-namespace communication:"
kubectl exec -n app-a deploy/sleep -- curl -s httpbin.app-b:8000/get -o /dev/null -w "%{http_code}\n"

# Task 3: Check mTLS status
echo "Checking mTLS status:"
istioctl x describe pod $(kubectl get pod -n app-b -l app=httpbin -o jsonpath='{.items[0].metadata.name}') -n app-b | grep -i tls

๐Ÿ’ก Exam Tip

Mesh-wide policy must be in istio-system namespace with name default.
Hierarchy: Workload > Namespace > Mesh-wide
More specific policies override broader ones.

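To illustrate the hierarchy, a workload-level policy overrides the mesh-wide default for just the selected workload. A minimal sketch (illustration only, not required by the tasks above) that would relax httpbin in app-b back to PERMISSIVE:
bash
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: httpbin-override
  namespace: app-b
spec:
  selector:
    matchLabels:
      app: httpbin       # workload scope wins over namespace and mesh-wide policies
  mtls:
    mode: PERMISSIVE     # this workload accepts plaintext again despite mesh-wide STRICT
EOF
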
๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • PeerAuthentication in istio-system with name default
  • Cross-namespace request returns 200
  • istioctl shows mTLS is enforced
๐Ÿงน Cleanup
bash
kubectl delete peerauthentication default -n istio-system
kubectl delete namespace app-a app-b
๐Ÿ“š References
27
Disable mTLS for Specific Port
A legacy health check system cannot use mTLS. Configure PeerAuthentication to use STRICT mTLS but disable it for the health check port 8080.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace port-mtls
kubectl label namespace port-mtls istio-injection=enabled

kubectl apply -n port-mtls -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n port-mtls -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n port-mtls --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n port-mtls --timeout=120s
๐ŸŽฏ Task
  • Task 1: Create PeerAuthentication with STRICT mTLS but DISABLE for port 8080
  • Task 2: Verify the port-level override configuration
โœ… Solution
bash
# Task 1: Create PeerAuthentication with port exception
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: httpbin-port-exception
  namespace: port-mtls
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: STRICT
  portLevelMtls:
    8080:
      mode: DISABLE
EOF

# Task 2: Verify configuration
kubectl get peerauthentication -n port-mtls -o yaml | grep -A5 portLevelMtls

# Test main port (8000 - STRICT mTLS)
echo "Testing port 8000 (STRICT mTLS):"
kubectl exec -n port-mtls deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

๐Ÿ’ก Exam Tip

portLevelMtls allows different mTLS modes per port; the port number refers to the workload's container port, not the Kubernetes Service port.
Use cases: health checks, metrics endpoints, legacy integrations.
Port-level settings only take effect when a workload selector is set.

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • PeerAuthentication shows portLevelMtls with 8080: DISABLE
  • Main port communication works with mTLS
๐Ÿงน Cleanup
bash
kubectl delete namespace port-mtls
๐Ÿ“š References
28
Create DENY Authorization Policy
Block all traffic to a sensitive service by default. Create an AuthorizationPolicy that denies all requests to the database service.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace authz-deny
kubectl label namespace authz-deny istio-injection=enabled

# Deploy "database" service (using httpbin)
kubectl apply -n authz-deny -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: database
        image: docker.io/kong/httpbin
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: database
  ports:
  - port: 80
EOF

kubectl apply -n authz-deny -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=database -n authz-deny --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n authz-deny --timeout=120s

# Verify access works before policy
echo "Before policy - should succeed:"
kubectl exec -n authz-deny deploy/sleep -- curl -s database/get -o /dev/null -w "%{http_code}\n"
๐ŸŽฏ Task
  • Task 1: Create an AuthorizationPolicy that denies ALL requests to the database service
  • Task 2: Verify requests are now blocked with 403 Forbidden
โœ… Solution
bash
# Task 1: Create DENY policy
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-all-database
  namespace: authz-deny
spec:
  selector:
    matchLabels:
      app: database
  action: DENY
  rules:
  - {}
EOF

# Task 2: Verify access is denied
echo "After DENY policy - should return 403:"
kubectl exec -n authz-deny deploy/sleep -- curl -s database/get -o /dev/null -w "%{http_code}\n"

๐Ÿ’ก Exam Tip

action: DENY with a single empty rule (- {}) matches ALL requests.
DENY policies are evaluated before ALLOW policies.
Blocked requests return 403 Forbidden with RBAC: access denied.

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • AuthorizationPolicy with action: DENY created
  • Requests return 403 Forbidden
๐Ÿงน Cleanup
bash
kubectl delete namespace authz-deny
๐Ÿ“š References
29
Create ALLOW Authorization Policy Based on Source
Allow only specific services to access the backend API. Create an AuthorizationPolicy that only allows requests from the frontend service account.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace authz-allow
kubectl label namespace authz-allow istio-injection=enabled

# Create service accounts
kubectl create serviceaccount frontend -n authz-allow
kubectl create serviceaccount other -n authz-allow

# Deploy backend API
kubectl apply -n authz-allow -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

# Deploy frontend client (with frontend SA)
kubectl apply -n authz-allow -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      serviceAccountName: frontend
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3600"]
EOF

# Deploy other client (with other SA)
kubectl apply -n authz-allow -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: other-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: other-client
  template:
    metadata:
      labels:
        app: other-client
    spec:
      serviceAccountName: other
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3600"]
EOF

kubectl wait --for=condition=ready pod -l app=httpbin -n authz-allow --timeout=120s
kubectl wait --for=condition=ready pod -l app=frontend -n authz-allow --timeout=120s
kubectl wait --for=condition=ready pod -l app=other-client -n authz-allow --timeout=120s
๐ŸŽฏ Task
  • Task 1: Create AuthorizationPolicy allowing only frontend service account to access httpbin
  • Task 2: Verify frontend can access httpbin
  • Task 3: Verify other-client is denied
โœ… Solution
bash
# Task 1: Create ALLOW policy for frontend SA
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-only
  namespace: authz-allow
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/authz-allow/sa/frontend"]
EOF

# Task 2: Test from frontend (should succeed)
echo "Testing from frontend SA:"
kubectl exec -n authz-allow deploy/frontend -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

# Task 3: Test from other-client (should fail)
echo "Testing from other SA:"
kubectl exec -n authz-allow deploy/other-client -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

๐Ÿ’ก Exam Tip

Service account principal format: cluster.local/ns/{namespace}/sa/{service-account}
When ANY ALLOW policy exists, requests not matching any ALLOW rule are denied.
Use principals: ["*"] to allow any authenticated identity.

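For reference, the "any authenticated identity" pattern looks like the sketch below: any workload presenting a valid mTLS identity is admitted, while unauthenticated callers are still rejected. Don't apply it here if you want the Task 3 result (403 for other-client) to hold, since it would also allow that client:
bash
# Illustration only - this would also allow other-client
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-any-authenticated
  namespace: authz-allow
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["*"]   # any mTLS-authenticated service account
EOF
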
๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • Frontend returns 200
  • Other-client returns 403
๐Ÿงน Cleanup
bash
kubectl delete namespace authz-allow
30
Create Authorization Policy Based on HTTP Method and Path
Implement fine-grained access control. Allow GET requests to /get from anyone, but restrict POST requests to /post to admin service account only.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace authz-http
kubectl label namespace authz-http istio-injection=enabled

kubectl create serviceaccount admin -n authz-http
kubectl create serviceaccount user -n authz-http

# Deploy API server
kubectl apply -n authz-http -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

# Deploy admin client
kubectl apply -n authz-http -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: admin-client
  template:
    metadata:
      labels:
        app: admin-client
    spec:
      serviceAccountName: admin
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3600"]
EOF

# Deploy user client
kubectl apply -n authz-http -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-client
  template:
    metadata:
      labels:
        app: user-client
    spec:
      serviceAccountName: user
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3600"]
EOF

kubectl wait --for=condition=ready pod -l app=httpbin -n authz-http --timeout=120s
kubectl wait --for=condition=ready pod -l app=admin-client -n authz-http --timeout=120s
kubectl wait --for=condition=ready pod -l app=user-client -n authz-http --timeout=120s
๐ŸŽฏ Task
  • Task 1: Create AuthorizationPolicy allowing GET on /get from anyone
  • Task 2: Allow POST on /post only from admin SA
  • Task 3: Test the policies
โœ… Solution
bash
# Task 1 & 2: Create combined AuthorizationPolicy
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: api-access
  namespace: authz-http
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
        paths: ["/get", "/headers", "/ip"]
  - from:
    - source:
        principals: ["cluster.local/ns/authz-http/sa/admin"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/post"]
EOF

# Task 3: Test the policies
echo "User GET /get (should succeed):"
kubectl exec -n authz-http deploy/user-client -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

echo "User POST /post (should fail):"
kubectl exec -n authz-http deploy/user-client -- curl -s -X POST httpbin:8000/post -o /dev/null -w "%{http_code}\n"

echo "Admin POST /post (should succeed):"
kubectl exec -n authz-http deploy/admin-client -- curl -s -X POST httpbin:8000/post -o /dev/null -w "%{http_code}\n"

๐Ÿ’ก Exam Tip

to.operation matches request attributes: methods, paths, hosts, ports
from.source matches caller attributes: principals, namespaces, ipBlocks
Multiple rules in same policy = OR logic. Conditions within rule = AND logic.

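Rules can also carry when conditions, which are AND-ed with from and to inside the same rule. A minimal sketch (the x-env header and the DELETE restriction are invented for illustration and are not part of the tasks above):
bash
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: delete-guard        # hypothetical policy for illustration
  namespace: authz-http
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["DELETE"]         # AND-ed with the condition below
    when:
    - key: request.headers[x-env]   # condition on a request header
      values: ["staging"]
EOF
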
๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • User GET /get: 200
  • User POST /post: 403
  • Admin POST /post: 200
๐Ÿงน Cleanup
bash
kubectl delete namespace authz-http
๐Ÿ“š References
31
Configure JWT Authentication with RequestAuthentication
Secure your API with JWT authentication. Configure RequestAuthentication to validate JWTs from a specific issuer and require valid tokens for access.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace jwt-test
kubectl label namespace jwt-test istio-injection=enabled

kubectl apply -n jwt-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n jwt-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n jwt-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n jwt-test --timeout=120s
๐ŸŽฏ Task
  • Task 1: Create a RequestAuthentication that validates JWTs from issuer testing@secure.istio.io
  • Task 2: Create an AuthorizationPolicy that requires valid JWT
  • Task 3: Test with and without a valid JWT
โœ… Solution
bash
# Task 1: Create RequestAuthentication
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: jwt-test
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.22/security/tools/jwt/samples/jwks.json"
EOF

# Task 2: Create AuthorizationPolicy requiring JWT
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: jwt-test
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
EOF

# Get sample token
TOKEN=$(curl -s https://raw.githubusercontent.com/istio/istio/release-1.22/security/tools/jwt/samples/demo.jwt)

# Task 3: Test without JWT (should fail)
echo "Request without JWT:"
kubectl exec -n jwt-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

# Test with valid JWT (should succeed)
echo "Request with valid JWT:"
kubectl exec -n jwt-test deploy/sleep -- curl -s -H "Authorization: Bearer $TOKEN" httpbin:8000/get -o /dev/null -w "%{http_code}\n"

๐Ÿ’ก Exam Tip

RequestAuthentication: Validates JWT format and signature
AuthorizationPolicy with requestPrincipals: Requires valid JWT
RequestAuthentication alone only rejects INVALID tokens (401); requests with no token pass through until an AuthorizationPolicy requires a requestPrincipal.
Principal format: {issuer}/{subject}

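To see the 401-vs-403 difference, also send a garbage token: RequestAuthentication rejects the invalid token with 401 before any AuthorizationPolicy is consulted, whereas the missing-token request above gets a 403 from the require-jwt policy:
bash
# Invalid token -> 401 from RequestAuthentication
kubectl exec -n jwt-test deploy/sleep -- curl -s -H "Authorization: Bearer not-a-real-token" httpbin:8000/get -o /dev/null -w "%{http_code}\n"
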
๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • Request without JWT: 403
  • Request with valid JWT: 200
๐Ÿงน Cleanup
bash
kubectl delete namespace jwt-test
32
Configure TLS on Istio Ingress Gateway
Secure external traffic to your application by configuring TLS termination on the Istio ingress gateway using a certificate and key.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
# Create namespace and deploy app
kubectl create namespace tls-test
kubectl label namespace tls-test istio-injection=enabled

kubectl apply -n tls-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n tls-test --timeout=120s

# Generate self-signed certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -subj "/CN=httpbin.example.com/O=example"

# Create TLS secret in istio-system (for gateway)
kubectl create secret tls httpbin-credential \
  --key=/tmp/tls.key --cert=/tmp/tls.crt \
  -n istio-system
๐ŸŽฏ Task
  • Task 1: Create a Gateway with TLS mode SIMPLE using the certificate
  • Task 2: Create a VirtualService routing to httpbin
  • Task 3: Test HTTPS access to the service
โœ… Solution
bash
# Task 1: Create Gateway with TLS
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: tls-test
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: httpbin-credential
    hosts:
    - "httpbin.example.com"
EOF

# Task 2: Create VirtualService
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-vs
  namespace: tls-test
spec:
  hosts:
  - "httpbin.example.com"
  gateways:
  - httpbin-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Get ingress IP
INGRESS_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
[ -z "$INGRESS_IP" ] && INGRESS_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
SECURE_PORT=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')

# Task 3: Test HTTPS access
echo "Testing HTTPS access:"
curl -sk --resolve "httpbin.example.com:${SECURE_PORT:-443}:${INGRESS_IP}" \
  "https://httpbin.example.com:${SECURE_PORT:-443}/get" | head -10

๐Ÿ’ก Exam Tip

TLS modes:
โ€ข SIMPLE: Standard TLS (server cert only)
โ€ข MUTUAL: mTLS (client + server certs)
โ€ข PASSTHROUGH: SNI-based routing, no termination
The credential secret must live in the same namespace as the gateway workload (istio-system for the default ingress gateway).

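For MUTUAL mode the gateway must also be able to verify client certificates, so the credential is created as a generic secret that includes a ca.crt entry instead of a plain TLS secret. A minimal sketch (the /tmp/ca.crt path is an assumption; it must be the CA that signed your client certificates):
bash
# Hypothetical credential for tls.mode: MUTUAL on the gateway
kubectl create secret generic httpbin-mutual-credential \
  --from-file=tls.key=/tmp/tls.key \
  --from-file=tls.crt=/tmp/tls.crt \
  --from-file=ca.crt=/tmp/ca.crt \
  -n istio-system
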
๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • Gateway with TLS mode SIMPLE created
  • HTTPS request returns httpbin response
๐Ÿงน Cleanup
bash
kubectl delete namespace tls-test
kubectl delete secret httpbin-credential -n istio-system
rm -f /tmp/tls.key /tmp/tls.crt
๐Ÿ“š References
33
Verify Workload Certificate Information
Inspect the mTLS certificates used by workloads to verify their identity and expiration. Use istioctl to examine the certificate chain.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace cert-test
kubectl label namespace cert-test istio-injection=enabled

kubectl apply -n cert-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n cert-test --timeout=120s
๐ŸŽฏ Task
  • Task 1: Use istioctl proxy-config secret to view certificate information
  • Task 2: Verify the SPIFFE identity of the workload
  • Task 3: Check certificate validity
โœ… Solution
bash
# Get pod name
POD=$(kubectl get pod -n cert-test -l app=httpbin -o jsonpath='{.items[0].metadata.name}')

# Task 1: View certificate secrets
echo "=== Certificate Secrets ==="
istioctl proxy-config secret $POD -n cert-test

# Task 2 & 3: View detailed certificate info
echo ""
echo "=== Certificate Details ==="
istioctl proxy-config secret $POD -n cert-test -o json | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 -d | openssl x509 -text -noout | head -30

# Alternative: Use istioctl x describe
echo ""
echo "=== Workload Description ==="
istioctl x describe pod $POD -n cert-test | grep -A5 "Certificate"

๐Ÿ’ก Exam Tip

SPIFFE ID format: spiffe://cluster.local/ns/{namespace}/sa/{service-account}
Certificates are automatically rotated by Istio (default 24h validity).
Use istioctl proxy-config secret to verify mTLS setup.

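To pull out just the identity, filter the decoded certificate for its Subject Alternative Name; the URI SAN should read spiffe://cluster.local/ns/cert-test/sa/httpbin (this reuses the $POD variable from the solution above):
bash
istioctl proxy-config secret $POD -n cert-test -o json \
  | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' \
  | base64 -d | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
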
๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • proxy-config shows ROOTCA and default certificate
  • Certificate shows SPIFFE URI in SAN
  • Certificate validity is shown
๐Ÿงน Cleanup
bash
kubectl delete namespace cert-test
๐Ÿ“š References
34
Allow Traffic from Specific Namespace
Configure an AuthorizationPolicy to allow traffic only from workloads in a specific namespace. Services in "allowed" namespace can access backend, but "blocked" cannot.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
# Create namespaces
kubectl create namespace backend
kubectl create namespace allowed
kubectl create namespace blocked

kubectl label namespace backend istio-injection=enabled
kubectl label namespace allowed istio-injection=enabled
kubectl label namespace blocked istio-injection=enabled

# Deploy backend
kubectl apply -n backend -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

# Deploy clients
kubectl apply -n allowed -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl apply -n blocked -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n backend --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n allowed --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n blocked --timeout=120s
๐ŸŽฏ Task
  • Task 1: Create AuthorizationPolicy allowing only traffic from namespace "allowed"
  • Task 2: Verify "allowed" can access backend
  • Task 3: Verify "blocked" cannot access backend
โœ… Solution
bash
# Task 1: Create AuthorizationPolicy
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-from-namespace
  namespace: backend
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["allowed"]
EOF

# Task 2: Test from allowed namespace (should succeed)
echo "From 'allowed' namespace:"
kubectl exec -n allowed deploy/sleep -- curl -s httpbin.backend:8000/get -o /dev/null -w "%{http_code}\n"

# Task 3: Test from blocked namespace (should fail)
echo "From 'blocked' namespace:"
kubectl exec -n blocked deploy/sleep -- curl -s httpbin.backend:8000/get -o /dev/null -w "%{http_code}\n"

๐Ÿ’ก Exam Tip

source.namespaces: Match by source namespace
source.principals: Match by service account
source.ipBlocks: Match by source IP CIDR

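For completeness, an ipBlocks rule looks like the sketch below. The CIDR is a placeholder, and source-IP matching is only reliable when the original client IP reaches the sidecar (e.g. direct pod-to-pod traffic). Applying it here would widen access beyond the "allowed" namespace, so treat it as an illustration only:
bash
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-from-cidr       # illustration only
  namespace: backend
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["10.0.0.0/16"]   # placeholder CIDR
EOF
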
๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • Request from "allowed": 200
  • Request from "blocked": 403
๐Ÿงน Cleanup
bash
kubectl delete namespace backend allowed blocked
๐Ÿ“š References
35
Create Default Deny-All Authorization Policy
Implement zero-trust security by creating a default deny-all policy for a namespace. No traffic is allowed unless explicitly permitted.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace zero-trust
kubectl label namespace zero-trust istio-injection=enabled

kubectl apply -n zero-trust -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n zero-trust -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n zero-trust --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n zero-trust --timeout=120s

# Verify traffic works before policy
echo "Before deny-all policy:"
kubectl exec -n zero-trust deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"
๐ŸŽฏ Task
  • Task 1: Create a deny-all AuthorizationPolicy for the namespace
  • Task 2: Verify all traffic is blocked
  • Task 3: Create an ALLOW policy to restore specific access
โœ… Solution
bash
# Task 1: Create deny-all policy (empty spec)
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: zero-trust
spec: {}
EOF

# Task 2: Verify traffic is blocked
echo "After deny-all policy:"
kubectl exec -n zero-trust deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

# Task 3: Create ALLOW policy to restore specific access
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-sleep-to-httpbin
  namespace: zero-trust
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/zero-trust/sa/sleep"]
EOF

echo "After ALLOW policy for sleep:"
kubectl exec -n zero-trust deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

๐Ÿ’ก Exam Tip

Deny-all: AuthorizationPolicy with empty spec {}
This blocks ALL traffic to ALL workloads in the namespace.
Then add specific ALLOW policies for permitted traffic.

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • Before policy: 200
  • After deny-all: 403
  • After ALLOW policy: 200
๐Ÿงน Cleanup
bash
kubectl delete namespace zero-trust
๐Ÿ“š References
36
Configure DestinationRule TLS Settings
Configure client-side TLS settings for outbound traffic. Use DestinationRule to specify ISTIO_MUTUAL mode for traffic to a specific host.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace dr-tls
kubectl label namespace dr-tls istio-injection=enabled

kubectl apply -n dr-tls -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n dr-tls -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n dr-tls --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n dr-tls --timeout=120s
๐ŸŽฏ Task
  • Task 1: Create a DestinationRule with ISTIO_MUTUAL TLS mode
  • Task 2: Verify TLS settings are applied
โœ… Solution
bash
# Task 1: Create DestinationRule with TLS settings
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-tls
  namespace: dr-tls
spec:
  host: httpbin
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF

# Task 2: Verify configuration
kubectl get destinationrule httpbin-tls -n dr-tls -o yaml | grep -A3 tls

# Test connectivity
echo "Testing with ISTIO_MUTUAL TLS:"
kubectl exec -n dr-tls deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

# Check proxy config
POD=$(kubectl get pod -n dr-tls -l app=sleep -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config cluster $POD -n dr-tls --fqdn httpbin.dr-tls.svc.cluster.local -o json | jq '.[0].transportSocket' | head -10

๐Ÿ’ก Exam Tip

DestinationRule TLS modes (client-side):
โ€ข DISABLE: No TLS
โ€ข SIMPLE: TLS (no client cert)
โ€ข MUTUAL: mTLS with specified certs
โ€ข ISTIO_MUTUAL: mTLS with Istio-managed certs

PeerAuthentication = server-side | DestinationRule TLS = client-side

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • DestinationRule shows tls.mode: ISTIO_MUTUAL
  • Request succeeds with 200
๐Ÿงน Cleanup
bash
kubectl delete namespace dr-tls
๐Ÿ“š References
๐Ÿ”
Domain 3 Complete: Securing Workloads
Q25 Namespace STRICT mTLS
Q26 Mesh-Wide STRICT mTLS
Q27 Port-Level mTLS Exception
Q28 DENY AuthorizationPolicy
Q29 ALLOW by Service Account
Q30 Authorization by Method/Path
Q31 JWT Authentication
Q32 Gateway TLS Configuration
Q33 Verify Workload Certificates
Q34 Allow Traffic from Namespace
Q35 Default Deny-All Policy
Q36 DestinationRule TLS Settings
Domain 4 โ€ข 20% of Exam โ€ข 10 Questions

Troubleshooting

istioctl analyze, proxy-status, proxy-config, debugging techniques.

37
Analyze Istio Configuration for Issues
Use istioctl analyze to detect misconfigurations, warnings, and errors in your Istio setup. Identify and fix any issues found.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
# Create namespace with a misconfiguration
kubectl create namespace analyze-test
kubectl label namespace analyze-test istio-injection=enabled

kubectl apply -n analyze-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n analyze-test --timeout=120s

# Create a VirtualService that references non-existent gateway (misconfiguration)
kubectl apply -n analyze-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: broken-vs
spec:
  hosts:
  - httpbin
  gateways:
  - non-existent-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF
๐ŸŽฏ Task
  • Task 1: Run istioctl analyze to detect configuration issues
  • Task 2: Analyze a specific namespace
  • Task 3: Analyze a local YAML file before applying
โœ… Solution
bash
# Task 1: Analyze all namespaces
echo "=== Analyzing all namespaces ==="
istioctl analyze --all-namespaces

# Task 2: Analyze specific namespace
echo ""
echo "=== Analyzing analyze-test namespace ==="
istioctl analyze -n analyze-test

# Task 3: Analyze local file before applying
cat <<EOF > /tmp/test-vs.yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: test-vs
  namespace: analyze-test
spec:
  hosts:
  - httpbin
  gateways:
  - another-missing-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

echo ""
echo "=== Analyzing local file ==="
istioctl analyze /tmp/test-vs.yaml -n analyze-test

# Show verbose output with warnings
echo ""
echo "=== Verbose analysis ==="
istioctl analyze -n analyze-test --output yaml

๐Ÿ’ก Exam Tip

istioctl analyze detects common issues like:
โ€ข Missing gateways referenced by VirtualServices
โ€ข Missing destination hosts
โ€ข Conflicting configurations
โ€ข Schema validation errors
Use --all-namespaces for cluster-wide analysis.

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • Analysis shows warning about missing gateway
  • Error code like IST0101 (ReferencedResourceNotFound)
  • Local file analysis catches issues before apply
๐Ÿงน Cleanup
bash
kubectl delete namespace analyze-test
rm -f /tmp/test-vs.yaml
38
Check Proxy Synchronization Status
Use istioctl proxy-status to verify that all sidecar proxies are synchronized with the control plane and receiving configuration updates.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace sync-test
kubectl label namespace sync-test istio-injection=enabled

kubectl apply -n sync-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n sync-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n sync-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n sync-test --timeout=120s
๐ŸŽฏ Task
  • Task 1: Check synchronization status of all proxies
  • Task 2: Check status of a specific proxy
  • Task 3: Identify any proxies that are out of sync
โœ… Solution
bash
# Task 1: Check all proxy status
echo "=== All Proxy Status ==="
istioctl proxy-status

# Task 2: Check specific pod's proxy status
POD=$(kubectl get pod -n sync-test -l app=httpbin -o jsonpath='{.items[0].metadata.name}')
echo ""
echo "=== Specific Pod Status ==="
istioctl proxy-status $POD.sync-test

# Task 3: Check for any STALE or NOT SENT status
echo ""
echo "=== Checking for sync issues ==="
istioctl proxy-status | grep -E "STALE|NOT SENT" || echo "All proxies are SYNCED"

# Show xDS version details
echo ""
echo "=== xDS Version Details ==="
istioctl proxy-status | head -5

๐Ÿ’ก Exam Tip

Status meanings:
โ€ข SYNCED: Proxy has latest config from Istiod
โ€ข NOT SENT: Istiod hasn't sent config (might be new)
โ€ข STALE: Istiod sent config but proxy hasn't ACKed

Columns: CDS (clusters), LDS (listeners), EDS (endpoints), RDS (routes)

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • All proxies show SYNCED status
  • CDS, LDS, EDS, RDS columns all show SYNCED
  • Istiod version matches proxy version
๐Ÿงน Cleanup
bash
kubectl delete namespace sync-test
๐Ÿ“š References
39
Inspect Envoy Cluster Configuration
Use istioctl proxy-config to examine the cluster (upstream service) configuration of a sidecar proxy. Debug service discovery issues.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace cluster-test
kubectl label namespace cluster-test istio-injection=enabled

kubectl apply -n cluster-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n cluster-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n cluster-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n cluster-test --timeout=120s
๐ŸŽฏ Task
  • Task 1: List all clusters (upstream services) known to the sleep proxy
  • Task 2: Filter clusters for the httpbin service
  • Task 3: View detailed cluster config in JSON format
โœ… Solution
bash
# Get sleep pod name
SLEEP_POD=$(kubectl get pod -n cluster-test -l app=sleep -o jsonpath='{.items[0].metadata.name}')

# Task 1: List all clusters
echo "=== All Clusters ==="
istioctl proxy-config clusters $SLEEP_POD -n cluster-test | head -20

# Task 2: Filter for httpbin
echo ""
echo "=== httpbin Clusters ==="
istioctl proxy-config clusters $SLEEP_POD -n cluster-test --fqdn httpbin.cluster-test.svc.cluster.local

# Task 3: Detailed JSON output
echo ""
echo "=== Detailed Cluster Config ==="
istioctl proxy-config clusters $SLEEP_POD -n cluster-test --fqdn httpbin.cluster-test.svc.cluster.local -o json | jq '.[0] | {name, type, edsClusterConfig, connectTimeout}'

# Check cluster with subsets (if DestinationRule exists)
echo ""
echo "=== Cluster Summary ==="
istioctl proxy-config clusters $SLEEP_POD -n cluster-test | grep -c "cluster-test" | xargs echo "Total clusters in namespace:"

๐Ÿ’ก Exam Tip

Cluster config shows:
โ€ข SERVICE FQDN: Full service name
โ€ข PORT: Service port
โ€ข SUBSET: DestinationRule subset name
โ€ข DESTINATION RULE: Applied DestinationRule
Use --fqdn to filter by service name.

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • httpbin cluster appears in the list
  • Cluster type is EDS (Endpoint Discovery Service)
  • Connect timeout is configured
๐Ÿงน Cleanup
bash
kubectl delete namespace cluster-test
๐Ÿ“š References
40
Inspect Envoy Route Configuration
Use istioctl proxy-config routes to examine the routing rules configured in a sidecar proxy. Debug traffic routing issues.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace route-test
kubectl label namespace route-test istio-injection=enabled

kubectl apply -n route-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n route-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n route-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n route-test --timeout=120s

# Create a VirtualService with specific routing
kubectl apply -n route-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-routes
spec:
  hosts:
  - httpbin
  http:
  - match:
    - uri:
        prefix: /status
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF
๐ŸŽฏ Task
  • Task 1: List all routes configured in the sleep proxy
  • Task 2: Filter routes for httpbin service
  • Task 3: View route details including match conditions
โœ… Solution
bash
# Get sleep pod name
SLEEP_POD=$(kubectl get pod -n route-test -l app=sleep -o jsonpath='{.items[0].metadata.name}')

# Task 1: List all routes
echo "=== All Routes ==="
istioctl proxy-config routes $SLEEP_POD -n route-test | head -20

# Task 2: Filter for httpbin routes
echo ""
echo "=== httpbin Routes ==="
istioctl proxy-config routes $SLEEP_POD -n route-test --name 8000

# Task 3: View detailed route config
echo ""
echo "=== Detailed Route Config ==="
istioctl proxy-config routes $SLEEP_POD -n route-test --name 8000 -o json | jq '.[0].virtualHosts[] | select(.name | contains("httpbin"))' | head -40

# View all route names
echo ""
echo "=== Route Names ==="
istioctl proxy-config routes $SLEEP_POD -n route-test -o json | jq -r '.[].name' | sort -u

๐Ÿ’ก Exam Tip

Route config shows:
โ€ข NAME: Route config name (usually port number)
โ€ข DOMAINS: Hosts the route applies to
โ€ข MATCH: URI/header match conditions
โ€ข VIRTUAL SERVICE: Applied VirtualService
Route names like 8000 correspond to service ports.

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • Routes show httpbin as destination
  • VirtualService match conditions visible (/status prefix)
  • Route config name matches service port
๐Ÿงน Cleanup
bash
kubectl delete namespace route-test
๐Ÿ“š References
41
Inspect Envoy Listener Configuration
Use istioctl proxy-config listeners to examine the listeners (inbound/outbound ports) configured in a sidecar proxy.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace listener-test
kubectl label namespace listener-test istio-injection=enabled

kubectl apply -n listener-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n listener-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n listener-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n listener-test --timeout=120s
๐ŸŽฏ Task
  • Task 1: List all listeners configured in the httpbin proxy
  • Task 2: Identify inbound vs outbound listeners
  • Task 3: View listener details for a specific port
โœ… Solution
bash
# Get httpbin pod name
HTTPBIN_POD=$(kubectl get pod -n listener-test -l app=httpbin -o jsonpath='{.items[0].metadata.name}')

# Task 1: List all listeners
echo "=== All Listeners ==="
istioctl proxy-config listeners $HTTPBIN_POD -n listener-test

# Task 2: Identify inbound listeners (virtualInbound)
echo ""
echo "=== Inbound Listeners ==="
istioctl proxy-config listeners $HTTPBIN_POD -n listener-test | grep -E "INBOUND|virtualInbound"

# Identify outbound listeners
echo ""
echo "=== Outbound Listeners ==="
istioctl proxy-config listeners $HTTPBIN_POD -n listener-test | grep "OUTBOUND" | head -10

# Task 3: View specific listener details
echo ""
echo "=== Listener on port 8000 ==="
istioctl proxy-config listeners $HTTPBIN_POD -n listener-test --port 8000 -o json | jq '.[0] | {name, address, filterChains: [.filterChains[0].filters[0].name]}'

# Count listeners
echo ""
echo "=== Listener Summary ==="
echo "Total listeners: $(istioctl proxy-config listeners $HTTPBIN_POD -n listener-test | wc -l)"

๐Ÿ’ก Exam Tip

Listener types:
โ€ข virtualInbound: Handles incoming traffic to the pod
โ€ข virtualOutbound: Handles outgoing traffic from the pod
โ€ข 0.0.0.0:15006: Inbound traffic interceptor
โ€ข 0.0.0.0:15001: Outbound traffic interceptor

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • virtualInbound listener on port 15006
  • virtualOutbound listener on port 15001
  • Listener for httpbin service port 8000
๐Ÿงน Cleanup
bash
kubectl delete namespace listener-test
๐Ÿ“š References
42
Inspect Envoy Endpoint Configuration
Use istioctl proxy-config endpoints to examine the endpoints (pod IPs) known to a sidecar proxy. Debug service discovery and load balancing issues.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace endpoint-test
kubectl label namespace endpoint-test istio-injection=enabled

# Deploy httpbin with multiple replicas
kubectl apply -n endpoint-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl scale deployment httpbin -n endpoint-test --replicas=3

kubectl apply -n endpoint-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n endpoint-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n endpoint-test --timeout=120s
๐ŸŽฏ Task
  • Task 1: List endpoints known to the sleep proxy
  • Task 2: Filter endpoints for httpbin service
  • Task 3: Verify all httpbin pod IPs are listed as healthy
โœ… Solution
bash
# Get sleep pod name
SLEEP_POD=$(kubectl get pod -n endpoint-test -l app=sleep -o jsonpath='{.items[0].metadata.name}')

# Task 1: List all endpoints
echo "=== All Endpoints (first 20) ==="
istioctl proxy-config endpoints $SLEEP_POD -n endpoint-test | head -20

# Task 2: Filter for httpbin endpoints
echo ""
echo "=== httpbin Endpoints ==="
istioctl proxy-config endpoints $SLEEP_POD -n endpoint-test --cluster "outbound|8000||httpbin.endpoint-test.svc.cluster.local"

# Task 3: Verify healthy endpoints
echo ""
echo "=== Endpoint Health Status ==="
istioctl proxy-config endpoints $SLEEP_POD -n endpoint-test --cluster "outbound|8000||httpbin.endpoint-test.svc.cluster.local" -o json | jq '.[].hostStatuses[] | {address: .address.socketAddress.address, health: .healthStatus.edsHealthStatus}'

# Compare with actual pod IPs
echo ""
echo "=== Actual httpbin Pod IPs ==="
kubectl get pods -n endpoint-test -l app=httpbin -o wide | awk '{print $6}'

๐Ÿ’ก Exam Tip

Endpoint status:
โ€ข HEALTHY: Endpoint is available
โ€ข UNHEALTHY: Failed health checks
โ€ข DRAINING: Being removed
Cluster name format: outbound|PORT||FQDN

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • 3 endpoints listed for httpbin (matching replica count)
  • All endpoints show HEALTHY status
  • Endpoint IPs match actual pod IPs
๐Ÿงน Cleanup
bash
kubectl delete namespace endpoint-test
๐Ÿ“š References
43
Debug Sidecar Injection Issues
Troubleshoot why sidecar injection is not working. Check namespace labels, pod annotations, and injection webhook configuration.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
# Create namespace WITHOUT injection label (simulating issue)
kubectl create namespace injection-debug

# Deploy app (will NOT get sidecar)
kubectl apply -n injection-debug -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n injection-debug --timeout=120s
๐ŸŽฏ Task
  • Task 1: Verify if sidecar was injected (check container count)
  • Task 2: Check namespace labels for injection
  • Task 3: Verify injection webhook is working
  • Task 4: Fix the issue and redeploy
โœ… Solution
bash
# Task 1: Check container count (should be 1, not 2)
echo "=== Container Count (1 = no sidecar, 2 = has sidecar) ==="
kubectl get pods -n injection-debug -o jsonpath='{range .items[*]}{.metadata.name}{" containers: "}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'

# Task 2: Check namespace labels
echo ""
echo "=== Namespace Labels ==="
kubectl get namespace injection-debug --show-labels

# Task 3: Check webhook configuration
echo ""
echo "=== Injection Webhook ==="
kubectl get mutatingwebhookconfigurations | grep istio

echo ""
echo "=== Webhook Details ==="
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o jsonpath='{.webhooks[0].namespaceSelector}' | jq .

# Task 4: Fix - Add injection label and redeploy
echo ""
echo "=== Fixing: Adding injection label ==="
kubectl label namespace injection-debug istio-injection=enabled

# Delete and recreate pod to get sidecar
kubectl delete pod -n injection-debug -l app=httpbin
kubectl wait --for=condition=ready pod -l app=httpbin -n injection-debug --timeout=120s

# Verify fix
echo ""
echo "=== After Fix: Container Count ==="
kubectl get pods -n injection-debug -o jsonpath='{range .items[*]}{.metadata.name}{" containers: "}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'

๐Ÿ’ก Exam Tip

Injection troubleshooting checklist:
1. Namespace label: istio-injection=enabled
2. Pod label: sidecar.istio.io/inject: "true" (the older annotation form is also accepted)
3. Webhook exists: istio-sidecar-injector
4. Istiod is running
Pods must be recreated after adding labels!

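Injection can also be controlled per pod, overriding the namespace setting. A minimal sketch using the sidecar.istio.io/inject label on the pod template (the no-sidecar-demo workload is hypothetical; set the label to "false" to exclude one workload from an injection-enabled namespace, or "true" to opt one in):
bash
kubectl apply -n injection-debug -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: no-sidecar-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: no-sidecar-demo
  template:
    metadata:
      labels:
        app: no-sidecar-demo
        sidecar.istio.io/inject: "false"   # per-pod override: skip injection
    spec:
      containers:
      - name: curl
        image: curlimages/curl
        command: ["/bin/sleep", "3600"]
EOF
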
๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • Before fix: 1 container (httpbin only)
  • After fix: 2 containers (httpbin + istio-proxy)
  • Namespace has istio-injection=enabled label
๐Ÿงน Cleanup
bash
kubectl delete namespace injection-debug
44
View and Analyze Envoy Access Logs
Enable and examine Envoy access logs to debug traffic issues. Understand log format and identify request failures.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
# Ensure access logging is enabled (JSON encoding so the jq/grep parsing below works)
istioctl install --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.accessLogEncoding=JSON -y

kubectl create namespace logs-test
kubectl label namespace logs-test istio-injection=enabled

kubectl apply -n logs-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n logs-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n logs-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n logs-test --timeout=120s
๐ŸŽฏ Task
  • Task 1: Generate traffic and view access logs
  • Task 2: Identify key fields in the log output
  • Task 3: Find failed requests in the logs
โœ… Solution
bash
# Task 1: Generate traffic
echo "=== Generating traffic ==="
kubectl exec -n logs-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null
kubectl exec -n logs-test deploy/sleep -- curl -s httpbin:8000/status/500 -o /dev/null
kubectl exec -n logs-test deploy/sleep -- curl -s httpbin:8000/status/404 -o /dev/null

# View httpbin access logs (inbound)
echo ""
echo "=== httpbin Access Logs (last 10 lines) ==="
kubectl logs -n logs-test deploy/httpbin -c istio-proxy --tail=10

# Task 2: Parse key fields from JSON logs
echo ""
echo "=== Parsed Log Fields ==="
kubectl logs -n logs-test deploy/httpbin -c istio-proxy --tail=5 | head -1 | jq '{
  method: .method,
  path: .path,
  response_code: .response_code,
  response_flags: .response_flags,
  upstream_cluster: .upstream_cluster,
  duration: .duration
}' 2>/dev/null || echo "Logs may be in text format"

# Task 3: Find failed requests (non-2xx)
echo ""
echo "=== Failed Requests ==="
kubectl logs -n logs-test deploy/httpbin -c istio-proxy --tail=20 | grep -E '"response_code":(4|5)[0-9]{2}' || echo "Check text format logs for status codes"

# View sleep outbound logs
echo ""
echo "=== sleep Outbound Logs ==="
kubectl logs -n logs-test deploy/sleep -c istio-proxy --tail=5

๐Ÿ’ก Exam Tip

Key access log fields:
โ€ข response_code: HTTP status code
โ€ข response_flags: Envoy-specific flags (UH=no healthy upstream, NR=no route)
โ€ข upstream_cluster: Destination service
โ€ข duration: Request time in ms
Logs appear in istio-proxy container.

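If you would rather not touch mesh-wide config with istioctl install, the Telemetry API can enable access logging for a single namespace instead. A minimal sketch using the built-in envoy log provider, scoped to logs-test (apply it before generating traffic):
bash
kubectl apply -f - <<EOF
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: access-logs
  namespace: logs-test
spec:
  accessLogging:
  - providers:
    - name: envoy      # built-in Envoy access log provider
EOF
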
๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • Access logs visible in istio-proxy container
  • 200, 404, and 500 response codes visible
  • Request path and duration logged
๐Ÿงน Cleanup
bash
kubectl delete namespace logs-test
45
Debug Traffic Routing with istioctl
Use istioctl experimental describe to understand how traffic flows to a specific pod and identify any routing issues.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
kubectl create namespace describe-test
kubectl label namespace describe-test istio-injection=enabled

kubectl apply -n describe-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n describe-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

kubectl wait --for=condition=ready pod -l app=httpbin -n describe-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n describe-test --timeout=120s

# Create some Istio resources
kubectl apply -n describe-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-vs
spec:
  hosts:
  - httpbin
  http:
  - timeout: 10s
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
---
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-dr
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100
EOF
๐ŸŽฏ Task
  • Task 1: Use istioctl x describe to analyze the httpbin pod
  • Task 2: Identify applied VirtualServices and DestinationRules
  • Task 3: Check mTLS status and policies
โœ… Solution
bash
# Get pod names
HTTPBIN_POD=$(kubectl get pod -n describe-test -l app=httpbin -o jsonpath='{.items[0].metadata.name}')

# Task 1, 2, 3: Describe the pod
echo "=== Pod Description ==="
istioctl x describe pod $HTTPBIN_POD -n describe-test

# Describe the service
echo ""
echo "=== Service Description ==="
istioctl x describe service httpbin -n describe-test

# Check what's affecting the workload
echo ""
echo "=== Applied Policies ==="
kubectl get virtualservice,destinationrule,peerauthentication,authorizationpolicy -n describe-test

# Verify routing from sleep perspective
SLEEP_POD=$(kubectl get pod -n describe-test -l app=sleep -o jsonpath='{.items[0].metadata.name}')
echo ""
echo "=== Routing from Sleep to httpbin ==="
istioctl proxy-config routes $SLEEP_POD -n describe-test --name 8000 | grep httpbin

๐Ÿ’ก Exam Tip

istioctl x describe shows:
โ€ข Applied VirtualServices and DestinationRules
โ€ข mTLS mode and PeerAuthentication policies
โ€ข AuthorizationPolicies affecting the workload
โ€ข Service ports and endpoints
The x prefix marks it as an experimental command.

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • VirtualService httpbin-vs shown with 10s timeout
  • DestinationRule httpbin-dr shown
  • mTLS status displayed
๐Ÿงน Cleanup
bash
kubectl delete namespace describe-test
๐Ÿ“š References
46
Check Istiod Control Plane Logs
Examine Istiod logs to debug control plane issues, configuration distribution problems, and certificate issuance.
โ–ผ
๐Ÿ“‹ Prerequisites
bash
# Ensure Istio is installed and running
kubectl get pods -n istio-system -l app=istiod
๐ŸŽฏ Task
  • Task 1: View recent Istiod logs
  • Task 2: Filter logs for errors and warnings
  • Task 3: Check for xDS push events
  • Task 4: Verify certificate authority is working
โœ… Solution
bash
# Task 1: View recent logs
echo "=== Recent Istiod Logs ==="
kubectl logs -n istio-system deploy/istiod --tail=20

# Task 2: Filter for errors and warnings
echo ""
echo "=== Errors and Warnings ==="
kubectl logs -n istio-system deploy/istiod --tail=100 | grep -iE "error|warn" | tail -10 || echo "No recent errors or warnings"

# Task 3: Check xDS push events
echo ""
echo "=== xDS Push Events ==="
kubectl logs -n istio-system deploy/istiod --tail=100 | grep -i "push" | tail -5 || echo "No recent push events"

# Task 4: Check CA/certificate logs
echo ""
echo "=== Certificate Events ==="
kubectl logs -n istio-system deploy/istiod --tail=100 | grep -iE "cert|ca|sign" | tail -5 || echo "No recent cert events"

# Check Istiod health
echo ""
echo "=== Istiod Health ==="
kubectl get pods -n istio-system -l app=istiod -o wide

# Check control plane version
echo ""
echo "=== Control Plane Info ==="
istioctl version

๐Ÿ’ก Exam Tip

Key Istiod log patterns:
โ€ข Push: Configuration pushed to proxies
โ€ข cert: Certificate operations
โ€ข error/warn: Problems to investigate
โ€ข ads: Aggregated Discovery Service events
Use --since=5m to limit time range.

๐Ÿ“Š Expected Result

โœ“ Verification Checklist

  • Istiod pod is Running
  • Logs show normal operation (push events)
  • No persistent errors in logs
  • Version command shows control plane version
๐Ÿงน Cleanup
bash
# No cleanup needed
๐Ÿ”ง
Domain 4 Complete: Troubleshooting
Q37 istioctl analyze
Q38 proxy-status (Sync Check)
Q39 proxy-config clusters
Q40 proxy-config routes
Q41 proxy-config listeners
Q42 proxy-config endpoints
Q43 Debug Sidecar Injection
Q44 Envoy Access Logs
Q45 istioctl x describe
Q46 Istiod Control Plane Logs

๐ŸŽ‰ All Domains Complete!

You've completed all 46 practice questions. Review the domain breakdown below.

12
Installation โ€ข 20%
12
Traffic Mgmt โ€ข 35%
12
Security โ€ข 25%
10
Troubleshooting โ€ข 20%
๐Ÿš€ Good luck on your ICA exam!