46 comprehensive hands-on lab exercises covering all exam domains. Master Istio service mesh through practical scenarios.
This practice lab contains 46 hands-on questions designed to prepare you for the Istio Certified Associate (ICA) exam. Each question presents a realistic scenario you might encounter in the actual exam or in production environments.
Questions are organized by exam domains with accurate weight distribution. All commands have been verified for Istio 1.28.x and follow current best practices.
These exercises can be completed on any Kubernetes environment with Istio installed, such as minikube, kind, k3d, or a managed cloud cluster.
Master Istio installation methods, configuration profiles, sidecar injection, mesh configuration, and upgrade strategies.
Istio should be pre-installed on the cluster. Verify you have access to istioctl and kubectl.
# Verify CLI tools are available
which istioctl
which kubectl

# Verify cluster access
kubectl cluster-info
• Check the Istio version for the client, control plane, and data plane
• Verify the control plane pods are running in the istio-system namespace
• Run istioctl proxy-status to verify all proxies are in SYNCED state
• Run istioctl analyze to check for configuration issues in the mesh

# Task 1: Check Istio version (client, control plane, data plane)
istioctl version

# Task 2: Verify control plane pods are running
kubectl get pods -n istio-system

# Check that istiod is ready
kubectl get deployment istiod -n istio-system

# Task 3: Check proxy sync status
istioctl proxy-status

# Task 4: Analyze mesh configuration for issues
istioctl analyze --all-namespaces
istioctl proxy-status columns: SYNCED means proxy config is current. STALE means proxy hasn't received updates. NOT SENT means istiod hasn't pushed config yet.
istioctl analyze finds misconfigurations like missing destinations, conflicting policies, or deprecated settings.
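Beyond the mesh-wide sweep, analyze can be scoped to a namespace or run against local manifests before they are applied. A couple of common invocations (the manifest filename below is only a placeholder):

# Analyze a single namespace
istioctl analyze -n default

# Validate local files only, without consulting the live cluster
istioctl analyze --use-kube=false my-virtualservice.yaml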
• istioctl version shows matching versions for client, control plane, and data plane
• All pods in istio-system are in Running state with all containers ready
• istioctl proxy-status shows all proxies as SYNCED for CDS, LDS, EDS, RDS
• istioctl analyze reports no errors (warnings are acceptable)

This exercise is read-only verification. No resources were created, so no cleanup is required.
Create a test namespace with sample applications to generate traffic for log verification.
# Create namespace for testing
kubectl create namespace logging-test

# Enable sidecar injection
kubectl label namespace logging-test istio-injection=enabled

# Deploy httpbin (server) and sleep (client) applications
kubectl apply -n logging-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n logging-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml

# Wait for pods to be ready
kubectl wait --for=condition=ready pod -l app=httpbin -n logging-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n logging-test --timeout=120s
• Enable mesh-wide access logging to /dev/stdout with JSON encoding
• Restart the test pods so they pick up the new configuration
• Generate traffic and inspect the access logs

# Task 1: Enable access logging via istioctl
istioctl install --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.accessLogEncoding=JSON -y

# Verify the configuration was applied
kubectl get configmap istio -n istio-system -o jsonpath='{.data.mesh}' | grep accessLog

# Task 2: Restart pods to pick up new config
kubectl rollout restart deployment/httpbin -n logging-test
kubectl rollout restart deployment/sleep -n logging-test

# Wait for pods to be ready again
kubectl wait --for=condition=ready pod -l app=httpbin -n logging-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n logging-test --timeout=120s

# Task 3: Generate traffic
kubectl exec -n logging-test deploy/sleep -- curl -s httpbin:8000/ip
kubectl exec -n logging-test deploy/sleep -- curl -s httpbin:8000/headers
kubectl exec -n logging-test deploy/sleep -- curl -s httpbin:8000/status/200

# Task 4: View access logs (JSON format)
kubectl logs -n logging-test deploy/httpbin -c istio-proxy --tail=10
Access logs are essential for debugging. Key fields in JSON logs:
• response_code - HTTP status (200, 503, etc.)
• upstream_cluster - Where traffic was routed
• duration - Request time in milliseconds
• response_flags - Envoy flags like UC (upstream connection failure), UF (upstream failure)
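A quick way to pull just those fields out of the sidecar logs is to filter them with jq on your workstation (a sketch; it assumes jq is installed where you run kubectl and skips any non-JSON log lines):

# Extract key fields from the JSON access log
kubectl logs -n logging-test deploy/httpbin -c istio-proxy --tail=20 \
  | jq -R 'fromjson? | {method, path, response_code, response_flags, duration, upstream_cluster}'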
• The istio ConfigMap shows accessLogFile: /dev/stdout and accessLogEncoding: JSON
• Restarted pods show 2/2 ready
• Access log entries contain JSON fields such as authority, method, path, response_code, upstream_cluster

Remove all resources created and optionally disable access logging:
# Delete the test namespace and all resources
kubectl delete namespace logging-test

# Verify namespace is deleted
kubectl get namespace logging-test 2>/dev/null || echo "Namespace deleted successfully"

# (Optional) Disable access logging - reinstall without accessLog settings
# istioctl install --set profile=default -y
Verify Istio's sidecar injector webhook is available and the control plane is healthy.
# Verify sidecar injector webhook exists
kubectl get mutatingwebhookconfiguration | grep istio

# Verify istiod is running
kubectl get pods -n istio-system -l app=istiod
• Create a namespace called orders
• Enable sidecar injection on the namespace
• Deploy an application called orders-api with 1 replica in the namespace
• Verify the pod has two containers (application + sidecar)
• Check the proxy sync status with istioctl proxy-status

# Task 1: Create namespace
kubectl create namespace orders

# Task 2: Enable sidecar injection
kubectl label namespace orders istio-injection=enabled

# Verify label was applied
kubectl get namespace orders --show-labels

# Task 3: Deploy orders-api application
kubectl create deployment orders-api --image=nginx:1.24 -n orders

# Wait for pod to be ready
kubectl wait --for=condition=ready pod -l app=orders-api -n orders --timeout=120s

# Task 4: Verify pod has 2 containers
kubectl get pods -n orders

# List container names in the pod
kubectl get pod -l app=orders-api -n orders -o jsonpath='{.items[0].spec.containers[*].name}'
echo ""

# Verify istio-proxy container is present
kubectl get pod -l app=orders-api -n orders -o jsonpath='{range .items[0].spec.containers[*]}{.name}{": "}{.image}{"\n"}{end}'

# Task 5: Check proxy is synced with control plane
istioctl proxy-status | grep orders
Two ways to enable injection:
• istio-injection=enabled - Uses default Istio revision
• istio.io/rev=<revision> - Uses specific revision (canary upgrades)
The sidecar injector adds: istio-init (init container for iptables) and istio-proxy (Envoy sidecar)
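If you are running a revision-tagged control plane (canary upgrade), the namespace label looks like the sketch below instead; the revision name 1-22-0 is only an example and must match your installed revision:

# Remove the default-revision label first, then opt into a specific revision
kubectl label namespace orders istio-injection-
kubectl label namespace orders istio.io/rev=1-22-0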
• Namespace orders exists with label istio-injection=enabled
• The orders-api pod shows 2/2 in the READY column
• The pod's container list includes the application container and istio-proxy
• istioctl proxy-status shows the orders-api pod as SYNCED

Remove all resources created during this exercise:
# Delete the namespace (removes deployment, pods, and namespace)
kubectl delete namespace orders

# Verify cleanup
kubectl get namespace orders 2>/dev/null || echo "Namespace deleted successfully"

# Verify no pods remain
kubectl get pods -n orders 2>/dev/null || echo "All pods cleaned up"
Create a namespace with sidecar injection enabled to test selective exclusion.
# Create namespace with injection enabled
kubectl create namespace monitoring
kubectl label namespace monitoring istio-injection=enabled
kubectl get namespace monitoring --show-labels
• Create a pod called legacy-agent with the annotation sidecar.istio.io/inject: "false"
• Create a pod called metrics-collector WITHOUT the annotation (should get sidecar)
• Verify legacy-agent has only 1 container (no sidecar)
• Verify metrics-collector has 2 containers (with sidecar)

# Task 1: Create pod WITH injection disabled
kubectl run legacy-agent --image=nginx:1.24 -n monitoring \
  --overrides='{"metadata":{"annotations":{"sidecar.istio.io/inject":"false"}}}'

# Task 2: Create pod WITHOUT annotation (will get sidecar)
kubectl run metrics-collector --image=nginx:1.24 -n monitoring

# Wait for pods
kubectl wait --for=condition=ready pod/legacy-agent -n monitoring --timeout=60s
kubectl wait --for=condition=ready pod/metrics-collector -n monitoring --timeout=60s

# Task 3 & 4: Verify container counts
kubectl get pods -n monitoring

# Check legacy-agent containers (should be 1)
kubectl get pod legacy-agent -n monitoring -o jsonpath='{.spec.containers[*].name}'
echo ""

# Check metrics-collector containers (should be 2)
kubectl get pod metrics-collector -n monitoring -o jsonpath='{.spec.containers[*].name}'
echo ""
The annotation sidecar.istio.io/inject: "false" overrides namespace-level injection. Use for legacy apps, jobs that need clean termination, or infrastructure pods.
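For Deployments, the annotation belongs on the pod template, not on the Deployment's own metadata. A minimal fragment showing where it goes:

spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"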
• legacy-agent shows 1/1 READY (no sidecar)
• metrics-collector shows 2/2 READY (has sidecar)
• The container list for legacy-agent shows the application container only
• The container list for metrics-collector includes istio-proxy

kubectl delete namespace monitoring
kubectl get namespace monitoring 2>/dev/null || echo "Namespace deleted"
Deploy a test application to verify outbound traffic behavior.
# Create test namespace
kubectl create namespace egress-test
kubectl label namespace egress-test istio-injection=enabled

# Deploy sleep app
kubectl apply -n egress-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=sleep -n egress-test --timeout=120s

# Test current access (should work with default ALLOW_ANY)
kubectl exec -n egress-test deploy/sleep -- curl -sI https://httpbin.org/get --max-time 5 | head -1
• Check the current outbound traffic policy
• Configure the mesh with the REGISTRY_ONLY outbound policy
• Restart the test pod and confirm external access is blocked

# Task 1: Check current policy
kubectl get configmap istio -n istio-system -o jsonpath='{.data.mesh}' | grep outboundTrafficPolicy || echo "Using default (ALLOW_ANY)"

# Task 2: Set REGISTRY_ONLY policy
istioctl install --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY -y

# Verify setting
kubectl get configmap istio -n istio-system -o jsonpath='{.data.mesh}' | grep -A1 outboundTrafficPolicy

# Task 3: Restart pod and test
kubectl rollout restart deployment/sleep -n egress-test
kubectl wait --for=condition=ready pod -l app=sleep -n egress-test --timeout=120s

# Test external access (should fail)
kubectl exec -n egress-test deploy/sleep -- curl -sI https://httpbin.org/get --max-time 5 2>&1 | head -3 || echo "Blocked as expected"
ALLOW_ANY: Access any external service (default)
REGISTRY_ONLY: Only registered services (ServiceEntry) accessible
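If you manage installation declaratively, the same setting can live in an IstioOperator overlay instead of a --set flag (a sketch; apply it with istioctl install -f <file> -y):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY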
• The mesh config shows outboundTrafficPolicy with mode: REGISTRY_ONLY
• External requests from the sleep pod are blocked

kubectl delete namespace egress-test
istioctl install --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY -y
# Create namespace WITHOUT injection label
kubectl create namespace manual-inject
kubectl get namespace manual-inject --show-labels
• Create a Deployment manifest for nginx
• Use istioctl kube-inject to inject the sidecar into the manifest
• Apply the injected manifest
• Verify the pod has the sidecar and the proxy is synced

# Task 1: Create deployment manifest
cat <<EOF > nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-manual
  namespace: manual-inject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-manual
  template:
    metadata:
      labels:
        app: nginx-manual
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80
EOF

# Task 2: Inject sidecar
istioctl kube-inject -f nginx-deploy.yaml > nginx-deploy-injected.yaml

# Task 3: Apply injected manifest
kubectl apply -f nginx-deploy-injected.yaml
kubectl wait --for=condition=ready pod -l app=nginx-manual -n manual-inject --timeout=120s

# Task 4: Verify sidecar
kubectl get pods -n manual-inject
kubectl get pod -l app=nginx-manual -n manual-inject -o jsonpath='{.items[0].spec.containers[*].name}'
echo ""

# Verify proxy sync
istioctl proxy-status | grep nginx-manual
One-liner alternative: kubectl apply -f <(istioctl kube-inject -f deploy.yaml)
• The namespace has no istio-injection label
• The nginx-manual pod shows 2/2 READY
• The pod's containers are nginx and istio-proxy

kubectl delete namespace manual-inject
rm -f nginx-deploy.yaml nginx-deploy-injected.yaml
kubectl create namespace resource-test
kubectl label namespace resource-test istio-injection=enabled
# Task 1 & 2: Configure proxy resources
istioctl install \
  --set values.global.proxy.resources.requests.cpu=100m \
  --set values.global.proxy.resources.requests.memory=128Mi \
  --set values.global.proxy.resources.limits.cpu=500m \
  --set values.global.proxy.resources.limits.memory=256Mi \
  -y

# Task 3: Deploy test pod
kubectl run resource-check --image=nginx:1.24 -n resource-test
kubectl wait --for=condition=ready pod/resource-check -n resource-test --timeout=120s

# Verify resources
kubectl get pod resource-check -n resource-test -o jsonpath='{.spec.containers[?(@.name=="istio-proxy")].resources}' | jq .
Per-pod override via annotations: sidecar.istio.io/proxyCPU, sidecar.istio.io/proxyMemory
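A sketch of that per-pod override on a pod template; the values shown are illustrative, not recommendations:

spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/proxyCPU: "200m"
        sidecar.istio.io/proxyMemory: "256Mi"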
kubectl delete namespace resource-test
istioctl install --set profile=default -y
# Set REGISTRY_ONLY and deploy test client
istioctl install --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY -y
kubectl create namespace external-test
kubectl label namespace external-test istio-injection=enabled
kubectl apply -n external-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=sleep -n external-test --timeout=120s

# Verify blocked
kubectl exec -n external-test deploy/sleep -- curl -sI https://httpbin.org/get --max-time 5 2>&1 | head -2 || echo "Blocked"
• Create a ServiceEntry called httpbin-ext for httpbin.org (HTTPS/443)
• Verify external access to httpbin.org now succeeds

# Task 1: Create ServiceEntry
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: httpbin-ext
  namespace: external-test
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
EOF

# Verify ServiceEntry
kubectl get serviceentry -n external-test

# Task 2: Test access
kubectl exec -n external-test deploy/sleep -- curl -sI https://httpbin.org/get --max-time 10 | head -2
ServiceEntry key fields: hosts, ports, resolution: DNS, location: MESH_EXTERNAL
• ServiceEntry httpbin-ext created
• The test request returns HTTP/2 200

kubectl delete namespace external-test
istioctl install --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY -y
kubectl create namespace proxy-wait
kubectl label namespace proxy-wait istio-injection=enabled
• Enable holdApplicationUntilProxyStarts: true globally
• Deploy a test pod and confirm it becomes ready
• Configure the same setting for a single pod via annotation

# Task 1: Enable globally
istioctl install --set meshConfig.defaultConfig.holdApplicationUntilProxyStarts=true -y

# Verify
kubectl get configmap istio -n istio-system -o jsonpath='{.data.mesh}' | grep holdApplication

# Task 2: Deploy test pod
kubectl run wait-test --image=nginx:1.24 -n proxy-wait
kubectl wait --for=condition=ready pod/wait-test -n proxy-wait --timeout=120s
kubectl get pods -n proxy-wait

# Task 3: Per-pod annotation alternative
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: wait-annotation
  namespace: proxy-wait
  annotations:
    proxy.istio.io/config: '{"holdApplicationUntilProxyStarts": true}'
spec:
  containers:
  - name: app
    image: nginx:1.24
EOF
Global: meshConfig.defaultConfig.holdApplicationUntilProxyStarts
Per-pod: proxy.istio.io/config annotation
• The mesh config shows holdApplicationUntilProxyStarts: true
• Test pods show 2/2 READY

kubectl delete namespace proxy-wait
istioctl install --set profile=default -y
# Create frontend and backend namespaces
kubectl create namespace frontend
kubectl label namespace frontend istio-injection=enabled
kubectl create namespace backend
kubectl label namespace backend istio-injection=enabled

# Deploy backend service
kubectl create deployment backend-api --image=nginx:1.24 -n backend
kubectl expose deployment backend-api --port=80 -n backend
kubectl wait --for=condition=ready pod -l app=backend-api -n backend --timeout=120s

# Deploy frontend client
kubectl apply -n frontend -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=sleep -n frontend --timeout=120s
• Inspect the clusters currently known to the frontend sleep proxy
• Create a Sidecar resource that limits egress to the local namespace, backend, and istio-system only
• Verify the proxy's cluster list shrinks and connectivity to backend still works

# Task 1: Check current clusters
SLEEP_POD=$(kubectl get pod -l app=sleep -n frontend -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config clusters $SLEEP_POD -n frontend | head -15

# Task 2: Create Sidecar resource
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: frontend-sidecar
  namespace: frontend
spec:
  egress:
  - hosts:
    - "./*"
    - "backend/*"
    - "istio-system/*"
EOF

sleep 5

# Task 3: Verify reduced scope
istioctl proxy-config clusters $SLEEP_POD -n frontend | head -15

# Test connectivity still works
kubectl exec -n frontend deploy/sleep -- curl -s backend-api.backend:80 | head -3
Sidecar egress hosts: ./* (same ns), namespace/* (specific ns). Always include istio-system/*.
kubectl delete namespace frontend backend
kubectl cluster-info
kubectl get pods -n istio-system
• Install Istio using the demo profile
• Verify which components the profile installs

# Task 1: Install demo profile
istioctl install --set profile=demo -y

# Task 2: Verify components
kubectl get pods -n istio-system
kubectl get deployments -n istio-system

# Verify specific components
kubectl get deployment istiod -n istio-system
kubectl get deployment istio-ingressgateway -n istio-system
kubectl get deployment istio-egressgateway -n istio-system

istioctl version
demo: istiod + ingress + egress + high trace sampling
default: istiod + ingress only (production)
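To see what each profile contains before installing, istioctl can list, dump, and diff profiles:

istioctl profile list
istioctl profile dump demo | head -40
istioctl profile diff default demo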
istioctl install --set profile=default -y
# Verify Istio is installed
kubectl get pods -n istio-system
istioctl version
• Completely uninstall Istio using istioctl uninstall --purge
• Delete the istio-system namespace
• Remove any remaining Istio CRDs
• Verify everything has been removed

# Task 1: Uninstall Istio
istioctl uninstall --purge -y

# Task 2: Delete namespace
kubectl delete namespace istio-system

# Task 3: Remove CRDs
kubectl get crd | grep istio.io | awk '{print $1}' | xargs kubectl delete crd

# Task 4: Verify removal
kubectl get namespace istio-system 2>/dev/null || echo "Namespace: REMOVED"
kubectl get crd | grep istio || echo "CRDs: REMOVED"
kubectl get mutatingwebhookconfiguration | grep istio || echo "Webhooks: REMOVED"
Before production uninstall: remove injection labels, restart workloads to remove sidecars, backup configurations.
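A sketch of that per-namespace preparation (replace <ns> with each injected namespace):

# Remove the injection label, then restart workloads so their sidecars are removed
kubectl label namespace <ns> istio-injection-
kubectl rollout restart deployment -n <ns>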
Re-install Istio for subsequent exercises:
istioctl install --set profile=default -y
kubectl get pods -n istio-system
Gateway, VirtualService, DestinationRule, traffic shifting, fault injection, and more.
Deploy a sample application to expose via the ingress gateway.
# Create namespace
kubectl create namespace webapp
kubectl label namespace webapp istio-injection=enabled

# Deploy httpbin as sample app
kubectl apply -n webapp -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n webapp --timeout=120s

# Get ingress gateway IP
kubectl get svc istio-ingressgateway -n istio-system
• Create a Gateway called webapp-gateway listening on port 80 for host webapp.example.com
• Create a VirtualService called webapp-vs that routes traffic from the gateway to the httpbin service
• Test access through the ingress gateway using the Host header

# Task 1: Create Gateway
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: webapp-gateway
  namespace: webapp
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "webapp.example.com"
EOF

# Task 2: Create VirtualService
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: webapp-vs
  namespace: webapp
spec:
  hosts:
  - "webapp.example.com"
  gateways:
  - webapp-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Verify resources
kubectl get gateway,virtualservice -n webapp

# Task 3: Get ingress IP and test
INGRESS_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# If no external IP (e.g., minikube), use NodePort:
[ -z "$INGRESS_IP" ] && INGRESS_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
INGRESS_PORT=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

# Test with Host header
curl -s -H "Host: webapp.example.com" "http://${INGRESS_IP}:${INGRESS_PORT:-80}/headers" | head -20
Gateway: Configures the load balancer (ports, hosts, TLS)
VirtualService: Defines routing rules (must reference the gateway)
The selector: istio: ingressgateway binds to the default ingress gateway.
• Gateway webapp-gateway created
• VirtualService webapp-vs created

kubectl delete namespace webapp
Deploy two versions of a service with version labels.
# Create namespace
kubectl create namespace canary-test
kubectl label namespace canary-test istio-injection=enabled

# Deploy v1
kubectl apply -n canary-test -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: hashicorp/http-echo
        args: ["-text=v1"]
        ports:
        - containerPort: 5678
EOF

# Deploy v2
kubectl apply -n canary-test -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: hashicorp/http-echo
        args: ["-text=v2"]
        ports:
        - containerPort: 5678
EOF

# Create service (selects both versions)
kubectl apply -n canary-test -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 5678
EOF

# Deploy test client
kubectl apply -n canary-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=sleep -n canary-test --timeout=120s
# Task 1: Create DestinationRule with subsets
kubectl apply -n canary-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: myapp-dr
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

# Task 2: Create VirtualService with 90/10 split
kubectl apply -n canary-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp-vs
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 90
    - destination:
        host: myapp
        subset: v2
      weight: 10
EOF

# Verify resources
kubectl get destinationrule,virtualservice -n canary-test

# Task 3: Test traffic distribution (run 20 requests)
echo "Testing traffic split (20 requests)..."
for i in {1..20}; do
  kubectl exec -n canary-test deploy/sleep -- curl -s myapp:80
done | sort | uniq -c
DestinationRule subsets: Define named groups of pods using labels
VirtualService weight: Percentage of traffic (must sum to 100)
Traffic split requires BOTH DestinationRule (subsets) AND VirtualService (weights).
kubectl delete namespace canary-test
# Create namespace and deploy httpbin
kubectl create namespace fault-test
kubectl label namespace fault-test istio-injection=enabled
kubectl apply -n fault-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n fault-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n fault-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n fault-test --timeout=120s

# Test baseline response time
echo "Baseline response time:"
kubectl exec -n fault-test deploy/sleep -- time curl -s httpbin:8000/get -o /dev/null
# Task 1: Create VirtualService with fault delay
kubectl apply -n fault-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-delay
spec:
  hosts:
  - httpbin
  http:
  - fault:
      delay:
        percentage:
          value: 50
        fixedDelay: 5s
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Verify
kubectl get virtualservice -n fault-test

# Task 2: Test multiple requests (observe ~50% with 5s delay)
echo "Testing fault injection (5 requests)..."
for i in {1..5}; do
  echo "Request $i:"
  kubectl exec -n fault-test deploy/sleep -- sh -c 'time curl -s httpbin:8000/get -o /dev/null'
done
Fault delay simulates network latency or slow services.
fixedDelay: Exact delay duration
percentage.value: % of requests affected (0-100)
kubectl delete namespace fault-test
kubectl create namespace abort-test
kubectl label namespace abort-test istio-injection=enabled
kubectl apply -n abort-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n abort-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n abort-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n abort-test --timeout=120s
# Verify baseline works
kubectl exec -n abort-test deploy/sleep -- curl -sI httpbin:8000/status/200 | head -1
# Task 1: Create VirtualService with abort fault
kubectl apply -n abort-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-abort
spec:
  hosts:
  - httpbin
  http:
  - fault:
      abort:
        percentage:
          value: 100
        httpStatus: 503
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Task 2: Test (all requests should return 503)
echo "Testing abort fault injection..."
for i in {1..3}; do
  kubectl exec -n abort-test deploy/sleep -- curl -sI httpbin:8000/get | head -1
done
fault.abort returns an error without calling the service.
Common test codes: 400, 403, 404, 500, 502, 503
Use for testing circuit breakers and retry logic.
• All test requests return HTTP/1.1 503 Service Unavailable

kubectl delete namespace abort-test
kubectl create namespace timeout-test
kubectl label namespace timeout-test istio-injection=enabled
kubectl apply -n timeout-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n timeout-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n timeout-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n timeout-test --timeout=120s
# Test slow endpoint (5s delay) - should work without timeout
echo "Testing 5s delay endpoint (no timeout configured):"
kubectl exec -n timeout-test deploy/sleep -- curl -s httpbin:8000/delay/5 -o /dev/null -w "Status: %{http_code}, Time: %{time_total}s\n"
# Task 1: Create VirtualService with 3s timeout
kubectl apply -n timeout-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-timeout
spec:
  hosts:
  - httpbin
  http:
  - timeout: 3s
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Task 2: Test slow endpoint (5s) - should timeout after 3s
echo "Testing 5s delay with 3s timeout (should fail):"
kubectl exec -n timeout-test deploy/sleep -- curl -s httpbin:8000/delay/5 -o /dev/null -w "Status: %{http_code}, Time: %{time_total}s\n"

# Task 3: Test fast endpoint - should succeed
echo "Testing fast endpoint (should succeed):"
kubectl exec -n timeout-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "Status: %{http_code}, Time: %{time_total}s\n"
timeout applies to the entire request duration.
Timed-out requests return 504 Gateway Timeout.
Always set timeouts in production to prevent resource exhaustion.
• The slow endpoint returns 504 after about 3 seconds
• The fast endpoint returns 200 quickly

kubectl delete namespace timeout-test
kubectl create namespace retry-test
kubectl label namespace retry-test istio-injection=enabled
kubectl apply -n retry-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n retry-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n retry-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n retry-test --timeout=120s
# Task 1: Create VirtualService with retries
kubectl apply -n retry-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-retry
spec:
  hosts:
  - httpbin
  http:
  - retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,reset,connect-failure,retriable-4xx
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Verify configuration
kubectl get virtualservice httpbin-retry -n retry-test -o yaml | grep -A5 retries

# Task 2: Test with normal request
kubectl exec -n retry-test deploy/sleep -- curl -s httpbin:8000/get | head -5

# Test with 500 error endpoint (Istio will retry)
echo "Testing 500 endpoint (retries will occur but still fail):"
kubectl exec -n retry-test deploy/sleep -- curl -sI httpbin:8000/status/500 | head -1
attempts: Max retry count (not including original request)
perTryTimeout: Timeout for each attempt
retryOn: Conditions to trigger retry (5xx, reset, connect-failure, etc.)
kubectl delete namespace retry-test
kubectl create namespace circuit-test
kubectl label namespace circuit-test istio-injection=enabled
kubectl apply -n circuit-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n circuit-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n circuit-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n circuit-test --timeout=120s
# Task 1: Create DestinationRule with circuit breaker
kubectl apply -n circuit-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-cb
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        http2MaxRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 3
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 100
EOF

# Verify configuration
kubectl get destinationrule httpbin-cb -n circuit-test -o yaml | grep -A10 connectionPool

# Task 2: Test with concurrent requests (using fortio if available, or simple loop)
echo "Testing circuit breaker with rapid requests..."
for i in {1..10}; do
  kubectl exec -n circuit-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n" &
done
wait
connectionPool: Limits connections and requests
outlierDetection: Ejects unhealthy hosts (true circuit breaker)
When limits exceeded: 503 with flag UO (upstream overflow)
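To reliably trip these limits you usually need genuinely concurrent connections; the Istio httpbin sample ships a fortio load client for this. A sketch, assuming the sample manifest is still published at this path:

kubectl apply -n circuit-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/sample-client/fortio-deploy.yaml
kubectl wait --for=condition=ready pod -l app=fortio -n circuit-test --timeout=120s
kubectl exec -n circuit-test deploy/fortio-deploy -c fortio -- fortio load -c 3 -qps 0 -n 30 http://httpbin:8000/get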
• Some requests return 503 when the circuit breaker trips

kubectl delete namespace circuit-test
kubectl create namespace mirror-test
kubectl label namespace mirror-test istio-injection=enabled

# Deploy v1 and v2 httpbin instances
kubectl apply -n mirror-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

# Create v2 deployment
kubectl apply -n mirror-test -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v2
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - name: httpbin
        image: docker.io/kong/httpbin
        ports:
        - containerPort: 80
EOF

# Label original deployment as v1
kubectl patch deployment httpbin -n mirror-test --type merge -p '{"spec":{"template":{"metadata":{"labels":{"version":"v1"}}}}}'

kubectl apply -n mirror-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n mirror-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n mirror-test --timeout=120s
# Task 1: Create DestinationRule
kubectl apply -n mirror-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-dr
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

# Task 2: Create VirtualService with mirroring
kubectl apply -n mirror-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-mirror
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
    mirror:
      host: httpbin
      subset: v2
    mirrorPercentage:
      value: 100
EOF

# Task 3: Generate traffic
echo "Generating traffic..."
for i in {1..5}; do
  kubectl exec -n mirror-test deploy/sleep -- curl -s httpbin:8000/headers -o /dev/null
done

# Check v2 logs for mirrored requests
echo "Checking v2 logs for mirrored traffic:"
kubectl logs -n mirror-test deploy/httpbin-v2 --tail=10
mirror: Destination for mirrored traffic
mirrorPercentage: % of traffic to mirror (default 100%)
Mirrored requests are fire-and-forget (responses ignored).
kubectl delete namespace mirror-test
Route requests with the header x-user-type: premium to v2 and all other requests to v1.

kubectl create namespace header-test
kubectl label namespace header-test istio-injection=enabled
# Deploy two versions
kubectl apply -n header-test -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: app
        image: hashicorp/http-echo
        args: ["-text=v1-standard"]
        ports:
        - containerPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: app
        image: hashicorp/http-echo
        args: ["-text=v2-premium"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 5678
EOF
kubectl apply -n header-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=myapp -n header-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n header-test --timeout=120s
• Create a DestinationRule with v1 and v2 subsets
• Create a VirtualService that routes x-user-type: premium → v2 and everything else → v1
• Test routing with and without the header

# Task 1: Create DestinationRule
kubectl apply -n header-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: myapp-dr
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

# Task 2: Create VirtualService with header match
kubectl apply -n header-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp-vs
spec:
  hosts:
  - myapp
  http:
  - match:
    - headers:
        x-user-type:
          exact: premium
    route:
    - destination:
        host: myapp
        subset: v2
  - route:
    - destination:
        host: myapp
        subset: v1
EOF

# Task 3: Test routing
echo "Request WITHOUT header (should get v1):"
kubectl exec -n header-test deploy/sleep -- curl -s myapp:80
echo ""
echo "Request WITH premium header (should get v2):"
kubectl exec -n header-test deploy/sleep -- curl -s -H "x-user-type: premium" myapp:80
Header match types: exact, prefix, regex
Match rules are evaluated in order - first match wins.
Always have a default route at the end (no match condition).
• Requests without the header return: v1-standard
• Requests with x-user-type: premium return: v2-premium

kubectl delete namespace header-test
Route /api/v1/* to service-v1 and /api/v2/* to service-v2.

kubectl create namespace uri-test
kubectl label namespace uri-test istio-injection=enabled
# Deploy httpbin as backend
kubectl apply -n uri-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n uri-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n uri-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n uri-test --timeout=120s
• Create a VirtualService that matches on the URI prefixes /api/v1/ and /api/v2/
• Rewrite the matched prefix so that, for example, /api/v1/status becomes /status
• Test the routing and rewrites

# Task 1 & 2: Create VirtualService with URI match and rewrite
kubectl apply -n uri-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-uri
spec:
  hosts:
  - httpbin
  http:
  - match:
    - uri:
        prefix: /api/v1/
    rewrite:
      uri: /
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
  - match:
    - uri:
        prefix: /api/v2/
    rewrite:
      uri: /
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Task 3: Test URI routing
echo "Testing /api/v1/get (rewrites to /get):"
kubectl exec -n uri-test deploy/sleep -- curl -s httpbin:8000/api/v1/get | head -5
echo ""
echo "Testing /api/v2/headers (rewrites to /headers):"
kubectl exec -n uri-test deploy/sleep -- curl -s httpbin:8000/api/v2/headers | head -5
echo ""
echo "Testing /get directly:"
kubectl exec -n uri-test deploy/sleep -- curl -s httpbin:8000/get | head -5
URI match types: exact, prefix, regex
rewrite.uri: Replace matched path before sending to destination
Prefix /api/v1/ with rewrite / maps /api/v1/get → /get
• /api/v1/get returns the httpbin /get response
• /api/v2/headers returns the httpbin /headers response

kubectl delete namespace uri-test
kubectl create namespace lb-test
kubectl label namespace lb-test istio-injection=enabled
kubectl apply -n lb-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n lb-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n lb-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n lb-test --timeout=120s
• Create a DestinationRule that configures LEAST_REQUEST load balancing for httpbin
• Verify the configuration and send test requests

# Task 1: Create DestinationRule with load balancing
kubectl apply -n lb-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-lb
spec:
  host: httpbin
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
EOF

# Task 2: Verify configuration
kubectl get destinationrule httpbin-lb -n lb-test -o yaml | grep -A3 loadBalancer

# Test requests
for i in {1..5}; do
  kubectl exec -n lb-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "Request $i: %{http_code}\n"
done
Load balancing algorithms:
• ROUND_ROBIN (default): Rotate through endpoints
• LEAST_REQUEST: Send to endpoint with fewest active requests
• RANDOM: Random endpoint selection
• PASSTHROUGH: Direct connection (no load balancing)
• The DestinationRule shows simple: LEAST_REQUEST
• Test requests return 200

kubectl delete namespace lb-test
kubectl create namespace sticky-test
kubectl label namespace sticky-test istio-injection=enabled
# Deploy multiple replicas
kubectl apply -n sticky-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl scale deployment httpbin -n sticky-test --replicas=3
kubectl apply -n sticky-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n sticky-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n sticky-test --timeout=120s
• Create a DestinationRule with consistent-hash load balancing on the x-user-id header
• Verify that requests carrying the same header value land on the same pod

# Task 1: Create DestinationRule with consistent hash
kubectl apply -n sticky-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-sticky
spec:
  host: httpbin
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-user-id
EOF

# Task 2: Test sticky sessions
echo "Requests with x-user-id: user-123 (should hit same pod):"
for i in {1..3}; do
  kubectl exec -n sticky-test deploy/sleep -- curl -s -H "x-user-id: user-123" httpbin:8000/headers | grep -i "pod\|host"
done
echo ""
echo "Requests with x-user-id: user-456 (may hit different pod):"
for i in {1..3}; do
  kubectl exec -n sticky-test deploy/sleep -- curl -s -H "x-user-id: user-456" httpbin:8000/headers | grep -i "pod\|host"
done
Consistent hash options:
• httpHeaderName: Hash based on header value
• httpCookie: Hash based on cookie
• useSourceIp: Hash based on client IP
• httpQueryParameterName: Hash based on query param
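Cookie-based stickiness needs a cookie name and TTL; a sketch of the equivalent DestinationRule (the cookie name here is illustrative, and Istio creates the cookie if the client does not send one):

apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-sticky-cookie
  namespace: sticky-test
spec:
  host: httpbin
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: session-affinity
          ttl: 60s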
kubectl delete namespace sticky-test
mTLS, PeerAuthentication, AuthorizationPolicy, JWT, Gateway TLS.
# Create namespace with sidecar injection
kubectl create namespace payments
kubectl label namespace payments istio-injection=enabled

# Deploy test services
kubectl apply -n payments -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n payments -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n payments --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n payments --timeout=120s

# Create a non-mesh client (no sidecar) in default namespace
kubectl run curl-no-mesh --image=curlimages/curl --command -- sleep 3600
kubectl wait --for=condition=ready pod/curl-no-mesh --timeout=60s
• Create a PeerAuthentication called payments-strict that enforces STRICT mTLS for the payments namespace
• Verify that a mesh client (with sidecar) can still reach httpbin
• Verify that a non-mesh client (no sidecar) is rejected

# Task 1: Create PeerAuthentication with STRICT mTLS
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: payments-strict
  namespace: payments
spec:
  mtls:
    mode: STRICT
EOF

# Verify PeerAuthentication
kubectl get peerauthentication -n payments

# Task 2: Test from mesh client (should succeed)
echo "Testing from mesh client (with sidecar):"
kubectl exec -n payments deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

# Task 3: Test from non-mesh client (should fail)
echo "Testing from non-mesh client (no sidecar):"
kubectl exec curl-no-mesh -- curl -s httpbin.payments:8000/get --max-time 5 -o /dev/null -w "%{http_code}\n" 2>&1 || echo "Connection rejected (expected)"
mTLS modes:
• PERMISSIVE (default): Accept both mTLS and plaintext
• STRICT: Only accept mTLS connections
• DISABLE: Only accept plaintext
• UNSET: Inherit from parent scope
• PeerAuthentication payments-strict created with mode: STRICT
• The mesh client's request returns 200
• The non-mesh client's request fails

kubectl delete namespace payments
kubectl delete pod curl-no-mesh
# Create test namespaces
kubectl create namespace app-a
kubectl create namespace app-b
kubectl label namespace app-a istio-injection=enabled
kubectl label namespace app-b istio-injection=enabled

# Deploy test services
kubectl apply -n app-a -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl apply -n app-b -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=sleep -n app-a --timeout=120s
kubectl wait --for=condition=ready pod -l app=httpbin -n app-b --timeout=120s
• Create a mesh-wide PeerAuthentication named default in the istio-system namespace
• Verify cross-namespace traffic still works
• Check the mTLS status of a workload

# Task 1: Create mesh-wide PeerAuthentication
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF

# Verify
kubectl get peerauthentication -n istio-system

# Task 2: Test cross-namespace communication
echo "Testing cross-namespace communication:"
kubectl exec -n app-a deploy/sleep -- curl -s httpbin.app-b:8000/get -o /dev/null -w "%{http_code}\n"

# Task 3: Check mTLS status
echo "Checking mTLS status:"
istioctl x describe pod $(kubectl get pod -n app-b -l app=httpbin -o jsonpath='{.items[0].metadata.name}') -n app-b | grep -i tls
Mesh-wide policy must be in istio-system namespace with name default.
Hierarchy: Workload > Namespace > Mesh-wide
More specific policies override broader ones.
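As an illustration of that hierarchy, a workload-level policy like the following sketch would relax the mesh-wide STRICT default for a single app (the names reuse this exercise's httpbin in app-b):

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: httpbin-override
  namespace: app-b
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: PERMISSIVE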
• Mesh-wide PeerAuthentication default exists in istio-system
• Cross-namespace requests return 200

kubectl delete peerauthentication default -n istio-system
kubectl delete namespace app-a app-b
kubectl create namespace port-mtls
kubectl label namespace port-mtls istio-injection=enabled
kubectl apply -n port-mtls -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n port-mtls -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n port-mtls --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n port-mtls --timeout=120s
# Task 1: Create PeerAuthentication with port exception
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: httpbin-port-exception
  namespace: port-mtls
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: STRICT
  portLevelMtls:
    8080:
      mode: DISABLE
EOF

# Task 2: Verify configuration
kubectl get peerauthentication -n port-mtls -o yaml | grep -A5 portLevelMtls

# Test main port (8000 - STRICT mTLS)
echo "Testing port 8000 (STRICT mTLS):"
kubectl exec -n port-mtls deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"
portLevelMtls allows different mTLS modes per port.
Use cases: health checks, metrics endpoints, legacy integrations.
The selector targets specific workloads.
kubectl delete namespace port-mtls
kubectl create namespace authz-deny
kubectl label namespace authz-deny istio-injection=enabled

# Deploy "database" service (using httpbin)
kubectl apply -n authz-deny -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: database
        image: docker.io/kong/httpbin
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: database
  ports:
  - port: 80
EOF

kubectl apply -n authz-deny -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=database -n authz-deny --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n authz-deny --timeout=120s

# Verify access works before policy
echo "Before policy - should succeed:"
kubectl exec -n authz-deny deploy/sleep -- curl -s database/get -o /dev/null -w "%{http_code}\n"
# Task 1: Create DENY policy
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-all-database
  namespace: authz-deny
spec:
  selector:
    matchLabels:
      app: database
  action: DENY
  rules:
  - {}
EOF

# Task 2: Verify access is denied
echo "After DENY policy - should return 403:"
kubectl exec -n authz-deny deploy/sleep -- curl -s database/get -o /dev/null -w "%{http_code}\n"
action: DENY with empty rules - {} matches ALL requests.
DENY policies are evaluated before ALLOW policies.
Blocked requests return 403 Forbidden with RBAC: access denied.
• Requests to the database service return 403 Forbidden

kubectl delete namespace authz-deny
kubectl create namespace authz-allow
kubectl label namespace authz-allow istio-injection=enabled

# Create service accounts
kubectl create serviceaccount frontend -n authz-allow
kubectl create serviceaccount other -n authz-allow

# Deploy backend API
kubectl apply -n authz-allow -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

# Deploy frontend client (with frontend SA)
kubectl apply -n authz-allow -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      serviceAccountName: frontend
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3600"]
EOF

# Deploy other client (with other SA)
kubectl apply -n authz-allow -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: other-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: other-client
  template:
    metadata:
      labels:
        app: other-client
    spec:
      serviceAccountName: other
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3600"]
EOF

kubectl wait --for=condition=ready pod -l app=httpbin -n authz-allow --timeout=120s
kubectl wait --for=condition=ready pod -l app=frontend -n authz-allow --timeout=120s
kubectl wait --for=condition=ready pod -l app=other-client -n authz-allow --timeout=120s
• Create an ALLOW policy that permits only the frontend service account to access httpbin
• Test from the frontend client (should succeed) and the other client (should fail)

# Task 1: Create ALLOW policy for frontend SA
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-only
  namespace: authz-allow
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/authz-allow/sa/frontend"]
EOF

# Task 2: Test from frontend (should succeed)
echo "Testing from frontend SA:"
kubectl exec -n authz-allow deploy/frontend -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

# Task 3: Test from other-client (should fail)
echo "Testing from other SA:"
kubectl exec -n authz-allow deploy/other-client -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"
Service account principal format: cluster.local/ns/{namespace}/sa/{service-account}
When ANY ALLOW policy exists, requests not matching any ALLOW rule are denied.
Use principals: ["*"] to allow any authenticated identity.
• Requests from the frontend service account return 200
• Requests from the other service account return 403

kubectl delete namespace authz-allow
kubectl create namespace authz-http
kubectl label namespace authz-http istio-injection=enabled
kubectl create serviceaccount admin -n authz-http
kubectl create serviceaccount user -n authz-http

# Deploy API server
kubectl apply -n authz-http -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

# Deploy admin client
kubectl apply -n authz-http -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: admin-client
  template:
    metadata:
      labels:
        app: admin-client
    spec:
      serviceAccountName: admin
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3600"]
EOF

# Deploy user client
kubectl apply -n authz-http -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-client
  template:
    metadata:
      labels:
        app: user-client
    spec:
      serviceAccountName: user
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3600"]
EOF

kubectl wait --for=condition=ready pod -l app=httpbin -n authz-http --timeout=120s
kubectl wait --for=condition=ready pod -l app=admin-client -n authz-http --timeout=120s
kubectl wait --for=condition=ready pod -l app=user-client -n authz-http --timeout=120s
# Task 1 & 2: Create combined AuthorizationPolicy
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: api-access
  namespace: authz-http
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
        paths: ["/get", "/headers", "/ip"]
  - from:
    - source:
        principals: ["cluster.local/ns/authz-http/sa/admin"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/post"]
EOF

# Task 3: Test the policies
echo "User GET /get (should succeed):"
kubectl exec -n authz-http deploy/user-client -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

echo "User POST /post (should fail):"
kubectl exec -n authz-http deploy/user-client -- curl -s -X POST httpbin:8000/post -o /dev/null -w "%{http_code}\n"

echo "Admin POST /post (should succeed):"
kubectl exec -n authz-http deploy/admin-client -- curl -s -X POST httpbin:8000/post -o /dev/null -w "%{http_code}\n"
to.operation matches request attributes: methods, paths, hosts, ports
from.source matches caller attributes: principals, namespaces, ipBlocks
Multiple rules in same policy = OR logic. Conditions within rule = AND logic.
• User GET /get returns 200
• User POST /post returns 403
• Admin POST /post returns 200

kubectl delete namespace authz-http
kubectl create namespace jwt-test
kubectl label namespace jwt-test istio-injection=enabled
kubectl apply -n jwt-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n jwt-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n jwt-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n jwt-test --timeout=120s
• Create a RequestAuthentication for the issuer testing@secure.istio.io
• Create an AuthorizationPolicy that requires a valid JWT
• Test requests without a token and with the Istio sample token

# Task 1: Create RequestAuthentication
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: jwt-test
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.22/security/tools/jwt/samples/jwks.json"
EOF

# Task 2: Create AuthorizationPolicy requiring JWT
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: jwt-test
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
EOF

# Get sample token
TOKEN=$(curl -s https://raw.githubusercontent.com/istio/istio/release-1.22/security/tools/jwt/samples/demo.jwt)

# Task 3: Test without JWT (should fail)
echo "Request without JWT:"
kubectl exec -n jwt-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

# Test with valid JWT (should succeed)
echo "Request with valid JWT:"
kubectl exec -n jwt-test deploy/sleep -- curl -s -H "Authorization: Bearer $TOKEN" httpbin:8000/get -o /dev/null -w "%{http_code}\n"
RequestAuthentication: Validates JWT format and signature
AuthorizationPolicy with requestPrincipals: Requires valid JWT
RequestAuthentication alone only rejects INVALID tokens, not missing ones.
Principal format: {issuer}/{subject}
• Requests without a JWT return 403
• Requests with the valid sample JWT return 200

kubectl delete namespace jwt-test
# Create namespace and deploy app
kubectl create namespace tls-test
kubectl label namespace tls-test istio-injection=enabled
kubectl apply -n tls-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n tls-test --timeout=120s

# Generate self-signed certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -subj "/CN=httpbin.example.com/O=example"

# Create TLS secret in istio-system (for gateway)
kubectl create secret tls httpbin-credential \
  --key=/tmp/tls.key --cert=/tmp/tls.crt \
  -n istio-system
# Task 1: Create Gateway with TLS
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: tls-test
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: httpbin-credential
    hosts:
    - "httpbin.example.com"
EOF

# Task 2: Create VirtualService
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-vs
  namespace: tls-test
spec:
  hosts:
  - "httpbin.example.com"
  gateways:
  - httpbin-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

# Get ingress IP
INGRESS_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
[ -z "$INGRESS_IP" ] && INGRESS_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
SECURE_PORT=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')

# Task 3: Test HTTPS access
echo "Testing HTTPS access:"
curl -sk --resolve "httpbin.example.com:${SECURE_PORT:-443}:${INGRESS_IP}" \
  "https://httpbin.example.com:${SECURE_PORT:-443}/get" | head -10
TLS modes:
• SIMPLE: Standard TLS (server cert only)
• MUTUAL: mTLS (client + server certs)
• PASSTHROUGH: SNI-based routing, no termination
Secret must be in istio-system namespace for ingress gateway.
kubectl delete namespace tls-test
kubectl delete secret httpbin-credential -n istio-system
rm -f /tmp/tls.key /tmp/tls.crt
kubectl create namespace cert-test
kubectl label namespace cert-test istio-injection=enabled
kubectl apply -n cert-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n cert-test --timeout=120s
• Use istioctl proxy-config secret to view the workload's certificate information
• Inspect the certificate details, including the SPIFFE identity and validity period

# Get pod name
POD=$(kubectl get pod -n cert-test -l app=httpbin -o jsonpath='{.items[0].metadata.name}')

# Task 1: View certificate secrets
echo "=== Certificate Secrets ==="
istioctl proxy-config secret $POD -n cert-test

# Task 2 & 3: View detailed certificate info
echo ""
echo "=== Certificate Details ==="
istioctl proxy-config secret $POD -n cert-test -o json | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 -d | openssl x509 -text -noout | head -30

# Alternative: Use istioctl x describe
echo ""
echo "=== Workload Description ==="
istioctl x describe pod $POD -n cert-test | grep -A5 "Certificate"
SPIFFE ID format: spiffe://cluster.local/ns/{namespace}/sa/{service-account}
Certificates are automatically rotated by Istio (default 24h validity).
Use istioctl proxy-config secret to verify mTLS setup.
kubectl delete namespace cert-test
# Create namespaces
kubectl create namespace backend
kubectl create namespace allowed
kubectl create namespace blocked
kubectl label namespace backend istio-injection=enabled
kubectl label namespace allowed istio-injection=enabled
kubectl label namespace blocked istio-injection=enabled

# Deploy backend
kubectl apply -n backend -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

# Deploy clients
kubectl apply -n allowed -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl apply -n blocked -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n backend --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n allowed --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n blocked --timeout=120s
# Task 1: Create AuthorizationPolicy
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-from-namespace
  namespace: backend
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["allowed"]
EOF

# Task 2: Test from allowed namespace (should succeed)
echo "From 'allowed' namespace:"
kubectl exec -n allowed deploy/sleep -- curl -s httpbin.backend:8000/get -o /dev/null -w "%{http_code}\n"

# Task 3: Test from blocked namespace (should fail)
echo "From 'blocked' namespace:"
kubectl exec -n blocked deploy/sleep -- curl -s httpbin.backend:8000/get -o /dev/null -w "%{http_code}\n"
source.namespaces: Match by source namespace
source.principals: Match by service account
source.ipBlocks: Match by source IP CIDR
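The same policy shape works with a CIDR match instead of an identity; a sketch (the range is illustrative, and ipBlocks matches the source IP as seen by the proxy):

apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-from-cidr
  namespace: backend
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["10.0.0.0/16"]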
• Requests from the allowed namespace return 200
• Requests from the blocked namespace return 403

kubectl delete namespace backend allowed blocked
kubectl create namespace zero-trust
kubectl label namespace zero-trust istio-injection=enabled
kubectl apply -n zero-trust -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n zero-trust -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n zero-trust --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n zero-trust --timeout=120s
# Verify traffic works before policy
echo "Before deny-all policy:"
kubectl exec -n zero-trust deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"
# Task 1: Create deny-all policy (empty spec)
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: zero-trust
spec: {}
EOF

# Task 2: Verify traffic is blocked
echo "After deny-all policy:"
kubectl exec -n zero-trust deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

# Task 3: Create ALLOW policy to restore specific access
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-sleep-to-httpbin
  namespace: zero-trust
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/zero-trust/sa/sleep"]
EOF

echo "After ALLOW policy for sleep:"
kubectl exec -n zero-trust deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"
Deny-all: AuthorizationPolicy with empty spec {}
This blocks ALL traffic to ALL workloads in the namespace.
Then add specific ALLOW policies for permitted traffic.
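ALLOW policies can also be narrowed to specific operations. A minimal sketch (the method and path list here are illustrative) that permits only GET to two httpbin paths:
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-get-only
  namespace: zero-trust
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/zero-trust/sa/sleep"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/get", "/headers"]
EOF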
Before the deny-all policy, the request returns 200
After the deny-all policy, the request returns 403
After the ALLOW policy for sleep, the request returns 200
kubectl delete namespace zero-trust
kubectl create namespace dr-tls
kubectl label namespace dr-tls istio-injection=enabled
kubectl apply -n dr-tls -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n dr-tls -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n dr-tls --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n dr-tls --timeout=120s
# Task 1: Create DestinationRule with TLS settings
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-tls
  namespace: dr-tls
spec:
  host: httpbin
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF

# Task 2: Verify configuration
kubectl get destinationrule httpbin-tls -n dr-tls -o yaml | grep -A3 tls

# Test connectivity
echo "Testing with ISTIO_MUTUAL TLS:"
kubectl exec -n dr-tls deploy/sleep -- curl -s httpbin:8000/get -o /dev/null -w "%{http_code}\n"

# Check proxy config
POD=$(kubectl get pod -n dr-tls -l app=sleep -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config cluster $POD -n dr-tls --fqdn httpbin.dr-tls.svc.cluster.local -o json | jq '.[0].transportSocket' | head -10
DestinationRule TLS modes (client-side):
• DISABLE: No TLS
• SIMPLE: TLS (no client cert)
• MUTUAL: mTLS with specified certs
• ISTIO_MUTUAL: mTLS with Istio-managed certs
PeerAuthentication = server-side | DestinationRule TLS = client-side
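MUTUAL with operator-provided certs is typically used for destinations outside the mesh. A minimal sketch (the external host and cert paths are illustrative; the files would have to be mounted into the sidecar for this to actually work):
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: external-mtls
  namespace: dr-tls
spec:
  host: external.example.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/client.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/ca.pem
EOF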
Request returns 200 with ISTIO_MUTUAL TLS configured
kubectl delete namespace dr-tls
Master troubleshooting with istioctl analyze, proxy-status, proxy-config, and related debugging techniques.
# Create namespace with a misconfiguration
kubectl create namespace analyze-test
kubectl label namespace analyze-test istio-injection=enabled
kubectl apply -n analyze-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n analyze-test --timeout=120s

# Create a VirtualService that references a non-existent gateway (misconfiguration)
kubectl apply -n analyze-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: broken-vs
spec:
  hosts:
  - httpbin
  gateways:
  - non-existent-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF
Use istioctl analyze to detect configuration issues
# Task 1: Analyze all namespaces
echo "=== Analyzing all namespaces ==="
istioctl analyze --all-namespaces

# Task 2: Analyze specific namespace
echo ""
echo "=== Analyzing analyze-test namespace ==="
istioctl analyze -n analyze-test

# Task 3: Analyze local file before applying
cat <<EOF > /tmp/test-vs.yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: test-vs
  namespace: analyze-test
spec:
  hosts:
  - httpbin
  gateways:
  - another-missing-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

echo ""
echo "=== Analyzing local file ==="
istioctl analyze /tmp/test-vs.yaml -n analyze-test

# Show verbose output with warnings
echo ""
echo "=== Verbose analysis ==="
istioctl analyze -n analyze-test --output yaml
istioctl analyze detects common issues like:
• Missing gateways referenced by VirtualServices
• Missing destination hosts
• Conflicting configurations
• Schema validation errors
Use --all-namespaces for cluster-wide analysis.
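If a finding is known and accepted, it can be hidden so CI output stays clean. A sketch using the --suppress flag, which pairs a message code with the affected resource (check istioctl analyze --help for the exact format in your version):
istioctl analyze -n analyze-test --suppress "IST0101=VirtualService broken-vs.analyze-test"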
istioctl analyze reports IST0101 (ReferencedResourceNotFound) for the missing gateway references
kubectl delete namespace analyze-test
rm -f /tmp/test-vs.yaml
kubectl create namespace sync-test
kubectl label namespace sync-test istio-injection=enabled
kubectl apply -n sync-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n sync-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n sync-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n sync-test --timeout=120s
# Task 1: Check all proxy status
echo "=== All Proxy Status ==="
istioctl proxy-status

# Task 2: Check specific pod's proxy status
POD=$(kubectl get pod -n sync-test -l app=httpbin -o jsonpath='{.items[0].metadata.name}')
echo ""
echo "=== Specific Pod Status ==="
istioctl proxy-status $POD.sync-test

# Task 3: Check for any STALE or NOT SENT status
echo ""
echo "=== Checking for sync issues ==="
istioctl proxy-status | grep -E "STALE|NOT SENT" || echo "All proxies are SYNCED"

# Show xDS version details
echo ""
echo "=== xDS Version Details ==="
istioctl proxy-status | head -5
Status meanings:
• SYNCED: Proxy has latest config from Istiod
• NOT SENT: Istiod hasn't sent config (might be new)
• STALE: Istiod sent config but proxy hasn't ACKed
Columns: CDS (clusters), LDS (listeners), EDS (endpoints), RDS (routes)
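When a proxy stays STALE, it can help to look at what the sidecar actually holds. A read-only sketch that dumps the live Envoy config through the sidecar's pilot-agent (output is large, so it is truncated here):
kubectl exec -n sync-test deploy/httpbin -c istio-proxy -- \
  pilot-agent request GET config_dump | head -40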
All proxies report SYNCED status
kubectl delete namespace sync-test
kubectl create namespace cluster-test
kubectl label namespace cluster-test istio-injection=enabled
kubectl apply -n cluster-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n cluster-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n cluster-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n cluster-test --timeout=120s
# Get sleep pod name
SLEEP_POD=$(kubectl get pod -n cluster-test -l app=sleep -o jsonpath='{.items[0].metadata.name}')

# Task 1: List all clusters
echo "=== All Clusters ==="
istioctl proxy-config clusters $SLEEP_POD -n cluster-test | head -20

# Task 2: Filter for httpbin
echo ""
echo "=== httpbin Clusters ==="
istioctl proxy-config clusters $SLEEP_POD -n cluster-test --fqdn httpbin.cluster-test.svc.cluster.local

# Task 3: Detailed JSON output
echo ""
echo "=== Detailed Cluster Config ==="
istioctl proxy-config clusters $SLEEP_POD -n cluster-test --fqdn httpbin.cluster-test.svc.cluster.local -o json | jq '.[0] | {name, type, edsClusterConfig, connectTimeout}'

# Check cluster with subsets (if DestinationRule exists)
echo ""
echo "=== Cluster Summary ==="
istioctl proxy-config clusters $SLEEP_POD -n cluster-test | grep -c "cluster-test" | xargs echo "Total clusters in namespace:"
Cluster config shows:
• SERVICE FQDN: Full service name
• PORT: Service port
• SUBSET: DestinationRule subset name
• DESTINATION RULE: Applied DestinationRule
Use --fqdn to filter by service name.
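To see the SUBSET and DESTINATION RULE columns populated, you can apply a small DestinationRule and re-run the clusters command; a sketch using the httpbin sample's version: v1 label ($SLEEP_POD comes from the step above):
kubectl apply -n cluster-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-subsets
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
EOF

istioctl proxy-config clusters $SLEEP_POD -n cluster-test --fqdn httpbin.cluster-test.svc.cluster.local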
kubectl delete namespace cluster-test
kubectl create namespace route-test
kubectl label namespace route-test istio-injection=enabled
kubectl apply -n route-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n route-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n route-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n route-test --timeout=120s
# Create a VirtualService with specific routing
kubectl apply -n route-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-routes
spec:
  hosts:
  - httpbin
  http:
  - match:
    - uri:
        prefix: /status
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF
# Get sleep pod name
SLEEP_POD=$(kubectl get pod -n route-test -l app=sleep -o jsonpath='{.items[0].metadata.name}')

# Task 1: List all routes
echo "=== All Routes ==="
istioctl proxy-config routes $SLEEP_POD -n route-test | head -20

# Task 2: Filter for httpbin routes
echo ""
echo "=== httpbin Routes ==="
istioctl proxy-config routes $SLEEP_POD -n route-test --name 8000

# Task 3: View detailed route config
echo ""
echo "=== Detailed Route Config ==="
istioctl proxy-config routes $SLEEP_POD -n route-test --name 8000 -o json | jq '.[0].virtualHosts[] | select(.name | contains("httpbin"))' | head -40

# View all route names
echo ""
echo "=== Route Names ==="
istioctl proxy-config routes $SLEEP_POD -n route-test -o json | jq -r '.[].name' | sort -u
Route config shows:
• NAME: Route config name (usually port number)
• DOMAINS: Hosts the route applies to
• MATCH: URI/header match conditions
• VIRTUAL SERVICE: Applied VirtualService
Route names like 8000 correspond to service ports.
kubectl delete namespace route-test
kubectl create namespace listener-test
kubectl label namespace listener-test istio-injection=enabled
kubectl apply -n listener-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n listener-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n listener-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n listener-test --timeout=120s
# Get httpbin pod name
HTTPBIN_POD=$(kubectl get pod -n listener-test -l app=httpbin -o jsonpath='{.items[0].metadata.name}')

# Task 1: List all listeners
echo "=== All Listeners ==="
istioctl proxy-config listeners $HTTPBIN_POD -n listener-test

# Task 2: Identify inbound listeners (virtualInbound)
echo ""
echo "=== Inbound Listeners ==="
istioctl proxy-config listeners $HTTPBIN_POD -n listener-test | grep -E "INBOUND|virtualInbound"

# Identify outbound listeners
echo ""
echo "=== Outbound Listeners ==="
istioctl proxy-config listeners $HTTPBIN_POD -n listener-test | grep "OUTBOUND" | head -10

# Task 3: View specific listener details
echo ""
echo "=== Listener on port 8000 ==="
istioctl proxy-config listeners $HTTPBIN_POD -n listener-test --port 8000 -o json | jq '.[0] | {name, address, filterChains: [.filterChains[0].filters[0].name]}'

# Count listeners
echo ""
echo "=== Listener Summary ==="
echo "Total listeners: $(istioctl proxy-config listeners $HTTPBIN_POD -n listener-test | wc -l)"
Listener types:
• virtualInbound: Handles incoming traffic to the pod
• virtualOutbound: Handles outgoing traffic from the pod
• 0.0.0.0:15006: Inbound traffic interceptor
• 0.0.0.0:15001: Outbound traffic interceptor
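As a quick read-only check of the outbound interceptor on 15001, you can filter the listener list by port and pull out its name and bind address ($HTTPBIN_POD comes from the step above):
istioctl proxy-config listeners $HTTPBIN_POD -n listener-test --port 15001 -o json \
  | jq '.[0] | {name, address: .address.socketAddress}'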
kubectl delete namespace listener-test
kubectl create namespace endpoint-test
kubectl label namespace endpoint-test istio-injection=enabled
# Deploy httpbin with multiple replicas
kubectl apply -n endpoint-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl scale deployment httpbin -n endpoint-test --replicas=3
kubectl apply -n endpoint-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n endpoint-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n endpoint-test --timeout=120s
# Get sleep pod name
SLEEP_POD=$(kubectl get pod -n endpoint-test -l app=sleep -o jsonpath='{.items[0].metadata.name}')

# Task 1: List all endpoints
echo "=== All Endpoints (first 20) ==="
istioctl proxy-config endpoints $SLEEP_POD -n endpoint-test | head -20

# Task 2: Filter for httpbin endpoints
echo ""
echo "=== httpbin Endpoints ==="
istioctl proxy-config endpoints $SLEEP_POD -n endpoint-test --cluster "outbound|8000||httpbin.endpoint-test.svc.cluster.local"

# Task 3: Verify healthy endpoints
echo ""
echo "=== Endpoint Health Status ==="
istioctl proxy-config endpoints $SLEEP_POD -n endpoint-test --cluster "outbound|8000||httpbin.endpoint-test.svc.cluster.local" -o json | jq '.[].hostStatuses[] | {address: .address.socketAddress.address, health: .healthStatus.edsHealthStatus}'

# Compare with actual pod IPs
echo ""
echo "=== Actual httpbin Pod IPs ==="
kubectl get pods -n endpoint-test -l app=httpbin -o wide | awk '{print $6}'
Endpoint status:
• HEALTHY: Endpoint is available
• UNHEALTHY: Failed health checks
• DRAINING: Being removed
Cluster name format: outbound|PORT||FQDN
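A simple way to see EDS react is to change the replica count and re-run the endpoints query; the list should shrink to a single address ($SLEEP_POD comes from the step above, and the sleep is just a rough wait for propagation):
kubectl scale deployment httpbin -n endpoint-test --replicas=1
sleep 10
istioctl proxy-config endpoints $SLEEP_POD -n endpoint-test \
  --cluster "outbound|8000||httpbin.endpoint-test.svc.cluster.local"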
kubectl delete namespace endpoint-test
# Create namespace WITHOUT injection label (simulating issue)
kubectl create namespace injection-debug

# Deploy app (will NOT get sidecar)
kubectl apply -n injection-debug -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n injection-debug --timeout=120s
# Task 1: Check container count (should be 1, not 2)
echo "=== Container Count (1 = no sidecar, 2 = has sidecar) ==="
kubectl get pods -n injection-debug -o jsonpath='{range .items[*]}{.metadata.name}{" containers: "}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'

# Task 2: Check namespace labels
echo ""
echo "=== Namespace Labels ==="
kubectl get namespace injection-debug --show-labels

# Task 3: Check webhook configuration
echo ""
echo "=== Injection Webhook ==="
kubectl get mutatingwebhookconfigurations | grep istio

echo ""
echo "=== Webhook Details ==="
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o jsonpath='{.webhooks[0].namespaceSelector}' | jq .

# Task 4: Fix - Add injection label and redeploy
echo ""
echo "=== Fixing: Adding injection label ==="
kubectl label namespace injection-debug istio-injection=enabled

# Delete and recreate pod to get sidecar
kubectl delete pod -n injection-debug -l app=httpbin
kubectl wait --for=condition=ready pod -l app=httpbin -n injection-debug --timeout=120s

# Verify fix
echo ""
echo "=== After Fix: Container Count ==="
kubectl get pods -n injection-debug -o jsonpath='{range .items[*]}{.metadata.name}{" containers: "}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'
Injection troubleshooting checklist:
1. Namespace label: istio-injection=enabled
2. Pod annotation: sidecar.istio.io/inject: "true"
3. Webhook exists: istio-sidecar-injector
4. Istiod is running
Pods must be recreated after adding labels!
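Per-workload control (checklist item 2) is set on the pod template, not on a running pod. A sketch that opts a single Deployment out of injection even in a labeled namespace; use "true" instead of "false" to force injection the other way:
kubectl patch deployment httpbin -n injection-debug --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"false"}}}}}'
# Patching the template triggers a rollout, so the new pod picks up the setting automatically.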
Namespace carries the istio-injection=enabled label and the recreated pod has two containers
kubectl delete namespace injection-debug
# Ensure access logging is enabled
istioctl install --set meshConfig.accessLogFile=/dev/stdout -y
kubectl create namespace logs-test
kubectl label namespace logs-test istio-injection=enabled
kubectl apply -n logs-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n logs-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n logs-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n logs-test --timeout=120s
# Task 1: Generate traffic
echo "=== Generating traffic ==="
kubectl exec -n logs-test deploy/sleep -- curl -s httpbin:8000/get -o /dev/null
kubectl exec -n logs-test deploy/sleep -- curl -s httpbin:8000/status/500 -o /dev/null
kubectl exec -n logs-test deploy/sleep -- curl -s httpbin:8000/status/404 -o /dev/null

# View httpbin access logs (inbound)
echo ""
echo "=== httpbin Access Logs (last 10 lines) ==="
kubectl logs -n logs-test deploy/httpbin -c istio-proxy --tail=10

# Task 2: Parse key fields from JSON logs
echo ""
echo "=== Parsed Log Fields ==="
kubectl logs -n logs-test deploy/httpbin -c istio-proxy --tail=5 | head -1 | jq '{
  method: .method,
  path: .path,
  response_code: .response_code,
  response_flags: .response_flags,
  upstream_cluster: .upstream_cluster,
  duration: .duration
}' 2>/dev/null || echo "Logs may be in text format"

# Task 3: Find failed requests (non-2xx)
echo ""
echo "=== Failed Requests ==="
kubectl logs -n logs-test deploy/httpbin -c istio-proxy --tail=20 | grep -E '"response_code":(4|5)[0-9]{2}' || echo "Check text format logs for status codes"

# View sleep outbound logs
echo ""
echo "=== sleep Outbound Logs ==="
kubectl logs -n logs-test deploy/sleep -c istio-proxy --tail=5
Key access log fields:
• response_code: HTTP status code
• response_flags: Envoy-specific flags (UH=no healthy upstream, NR=no route)
• upstream_cluster: Destination service
• duration: Request time in ms
Logs appear in istio-proxy container.
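Instead of the mesh-wide accessLogFile setting, logging can also be scoped with the Telemetry API. A minimal sketch for just this namespace, assuming the built-in envoy access-log provider and an Istio release that serves telemetry.istio.io/v1 (older releases use v1alpha1):
kubectl apply -f - <<EOF
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: access-logging
  namespace: logs-test
spec:
  accessLogging:
  - providers:
    - name: envoy
EOF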
kubectl delete namespace logs-test
kubectl create namespace describe-test
kubectl label namespace describe-test istio-injection=enabled
kubectl apply -n describe-test -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
kubectl apply -n describe-test -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml
kubectl wait --for=condition=ready pod -l app=httpbin -n describe-test --timeout=120s
kubectl wait --for=condition=ready pod -l app=sleep -n describe-test --timeout=120s
# Create some Istio resources
kubectl apply -n describe-test -f - <<EOF
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin-vs
spec:
  hosts:
  - httpbin
  http:
  - timeout: 10s
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
---
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: httpbin-dr
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100
EOF
Use istioctl x describe to analyze the httpbin pod
# Get pod names
HTTPBIN_POD=$(kubectl get pod -n describe-test -l app=httpbin -o jsonpath='{.items[0].metadata.name}')

# Task 1, 2, 3: Describe the pod
echo "=== Pod Description ==="
istioctl x describe pod $HTTPBIN_POD -n describe-test

# Describe the service
echo ""
echo "=== Service Description ==="
istioctl x describe service httpbin -n describe-test

# Check what's affecting the workload
echo ""
echo "=== Applied Policies ==="
kubectl get virtualservice,destinationrule,peerauthentication,authorizationpolicy -n describe-test

# Verify routing from the sleep perspective
SLEEP_POD=$(kubectl get pod -n describe-test -l app=sleep -o jsonpath='{.items[0].metadata.name}')
echo ""
echo "=== Routing from Sleep to httpbin ==="
istioctl proxy-config routes $SLEEP_POD -n describe-test --name 8000 | grep httpbin
istioctl x describe shows:
• Applied VirtualServices and DestinationRules
• mTLS mode and PeerAuthentication policies
• AuthorizationPolicies affecting the workload
• Service ports and endpoints
The x indicates experimental command.
httpbin-vs is shown with its 10s timeout
httpbin-dr is shown
kubectl delete namespace describe-test
# Ensure Istio is installed and running
kubectl get pods -n istio-system -l app=istiod
# Task 1: View recent logs
echo "=== Recent Istiod Logs ==="
kubectl logs -n istio-system deploy/istiod --tail=20

# Task 2: Filter for errors and warnings
echo ""
echo "=== Errors and Warnings ==="
kubectl logs -n istio-system deploy/istiod --tail=100 | grep -iE "error|warn" | tail -10 || echo "No recent errors or warnings"

# Task 3: Check xDS push events
echo ""
echo "=== xDS Push Events ==="
kubectl logs -n istio-system deploy/istiod --tail=100 | grep -i "push" | tail -5 || echo "No recent push events"

# Task 4: Check CA/certificate logs
echo ""
echo "=== Certificate Events ==="
kubectl logs -n istio-system deploy/istiod --tail=100 | grep -iE "cert|ca|sign" | tail -5 || echo "No recent cert events"

# Check Istiod health
echo ""
echo "=== Istiod Health ==="
kubectl get pods -n istio-system -l app=istiod -o wide

# Check control plane version
echo ""
echo "=== Control Plane Info ==="
istioctl version
Key Istiod log patterns:
• Push: Configuration pushed to proxies
• cert: Certificate operations
• error/warn: Problems to investigate
• ads: Aggregated Discovery Service events
Use --since=5m to limit time range.
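A sketch combining the --since hint above with a temporary log-level bump for one scope (istioctl admin log is available in recent releases; remember to set the level back afterwards):
# Limit log output to the last 5 minutes
kubectl logs -n istio-system deploy/istiod --since=5m | tail -20

# Optionally raise the ads scope to debug while investigating, then revert
ISTIOD_POD=$(kubectl get pod -n istio-system -l app=istiod -o jsonpath='{.items[0].metadata.name}')
istioctl admin log "$ISTIOD_POD" --level ads:debug
istioctl admin log "$ISTIOD_POD" --level ads:info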
# No cleanup needed
You've completed all 46 practice questions. Review the domain breakdown below.