Do not forget to look at the kubectl cheat sheet in the official Kubernetes documentation!

kubectl has excellent built-in autocompletion for bash and zsh, which makes it much easier to work with commands, flags, and objects such as namespaces and pod names. The documentation has ready-made instructions for enabling it:

# Bash
source <(kubectl completion bash)

# ...to load it in every session, add it to .bashrc:
mkdir ~/.kube
kubectl completion bash > ~/.kube/completion.bash.inc
printf "\n# Kubectl shell completion\nsource '$HOME/.kube/completion.bash.inc'\n" >> $HOME/.bashrc
source $HOME/.bashrc

# Zsh
source <(kubectl completion zsh)
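Zsh completion can be persisted in the same spirit; a small sketch not shown in the article, assuming compinit is already enabled in your ~/.zshrc:

# Persist zsh completion (assumes compinit is loaded in ~/.zshrc)
echo 'source <(kubectl completion zsh)' >> ~/.zshrc
source ~/.zshrc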
When kubectl works with multiple Kubernetes clusters, a context is used: it describes the parameters kubectl will use to reach a specific, target cluster. Getting the desired result with contexts can be difficult, though. To simplify your life, use the KUBECONFIG environment variable: it lets you list the configuration files that are merged together at run time. More about KUBECONFIG can be found in the official documentation.

Suppose you have configs for two clusters, cluster1-config and cluster2-config. The config of the current cluster can be dumped with the --minify flag (do the same while connected to the second cluster to get cluster2-config):

$ kubectl config view --minify > cluster1-config

The resulting cluster1-config looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: cluster1_ca.crt
    server: https://cluster1
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: cluster1
  name: cluster1
current-context: cluster1
kind: Config
preferences: {}
users:
- name: cluster1
  user:
    client-certificate: cluster1_apiserver.crt
    client-key: cluster1_apiserver.key
$ cat cluster2-config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: cluster2_ca.crt
    server: https://cluster2
  name: cluster2
contexts:
- context:
    cluster: cluster2
    user: cluster2
  name: cluster2
current-context: cluster2
kind: Config
preferences: {}
users:
- name: cluster2
  user:
    client-certificate: cluster2_apiserver.crt
    client-key: cluster2_apiserver.key
These files can now be merged via KUBECONFIG. The advantage of this merge is the ability to switch between contexts dynamically. A context is a mapping that includes the descriptions of a cluster and a user, plus a name by which this combination can be referenced to authenticate to the cluster and interact with it. The --kubeconfig flag lets you look at the context in each file:

$ kubectl --kubeconfig=cluster1-config config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         cluster1   cluster1   cluster1

$ kubectl --kubeconfig=cluster2-config config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         cluster2   cluster2   cluster2
Merging via KUBECONFIG shows both contexts. To store the current context, create a new empty file named cluster-merge and put it first in the list:

$ export KUBECONFIG=cluster-merge:cluster1-config:cluster2-config
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         cluster1   cluster1   cluster1
          cluster2   cluster2   cluster2
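As a side note not taken from the article: the merged view can also be written out as a single self-contained file with kubectl config view --flatten; the merged-config file name here is just an example:

$ KUBECONFIG=cluster-merge:cluster1-config:cluster2-config kubectl config view --flatten > merged-config
$ KUBECONFIG=merged-config kubectl config get-contexts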
KUBECONFIG is loaded in strict order, so the selected context matches the one specified as current-context in the first config that defines it. Switching the context to cluster2 moves the current marker (*) to that entry in the list, and kubectl commands start to apply to this (second) context:

$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         cluster1   cluster1   cluster1
          cluster2   cluster2   cluster2

$ kubectl config use-context cluster2
Switched to context "cluster2".

$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
          cluster1   cluster1   cluster1
*         cluster2   cluster2   cluster2

$ cat cluster-merge
apiVersion: v1
clusters: []
contexts: []
current-context: cluster2
kind: Config
preferences: {}
users: []
As you can see, the switch was recorded in the cluster-merge file as current-context. Kubernetes contexts and merges can be used in different ways. For example, you can create a context (cluster1_kube-system) that fixes the namespace (kube-system) for all kubectl commands executed in it:

$ kubectl config set-context cluster1_kube-system --cluster=cluster1 --namespace=kube-system --user=cluster1
Context "cluster1_kube-system" set.

$ cat cluster-merge
apiVersion: v1
clusters: []
contexts:
- context:
    cluster: cluster1
    namespace: kube-system
    user: cluster1
  name: cluster1_kube-system
current-context: cluster2
kind: Config
preferences: {}
users: []
$ kubectl config use-context cluster1_kube-system
Switched to context "cluster1_kube-system".

$ kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
default-http-backend-fwx3g       1/1       Running   0          28m
kube-addon-manager-cluster       1/1       Running   0          28m
kube-dns-268032401-snq3h         3/3       Running   0          28m
kubernetes-dashboard-b0thj       1/1       Running   0          28m
nginx-ingress-controller-b15xz   1/1       Running   0          28m
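Newer kubectl releases also offer a shortcut for pinning a namespace to the current context without creating a new one; a small sketch (kube-system is just an example namespace):

$ kubectl config set-context --current --namespace=kube-system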
To explore the Kubernetes API, start a local proxy and download the swagger.json file:

$ kubectl proxy
$ curl -O 127.0.0.1:8001/swagger.json
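The proxy also exposes the rest of the API on the same port, so individual endpoints can be queried directly; for example (the default namespace here is just an illustration):

$ curl 127.0.0.1:8001/api/v1/namespaces/default/pods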
You can also open http://localhost:8001/api/ in a browser and look at the paths available in the Kubernetes API. Since swagger.json is a JSON document, it can be viewed with jq. The jq utility is a lightweight command-line JSON processor that lets you filter, compare, and otherwise transform JSON data; read more about it here.

swagger.json helps to understand the Kubernetes API. It is a complex API whose functions are split into groups, which makes it harder to grasp:

$ cat swagger.json | jq '.paths | keys[]'
"/api/"
"/api/v1/"
"/api/v1/configmaps"
"/api/v1/endpoints"
"/api/v1/events"
"/api/v1/namespaces"
"/api/v1/nodes"
"/api/v1/persistentvolumeclaims"
"/api/v1/persistentvolumes"
"/api/v1/pods"
"/api/v1/podtemplates"
"/api/v1/replicationcontrollers"
"/api/v1/resourcequotas"
"/api/v1/secrets"
"/api/v1/serviceaccounts"
"/api/v1/services"
"/apis/"
"/apis/apps/"
"/apis/apps/v1beta1/"
"/apis/apps/v1beta1/statefulsets"
"/apis/autoscaling/"
"/apis/batch/"
"/apis/certificates.k8s.io/"
"/apis/extensions/"
"/apis/extensions/v1beta1/"
"/apis/extensions/v1beta1/daemonsets"
"/apis/extensions/v1beta1/deployments"
"/apis/extensions/v1beta1/horizontalpodautoscalers"
"/apis/extensions/v1beta1/ingresses"
"/apis/extensions/v1beta1/jobs"
"/apis/extensions/v1beta1/networkpolicies"
"/apis/extensions/v1beta1/replicasets"
"/apis/extensions/v1beta1/thirdpartyresources"
"/apis/policy/"
"/apis/policy/v1beta1/poddisruptionbudgets"
"/apis/rbac.authorization.k8s.io/"
"/apis/storage.k8s.io/"
"/logs/"
"/version/"
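Having the list of paths, the same jq can drill into a single entry; a sketch using the /api/v1/pods path from the listing above (the exact keys you get back depend on the swagger version):

$ cat swagger.json | jq '.paths["/api/v1/pods"] | keys[]'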
The same picture from the kubectl side is given by kubectl api-versions, which lists the API groups and versions supported by the cluster:

$ kubectl api-versions
apps/v1beta1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1beta1
autoscaling/v1
batch/v1
batch/v2alpha1
certificates.k8s.io/v1alpha1
coreos.com/v1
etcd.coreos.com/v1beta1
extensions/v1beta1
oidc.coreos.com/v1
policy/v1beta1
rbac.authorization.k8s.io/v1alpha1
storage.k8s.io/v1beta1
v1
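In newer kubectl versions (not the one this article was written against) there is also kubectl api-resources, which adds short names, API groups, and scope information:

$ kubectl api-resources
$ kubectl api-resources --namespaced=false   # only cluster-scoped resources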
The kubectl explain command helps to better understand what the different API components do:

$ kubectl explain
You must specify the type of resource to explain. Valid resource types include:
   * all
   * certificatesigningrequests (aka 'csr')
   * clusters (valid only for federation apiservers)
   * clusterrolebindings
   * clusterroles
   * componentstatuses (aka 'cs')
   * configmaps (aka 'cm')
   * daemonsets (aka 'ds')
   * deployments (aka 'deploy')
   * endpoints (aka 'ep')
   * events (aka 'ev')
   * horizontalpodautoscalers (aka 'hpa')
   * ingresses (aka 'ing')
   * jobs
   * limitranges (aka 'limits')
   * namespaces (aka 'ns')
   * networkpolicies
   * nodes (aka 'no')
   * persistentvolumeclaims (aka 'pvc')
   * persistentvolumes (aka 'pv')
   * pods (aka 'po')
   * poddisruptionbudgets (aka 'pdb')
   * podsecuritypolicies (aka 'psp')
   * podtemplates
   * replicasets (aka 'rs')
   * replicationcontrollers (aka 'rc')
   * resourcequotas (aka 'quota')
   * rolebindings
   * roles
   * secrets
   * serviceaccounts (aka 'sa')
   * services (aka 'svc')
   * statefulsets
   * storageclasses
   * thirdpartyresources
error: Required resource not specified.
See 'kubectl explain -h' for help and examples.
For example, to get the description of a Deployment resource, run kubectl explain deploy. The explain command works at different levels of nesting, which also lets you drill down into nested fields:

$ kubectl explain deploy.spec.template.spec.containers.livenessProbe.exec
RESOURCE: exec <Object>

DESCRIPTION:
     One and only one of the following should be specified. Exec specifies the
     action to take.

     ExecAction describes a "run in container" action.

FIELDS:
   command   <[]string>
     Command is the command line to execute inside the container, the working
     directory for the command is root ('/') in the container's filesystem.
     The command is simply exec'd, it is not run inside a shell, so traditional
     shell instructions ('|', etc) won't work. To use a shell, you need to
     explicitly call out to that shell. Exit status of 0 is treated as
     live/healthy and non-zero is unhealthy.
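Recent kubectl versions can also print the whole field tree of a resource at once; a short sketch:

$ kubectl explain deploy.spec.template.spec --recursive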
One of the output formats supported by kubectl is jsonpath. For example, you can run kubectl get pods --all-namespaces -o json to see the full output, and then filter the data you need from it, as in the pod-sorting example below. First, deploy a test application:

$ kubectl run shop --replicas=2 --image quay.io/coreos/example-app:v1.0 --port 80 --expose
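The old kubectl run generator labels the created pods with run=<name>, so the test application can be checked like this (a sketch assuming the shop example above):

$ kubectl get pods -l run=shop
$ kubectl get svc shop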
Sorting pods by creation time and printing the name and timestamp of each one is a job for jsonpath; more information about it can be found in the official documentation:

$ kubectl get pods --all-namespaces --sort-by='.metadata.creationTimestamp' -o jsonpath='{range .items[*]}{.metadata.name}, {.metadata.creationTimestamp}{"\n"}{end}'
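The same query can be made more readable with the custom-columns output format:

$ kubectl get pods --all-namespaces --sort-by='.metadata.creationTimestamp' -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp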
Substitute your namespace (your-namespace) and a label selector that matches the pods you are interested in, and fetch their logs. If there is more than one matching pod, the logs are fetched from all of them:

$ ns='<your-namespace>'
$ label='<yourkey>=<yourvalue>'
$ kubectl get pods -n $ns -l $label -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -I {} kubectl -n $ns logs {}
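Recent kubectl versions also accept a label selector directly in kubectl logs, which removes the need for xargs; a sketch with the same placeholders:

$ kubectl -n $ns logs -l $label --tail=100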
In the same way, substitute your namespace (your-namespace) and a label selector that matches the pod you need, and port-forward to it by name (the first pod found). Replace 8080:80 with the desired local and pod ports:

$ ns='<your-namespace>'
$ label='<yourkey>=<yourvalue>'
$ kubectl -n $ns get pod -l $label -o jsonpath='{.items[0].metadata.name}' | xargs -I{} kubectl -n $ns port-forward {} 8080:80
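Newer kubectl versions can also port-forward to a deployment or service and pick a pod themselves; a sketch assuming the shop deployment created earlier:

$ kubectl port-forward deployment/shop 8080:80
$ kubectl port-forward svc/shop 8080:80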
Combining kubectl's JSON output with jq makes complex queries possible, such as filtering all resources by their creation time. For example, this counts how many pods run on each node:

$ kubectl get pods --all-namespaces -o json | jq '.items[] | .spec.nodeName' -r | sort | uniq -c
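To actually filter by creation time, jq's select() can compare the RFC 3339 timestamps as strings; a sketch with an arbitrary example date:

$ kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.metadata.creationTimestamp > "2017-08-01T00:00:00Z") | .metadata.name'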
More about label selectors can be learned from kubectl explain deployment.spec.selector. You can filter objects by the presence or absence of a label:

$ kubectl get nodes -l 'master'

or:

$ kubectl get nodes -l '!master'
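Such labels are attached and removed with kubectl label; a sketch with a placeholder node name:

$ kubectl label node <node-name> master=true   # add the label
$ kubectl label node <node-name> master-       # remove it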
To see all labels, add the --show-labels argument to a get command for any Kubernetes object:

$ kubectl get nodes --all-namespaces --show-labels
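To print only selected labels as separate columns, there is the -L (--label-columns) flag; a sketch using a standard node label:

$ kubectl get nodes -L kubernetes.io/hostname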
This jq query groups pods by the node they run on:

$ kubectl get pods --all-namespaces -o json | jq '.items | map({podName: .metadata.name, nodeName: .spec.nodeName}) | group_by(.nodeName) | map({nodeName: .[0].nodeName, pods: map(.podName)})'
And this one prints the name and external IP address of every node:

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name} {.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'
Source: https://habr.com/ru/post/333956/