Use Real Linux Foundation CKAD Dumps - 100% Exam Passing Guarantee [Q18-Q38]

Verified CKAD Q&As - Pass Guarantee CKAD Exam Dumps

NO.18 Exhibit:
Given a container that writes a log file in format A and a container that converts log files from format A to format B, create a deployment that runs both containers such that the log files from the first container are converted by the second container, emitting logs in format B.
Task:
* Create a deployment named deployment-xyz in the default namespace, that:
* Includes a primary lfccncf/busybox:1 container, named logger-dev
* Includes a sidecar lfccncf/fluentd:v0.12 container, named adapter-zen
* Mounts a shared volume /tmp/log on both containers, which does not persist when the pod is deleted
* Instructs the logger-dev container to run the provided command, which should output logs to /tmp/log/input.log in plain text format, with the example values shown in the exhibit
* The adapter-zen sidecar container should read /tmp/log/input.log and output the data to /tmp/log/output.* in Fluentd JSON format. Note that no knowledge of Fluentd is required to complete this task: all you will need to achieve this is to create the ConfigMap from the spec file provided at /opt/KDMC00102/fluentd-configmap.yaml, and mount that ConfigMap to /fluentd/etc in the adapter-zen sidecar container (a sketch appears under NO.27 below)
Solution:

NO.19 Context
Task:
1) First update the Deployment ckad00017-deployment in the ckad00017 namespace:
* To run 2 replicas of the pod
* Add the following label on the pod: role=userUI
2) Next, create a NodePort Service named cherry in the ckad00017 namespace exposing the ckad00017-deployment Deployment on TCP port 8888
Solution:

NO.20 Context
You have been tasked with scaling an existing deployment for availability, and creating a service to expose the deployment within your infrastructure.
Task
Start with the deployment named kdsn00101-deployment which has already been deployed to the namespace kdsn00101. Edit it to:
* Add the func=webFrontEnd key/value label to the pod template metadata to identify the pod for the service definition
* Have 4 replicas
Next, create and deploy in namespace kdsn00101 a service that accomplishes the following (see the sketch below):
* Exposes the service on TCP port 8080
* Is mapped to the pods defined by the specification of kdsn00101-deployment
* Is of type NodePort
* Has a name of cherry
Solution:
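A minimal sketch of one way to approach NO.20 (NO.19 follows the same pattern). Everything below comes from the task text except the targetPort, which assumes the backend container listens on port 80; on the exam, copy the real container port from the deployment's pod spec.

kubectl -n kdsn00101 edit deployment kdsn00101-deployment
# in the editor: add func: webFrontEnd under spec.template.metadata.labels
kubectl -n kdsn00101 scale deployment kdsn00101-deployment --replicas=4

apiVersion: v1
kind: Service
metadata:
  name: cherry
  namespace: kdsn00101
spec:
  type: NodePort
  selector:
    func: webFrontEnd        # matches the label added to the pod template
  ports:
  - protocol: TCP
    port: 8080               # service port required by the task
    targetPort: 80           # assumed container port; verify against the pod spec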
NO.21 Exhibit:
Context
As a Kubernetes application developer you will often find yourself needing to update a running application.
Task
Please complete the following:
* Update the app deployment in the kdpd00202 namespace with a maxSurge of 5% and a maxUnavailable of 2%
* Perform a rolling update of the web1 deployment, changing the lfccncf/nginx image version to 1.13
* Roll back the app deployment to the previous version
Solution:

NO.22 Exhibit:
Context
A user has reported that an application is unreachable due to a failing livenessProbe.
Task
Perform the following tasks:
* Find the broken pod and store its name and namespace to /opt/KDOB00401/broken.txt in the format shown in the exhibit. The output file has already been created
* Store the associated error events to a file /opt/KDOB00401/error.txt. The output file has already been created. You will need to use the -o wide output specifier with your command
* Fix the issue
Solution:
Create the Pod:
kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml
Within 30 seconds, view the Pod events:
kubectl describe pod liveness-exec
The output indicates that no liveness probes have failed yet:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
After 35 seconds, view the Pod events again:
kubectl describe pod liveness-exec
At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Wait another 30 seconds, and verify that the container has been restarted:
kubectl get pod liveness-exec
The output shows that RESTARTS has been incremented:
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 1m
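The walkthrough above reproduces the upstream exec-liveness example; for the graded task itself, a hedged sketch of the lookup steps (the file paths come from the task, while the pod and namespace placeholders are left unfilled because the exhibit supplies them):

kubectl get pods --all-namespaces -o wide
# the broken pod typically shows CrashLoopBackOff or a high RESTARTS count
echo "<pod-name> <namespace>" > /opt/KDOB00401/broken.txt   # use the exact format from the exhibit
kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name> > /opt/KDOB00401/error.txt
# fixing the issue usually means correcting the livenessProbe in the pod spec and re-creating the pod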
NO.23 Context
Task:
Create a Pod named nginx-resources in the existing pod-resources namespace.
Specify a single container using the nginx:stable image.
Specify a resource request of 300m CPU and 1Gi of memory for the Pod's container.
Solution:

NO.24 Exhibit:
Task
You are required to create a pod that requests a certain amount of CPU and memory, so it gets scheduled to a node that has those resources available.
* Create a pod named nginx-resources in the pod-resources namespace that requests a minimum of 200m CPU and 1Gi memory for its container
* The pod should use the nginx image
* The pod-resources namespace has already been created
Solution:
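A minimal manifest for NO.24; NO.23 is the same pattern with image nginx:stable and requests of 300m/1Gi. Everything comes from the task text except the container name, which the tasks leave unspecified:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-resources
  namespace: pod-resources
spec:
  containers:
  - name: nginx-resources   # container name assumed; the task only fixes the pod name
    image: nginx
    resources:
      requests:
        cpu: 200m           # minimum CPU for scheduling
        memory: 1Gi         # minimum memory for scheduling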
NO.25 Context
Task:
Modify the existing Deployment named broker-deployment running in namespace quetzal so that its containers:
1) Run with user ID 30000, and
2) Privilege escalation is forbidden
The broker-deployment's manifest file can be found at:
Solution:

NO.26 Context
Anytime a team needs to run a container on Kubernetes they will need to define a pod within which to run the container.
Task
Please complete the following:
* Create a YAML formatted pod manifest /opt/KDPD00101/pod1.yml to create a pod named app1 that runs a container named app1cont using image lfccncf/arg-output with these command line arguments: -lines 56 -F
* Create the pod with the kubectl command using the YAML file created in the previous step
* When the pod is running display summary data about the pod in JSON format using the kubectl command and redirect the output to a file named /opt/KDPD00101/out1.json
* All of the files you need to work with have been created, empty, for your convenience
Solution:

NO.27 Context
Given a container that writes a log file in format A and a container that converts log files from format A to format B, create a deployment that runs both containers such that the log files from the first container are converted by the second container, emitting logs in format B.
Task:
* Create a deployment named deployment-xyz in the default namespace, that:
* Includes a primary lfccncf/busybox:1 container, named logger-dev
* Includes a sidecar lfccncf/fluentd:v0.12 container, named adapter-zen
* Mounts a shared volume /tmp/log on both containers, which does not persist when the pod is deleted
* Instructs the logger-dev container to run the provided command, which should output logs to /tmp/log/input.log in plain text format, with the example values shown in the exhibit
* The adapter-zen sidecar container should read /tmp/log/input.log and output the data to /tmp/log/output.* in Fluentd JSON format.
Note that no knowledge of Fluentd is required to complete this task: all you will need to achieve this is to create the ConfigMap from the spec file provided at /opt/KDMC00102/fluentd-configmap.yaml, and mount that ConfigMap to /fluentd/etc in the adapter-zen sidecar container. A sketch follows below.
Solution:
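A hedged sketch for NO.18/NO.27. The logger command is a stand-in (the real one is given in the exhibit), and the ConfigMap is assumed to be named fluentd-config; on the exam, use whatever name /opt/KDMC00102/fluentd-configmap.yaml actually defines.

kubectl create -f /opt/KDMC00102/fluentd-configmap.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-xyz
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-xyz
  template:
    metadata:
      labels:
        app: deployment-xyz
    spec:
      containers:
      - name: logger-dev
        image: lfccncf/busybox:1
        # placeholder command; substitute the one from the exhibit
        command: ["/bin/sh", "-c", "while true; do date >> /tmp/log/input.log; sleep 1; done"]
        volumeMounts:
        - name: log-volume
          mountPath: /tmp/log
      - name: adapter-zen
        image: lfccncf/fluentd:v0.12
        volumeMounts:
        - name: log-volume
          mountPath: /tmp/log
        - name: fluentd-config      # assumed ConfigMap name
          mountPath: /fluentd/etc
      volumes:
      - name: log-volume
        emptyDir: {}                # shared between containers, gone when the pod is deleted
      - name: fluentd-config
        configMap:
          name: fluentd-config      # assumed; take the real name from the provided spec file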
NO.28 Context
A project that you are working on has a requirement for persistent data to be available.
Task
To facilitate this, perform the following tasks:
* Create a file on node sk8s-node-0 at /opt/KDSP00101/data/index.html with the content Acct=Finance
* Create a PersistentVolume named task-pv-volume using hostPath and allocate 1Gi to it, specifying that the volume is at /opt/KDSP00101/data on the cluster's node. The configuration should specify the access mode of ReadWriteOnce. It should define the StorageClass name exam for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume
* Create a PersistentVolumeClaim named task-pv-claim that requests a volume of at least 100Mi and specifies an access mode of ReadWriteOnce
* Create a pod that uses the PersistentVolumeClaim as a volume with a label app: my-storage-app, mounting the resulting volume to a mountPath /usr/share/nginx/html inside the pod
Solution:

NO.29 Context
Task:
1) Update the rolling-update scaling configuration of the Deployment web1 in the ckad00015 namespace, setting maxSurge to 2 and maxUnavailable to 59
2) Update the web1 Deployment to use version tag 1.13.7 for the lfccncf/nginx container image
3) Perform a rollback of the web1 Deployment to its previous version
Solution:

NO.30 Exhibit:
Context
Developers occasionally need to submit pods that run periodically.
Task
Follow the steps below to create a pod that will start at a predetermined time and which runs to completion only once each time it is started:
* Create a YAML formatted Kubernetes manifest /opt/KDPD00301/periodic.yaml that runs the following shell command: date in a single busybox container. The command should run every minute and must complete within 22 seconds or be terminated by Kubernetes. The CronJob name and container name should both be hello
* Create the resource in the above manifest and verify that the job executes successfully at least once
Solution:
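A minimal sketch for NO.30, assuming activeDeadlineSeconds is the mechanism used to enforce the 22-second limit (apiVersion batch/v1 assumes a reasonably recent cluster; older releases use batch/v1beta1):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"            # run every minute
  jobTemplate:
    spec:
      activeDeadlineSeconds: 22      # terminate the job if it exceeds 22 seconds
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["date"]
          restartPolicy: OnFailure

kubectl create -f /opt/KDPD00301/periodic.yaml
kubectl get jobs --watch             # confirm at least one successful completion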
NO.31 Exhibit:
Task
Create a new deployment for running nginx with the following parameters:
* Run the deployment in the kdpd00201 namespace. The namespace has already been created
* Name the deployment frontend and configure with 4 replicas
* Configure the pod with a container image of lfccncf/nginx:1.13.7
* Set an environment variable of NGINX_PORT=8080 and also expose that port for the container above
Solution:

NO.32 Exhibit:
Context
A container within the poller pod is hard-coded to connect to the nginxsvc service on port 90. As this port changes to 5050, an additional container needs to be added to the poller pod which adapts the container to connect to this new port. This should be realized as an ambassador container within the pod.
Task
* Update the nginxsvc service to serve on port 5050.
* Add an HAProxy container named haproxy bound to port 90 to the poller pod and deploy the enhanced pod. Use the image haproxy and inject the configuration located at /opt/KDMC00101/haproxy.cfg, with a ConfigMap named haproxy-config, mounted into the container so that haproxy.cfg is available at /usr/local/etc/haproxy/haproxy.cfg.
* Ensure that you update the args of the poller container to connect to localhost instead of nginxsvc so that the connection is correctly proxied to the new service endpoint. You must not modify the port of the endpoint in poller's args. The spec file used to create the initial poller pod is available in /opt/KDMC00101/poller.yaml
Solution:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 90
This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:
kubectl apply -f ./run-my-nginx.yaml
kubectl get pods -l run=my-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
my-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m
my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd
Check your pods' IPs:
kubectl get pods -l run=my-nginx -o yaml | grep podIP
podIP: 10.244.3.4
podIP: 10.244.2.5

NO.33 Exhibit:
Context
A pod is running on the cluster but it is not responding.
Task
The desired behavior is to have Kubernetes restart the pod when an endpoint returns an HTTP 500 on the /healthz endpoint. The service, probe-pod, should never send traffic to the pod while it is failing. Please complete the following:
* The application has an endpoint, /started, that will indicate if it can accept traffic by returning an HTTP 200. If the endpoint returns an HTTP 500, the application has not yet finished initialization
* The application has another endpoint, /healthz, that will indicate if the application is still working as expected by returning an HTTP 200. If the endpoint returns an HTTP 500 the application is no longer responsive
* Configure the probe-pod pod provided to use these endpoints
* The probes should use port 8080
Solution:
In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds field tells the kubelet that it should wait 5 seconds before performing the first probe. To perform a probe, the kubelet executes the command cat /tmp/healthy in the target container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.
When the container starts, it executes this command:
/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
For the first 30 seconds of the container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy returns a failure code.
Create the Pod:
kubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml
Within 30 seconds, view the Pod events:
kubectl describe pod liveness-exec
The output indicates that no liveness probes have failed yet:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
After 35 seconds, view the Pod events again:
kubectl describe pod liveness-exec
At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Wait another 30 seconds, and verify that the container has been restarted:
kubectl get pod liveness-exec
The output shows that RESTARTS has been incremented:
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 1m
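The walkthrough above covers the upstream exec-liveness example, but the NO.33 task itself calls for HTTP probes. A hedged sketch of the probe configuration for probe-pod (the container name and image here are assumptions; on the exam, add only the two probe blocks to the provided pod spec):

apiVersion: v1
kind: Pod
metadata:
  name: probe-pod
spec:
  containers:
  - name: probe-pod          # name/image assumed; keep the provided spec's values
    image: nginx
    readinessProbe:          # /started gates traffic until initialization finishes
      httpGet:
        path: /started
        port: 8080
    livenessProbe:           # /healthz triggers a restart when it returns HTTP 500
      httpGet:
        path: /healthz
        port: 8080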
NO.34 Context
You sometimes need to observe a pod's logs, and write those logs to a file for further analysis.
Task
Please complete the following:
* Deploy the counter pod to the cluster using the provided YAML spec file at /opt/KDOB00201/counter.yaml
* Retrieve all currently available application logs from the running pod and store them in the file /opt/KDOB00201/log_output.txt, which has already been created
Solution:

NO.35 Context
Anytime a team needs to run a container on Kubernetes they will need to define a pod within which to run the container.
Task
Please complete the following:
* Create a YAML formatted pod manifest /opt/KDPD00101/pod1.yml to create a pod named app1 that runs a container named app1cont using image lfccncf/arg-output with these command line arguments: -lines 56 -F
* Create the pod with the kubectl command using the YAML file created in the previous step
* When the pod is running display summary data about the pod in JSON format using the kubectl command and redirect the output to a file named /opt/KDPD00101/out1.json
* All of the files you need to work with have been created, empty, for your convenience
Solution:

NO.36 Exhibit:
Context
You are tasked to create a secret and consume the secret in a pod using environment variables as follows:
Task
* Create a secret named another-secret with a key/value pair: key1/value4
* Start an nginx pod named nginx-secret using container image nginx, and add an environment variable exposing the value of the secret key key1, using COOL_VARIABLE as the name for the environment variable inside the pod
Solution:
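A hedged sketch for NO.36; the secret creation and the secretKeyRef wiring both come straight from the task, while the container name is an assumption:

kubectl create secret generic another-secret --from-literal=key1=value4

apiVersion: v1
kind: Pod
metadata:
  name: nginx-secret
spec:
  containers:
  - name: nginx-secret       # container name assumed; the task only fixes the pod name
    image: nginx
    env:
    - name: COOL_VARIABLE
      valueFrom:
        secretKeyRef:
          name: another-secret
          key: key1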
NO.37 Exhibit:
Context
It is always useful to look at the resources your applications are consuming in a cluster.
Task
* From the pods running in namespace cpu-stress, write the name only of the pod that is consuming the most CPU to file /opt/KDOBG0301/pod.txt, which has already been created.
Solution:

NO.38 Context
Task
A Deployment named backend-deployment in namespace staging runs a web application on port 8081.
Solution:

Check the Free demo of our CKAD Exam Dumps with 33 Questions: https://www.dumpsmaterials.com/CKAD-real-torrent.html