[Dec 03, 2024] New Real CKAD Exam Dumps Questions [Q16-Q30]

Pass Your CKAD Exam Easily with Accurate Linux Foundation Certified Kubernetes Application Developer Exam PDF Questions

NO.16
Context
Task:
A pod within the Deployment named buffalo-deployment in namespace gorilla is logging errors.
1) Look at the logs and identify error messages. The errors include:
User "system:serviceaccount:gorilla:default" cannot list resource "deployment" [...] in the namespace "gorilla"
2) Update the Deployment buffalo-deployment to resolve the errors in the logs of the Pod.
The buffalo-deployment's manifest can be found at ~/prompt/escargot/buffalo-deployment.yaml
Solution:
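The error message means the Pod runs under the namespace's default service account, which has no RBAC permission to list Deployments. A minimal sketch of one way to resolve it, assuming a dedicated ServiceAccount plus Role and RoleBinding may be created (the names gorilla-sa, deployment-reader, and deployment-reader-binding are hypothetical, not from the task):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: gorilla-sa                    # hypothetical name
      namespace: gorilla
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: deployment-reader             # hypothetical name
      namespace: gorilla
    rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: deployment-reader-binding     # hypothetical name
      namespace: gorilla
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: deployment-reader
    subjects:
    - kind: ServiceAccount
      name: gorilla-sa
      namespace: gorilla

After applying the above, set serviceAccountName: gorilla-sa under spec.template.spec in ~/prompt/escargot/buffalo-deployment.yaml and re-apply the manifest, so the Pod stops using the default service account.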
NO.17
Task
A deployment is failing on the cluster due to an incorrect image being specified. Locate the deployment, and fix the problem.
See the solution below.
Explanation
kubectl create deploy hello-deploy --image=nginx --dry-run=client -o yaml > hello-deploy.yaml
Update the deployment image to nginx:1.17.4:
kubectl set image deploy/hello-deploy nginx=nginx:1.17.4

NO.18
Task:
Create a Deployment named expose in the existing ckad00014 namespace running 6 replicas of a Pod. Specify a single container using the ifccncf/nginx:1.13.7 image. Add an environment variable named NGINX_PORT with the value 8001 to the container, then expose port 8001.
See the solution below (a sketch also follows NO.19).
Explanation
Solution:

NO.19
Task:
Update the Pod ckad00018-newpod in the ckad00018 namespace to use a NetworkPolicy allowing the Pod to send and receive traffic only to and from the pods web and db.
See the solution below.
Explanation
Solution:
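For NO.18, a minimal manifest sketch consistent with the task (the app: expose labels and the container name are assumptions, not given in the task):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: expose
      namespace: ckad00014
    spec:
      replicas: 6
      selector:
        matchLabels:
          app: expose               # assumed label
      template:
        metadata:
          labels:
            app: expose             # assumed label
        spec:
          containers:
          - name: expose            # assumed container name
            image: ifccncf/nginx:1.13.7
            ports:
            - containerPort: 8001   # expose port 8001 on the container
            env:
            - name: NGINX_PORT
              value: "8001"         # env values must be quoted strings in YAML

If the task is also read as requiring a Service, kubectl expose deployment expose --port=8001 -n ckad00014 would create one.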
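For NO.19, one reading is that traffic must be restricted in both directions between ckad00018-newpod and the web and db pods. A sketch, assuming the pods carry app: web, app: db, and app: newpod labels (all three labels are assumptions; if the exam environment already provides a suitable policy, relabeling the Pod with kubectl label pod so it matches that policy's podSelector is the whole fix):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: newpod-policy           # hypothetical name
      namespace: ckad00018
    spec:
      podSelector:
        matchLabels:
          app: newpod               # assumed label on ckad00018-newpod
      policyTypes:
      - Ingress
      - Egress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: web
        - podSelector:
            matchLabels:
              app: db
      egress:
      - to:
        - podSelector:
            matchLabels:
              app: web
        - podSelector:
            matchLabels:
              app: db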
NO.20
Refer to Exhibit.
Context
You are asked to prepare a Canary deployment for testing a new application release.
Task:
A Service named krill-service in the goshawk namespace points to 5 pods created by the Deployment named current-krill-deployment.
1) Create an identical Deployment named canary-krill-deployment, in the same namespace.
2) Modify the Deployment so that:
- A maximum number of 10 pods run in the goshawk namespace.
- 40% of the krill-service's traffic goes to the canary-krill-deployment pod(s).
Solution: (a sketch follows NO.23)

NO.21
Exhibit:
Context
Your application's namespace requires a specific service account to be used.
Task
Update the app-a deployment in the production namespace to run as the restrictedservice service account. The service account has already been created.
Solution: (a sketch follows NO.23)

NO.22
Task:
1) Update the rolling-update scaling configuration of the Deployment web1 in the ckad00015 namespace, setting maxSurge to 2 and maxUnavailable to 59.
2) Update the web1 Deployment to use version tag 1.13.7 for the ifccncf/nginx container image.
3) Perform a rollback of the web1 Deployment to its previous version.
See the solution below (a sketch follows NO.23).
Explanation
Solution:

NO.23
Exhibit:
Context
Developers occasionally need to submit pods that run periodically.
Task
Follow the steps below to create a pod that will start at a predetermined time and which runs to completion only once each time it is started:
* Create a YAML formatted Kubernetes manifest /opt/KDPD00301/periodic.yaml that runs the following shell command: date in a single busybox container. The command should run every minute and must complete within 22 seconds or be terminated by Kubernetes. The CronJob name and container name should both be hello.
* Create the resource in the above manifest and verify that the job executes successfully at least once.
Solution:
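For NO.20, one common reading of the numbers: with at most 10 pods in the namespace, running 6 replicas of current-krill-deployment and 4 of canary-krill-deployment puts 4 of the 10 endpoints behind krill-service (40%) on the canary, since a Service spreads traffic roughly evenly across its endpoints. A sketch under that interpretation:

    kubectl -n goshawk get deployment current-krill-deployment -o yaml > canary-krill-deployment.yaml
    # edit the file: change metadata.name to canary-krill-deployment and remove
    # server-generated fields (status, uid, resourceVersion, creationTimestamp)
    kubectl apply -f canary-krill-deployment.yaml
    kubectl -n goshawk scale deployment canary-krill-deployment --replicas=4
    kubectl -n goshawk scale deployment current-krill-deployment --replicas=6

Copying the Deployment verbatim keeps the pod labels, so both Deployments' pods stay behind the krill-service selector.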
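For NO.21, kubectl can patch the service account directly:

    kubectl -n production set serviceaccount deployment app-a restrictedservice

Equivalently, set serviceAccountName: restrictedservice under spec.template.spec in the deployment manifest and re-apply it.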
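For NO.22, a sketch of the imperative route. The container name nginx in the set image step is an assumption (check the real name with kubectl -n ckad00015 get deploy web1 -o jsonpath='{.spec.template.spec.containers[*].name}'); the maxSurge and maxUnavailable values are taken verbatim from the task:

    kubectl -n ckad00015 patch deployment web1 -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":2,"maxUnavailable":59}}}}'
    kubectl -n ckad00015 set image deployment/web1 nginx=ifccncf/nginx:1.13.7   # container name assumed
    kubectl -n ckad00015 rollout undo deployment/web1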
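For NO.23, a minimal manifest that matches the stated constraints: a schedule of every minute, activeDeadlineSeconds so Kubernetes terminates runs after 22 seconds, and hello for both the CronJob and container names (apiVersion batch/v1 assumes a cluster at v1.21 or newer):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: hello
    spec:
      schedule: "*/1 * * * *"           # run every minute
      jobTemplate:
        spec:
          activeDeadlineSeconds: 22     # terminate the job after 22 seconds
          template:
            spec:
              containers:
              - name: hello
                image: busybox
                command: ["/bin/sh", "-c", "date"]
              restartPolicy: Never

Then create the resource and verify it runs:

    kubectl apply -f /opt/KDPD00301/periodic.yaml
    kubectl get jobs --watch    # wait for a completed job spawned by the CronJob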
NO.24
Exhibit:
Context
A pod is running on the cluster but it is not responding.
Task
The desired behavior is to have Kubernetes restart the pod when an endpoint returns an HTTP 500 on the /healthz endpoint. The service, probe-pod, should never send traffic to the pod while it is failing. Please complete the following:
* The application has an endpoint, /started, that will indicate if it can accept traffic by returning an HTTP 200. If the endpoint returns an HTTP 500, the application has not yet finished initialization.
* The application has another endpoint, /healthz, that will indicate if the application is still working as expected by returning an HTTP 200. If the endpoint returns an HTTP 500, the application is no longer responsive.
* Configure the probe-pod pod provided to use these endpoints.
* The probes should use port 8080.
Solution: (a probe sketch follows NO.25)
In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds field tells the kubelet that it should wait 5 seconds before performing the first probe. To perform a probe, the kubelet executes the command cat /tmp/healthy in the target container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.
When the container starts, it executes this command:
/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
For the first 30 seconds of the container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy returns a failure code.
Create the Pod:
kubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml
Within 30 seconds, view the Pod events:
kubectl describe pod liveness-exec
The output indicates that no liveness probes have failed yet:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
24s 24s 1 {default-scheduler} Normal Scheduled Successfully assigned liveness-exec to worker0
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
After 35 seconds, view the Pod events again:
kubectl describe pod liveness-exec
At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
37s 37s 1 {default-scheduler} Normal Scheduled Successfully assigned liveness-exec to worker0
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Wait another 30 seconds, and verify that the container has been restarted:
kubectl get pod liveness-exec
The output shows that RESTARTS has been incremented:
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 1m

NO.25
Exhibit:
Context
It is always useful to look at the resources your applications are consuming in a cluster.
Task
* From the pods running in namespace cpu-stress, write the name only of the pod that is consuming the most CPU to the file /opt/KDOBG030l/pod.txt, which has already been created.
Solution:
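The NO.24 walkthrough above covers the generic exec-liveness example and does not actually use the /started and /healthz endpoints the task names. For the probe-pod task itself, a sketch of the probe block the requirements imply (the container name and image are placeholders to be kept from the provided pod spec; everything else follows the task: HTTP probes on port 8080, a startup probe for /started, a readiness probe so the Service withholds traffic from a failing pod, and a liveness probe so the kubelet restarts it):

    apiVersion: v1
    kind: Pod
    metadata:
      name: probe-pod
    spec:
      containers:
      - name: probe-pod               # assumed; keep whatever the provided spec uses
        image: registry.example/app   # placeholder; keep the provided image
        ports:
        - containerPort: 8080
        startupProbe:
          httpGet:
            path: /started            # HTTP 200 once initialization finishes
            port: 8080
        readinessProbe:
          httpGet:
            path: /healthz            # failing pods are removed from Service endpoints
            port: 8080
        livenessProbe:
          httpGet:
            path: /healthz            # HTTP 500 here triggers a container restart
            port: 8080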
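For NO.25, a one-liner sketch, assuming the metrics server is running so kubectl top works (--sort-by=cpu orders the output descending by CPU, and awk extracts the NAME column):

    kubectl top pod -n cpu-stress --sort-by=cpu --no-headers | head -1 | awk '{print $1}' > /opt/KDOBG030l/pod.txt
    cat /opt/KDOBG030l/pod.txt   # verify the file now holds only the pod name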
NO.26
Task:
A Dockerfile has been prepared at ~/human-stork/build/Dockerfile
1) Using the prepared Dockerfile, build a container image with the name macque and tag 3.0. You may install and use the tool of your choice.
2) Using the tool of your choice, export the built container image in OCI format and store it at ~/human-stork/macque_3.0.tar
See the solution below (a sketch follows NO.30).
Explanation
Solution:

NO.27
Refer to Exhibit.
Task:
A Dockerfile has been prepared at ~/human-stork/build/Dockerfile
1) Using the prepared Dockerfile, build a container image with the name macque and tag 3.0. You may install and use the tool of your choice.
2) Using the tool of your choice, export the built container image in OCI format and store it at ~/human-stork/macque_3.0.tar
Solution: (a sketch follows NO.30)

NO.28
Refer to Exhibit.
Context
Task:
A pod within the Deployment named buffalo-deployment and in namespace gorilla is logging errors.
1) Look at the logs and identify error messages. The errors include:
User "system:serviceaccount:gorilla:default" cannot list resource "deployment" [...] in the namespace "gorilla"
2) Update the Deployment buffalo-deployment to resolve the errors in the logs of the Pod.
The buffalo-deployment's manifest can be found at ~/prompt/escargot/buffalo-deployment.yaml
Solution: (see the sketch after NO.16)

NO.29
Context
Task:
Create a Pod named nginx-resources in the existing pod-resources namespace.
Specify a single container using the nginx:stable image.
Specify a resource request of 300m CPU and 1Gi of memory for the Pod's container.
Solution: (a sketch follows NO.30)

NO.30
Refer to Exhibit.
Task:
The pod for the Deployment named nosql in the craytisn namespace fails to start because its container runs out of resources.
Update the nosql Deployment so that the Pod:
1) Requests 160M of memory for its container.
2) Limits the memory to half the maximum memory constraint set for the craytisn namespace.
Solution:
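For NO.26 and NO.27, a sketch using podman, which can both build the image and export it as an OCI archive with one tool (docker build plus docker save would also work, but docker save writes a Docker-format archive rather than OCI; the output filename follows the task as reconstructed above):

    podman build -t macque:3.0 ~/human-stork/build/
    podman save --format oci-archive -o ~/human-stork/macque_3.0.tar macque:3.0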
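For NO.29, a minimal manifest matching the stated request (the container name is an assumption):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-resources
      namespace: pod-resources
    spec:
      containers:
      - name: nginx-resources     # assumed container name
        image: nginx:stable
        resources:
          requests:
            cpu: 300m             # 300 millicores
            memory: 1Gi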
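For NO.30, the maximum memory constraint would come from the namespace's LimitRange, so a sketch first reads it and then sets the resources. The 320M limit shown is purely illustrative; substitute half of whatever maximum the LimitRange actually reports:

    kubectl -n craytisn describe limitrange    # note the max memory constraint
    kubectl -n craytisn set resources deployment nosql --requests=memory=160M --limits=memory=320M   # 320M is illustrative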