kubectl delete stuck - what to do and why this happens | Devops Junction

In this article, we are going to see what to do when your kubectl delete command is stuck.

When you are trying to delete Kubernetes resources such as

  • pod
  • job
  • ingress
  • PVC - Persistent Volume Claim
  • PV - Persistent Volume

you might notice sometimes that your kubectl delete pod|job|ingress|pvc is stuck.

The resource would not actually be deleted. Let us learn why this happens and how to solve it.


What to do when your kubectl delete is stuck

When your kubectl delete command is stuck, you can execute the following command, and you will see that your delete command/task now completes (if it was stuck at deleting).

 

kubectl patch <pod|job|ingress|pvc> <name-of-resource> \
-p '{"metadata":{"finalizers":[]}}' --type=merge

The preceding command simply patches the corresponding resource that you are trying to delete and that is stuck.

But remember, this is only a workaround. To understand the underlying reason why it got stuck, continue reading.

If you are trying to delete an ingress, you just need to use the following command

kubectl patch ingress nameofingress -p '{"metadata":{"finalizers":[]}}' --type=merge

If you are trying to delete a Persistent Volume Claim and it is stuck, you can patch that resource like this

kubectl patch pvc nameofpvc -p '{"metadata":{"finalizers":[]}}' --type=merge

Similarly, you can frame the command for other resources such as Persistent Volumes and even pods.
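The workaround can be generalized into a small sketch like the one below. The resource kind and name (`pvc`, `mypvc`) are placeholders; substitute your own. It also sanity-checks the patch body locally before sending it, assuming `python3` is available:

```shell
# KIND and NAME are placeholders; replace them with your stuck resource
KIND=pvc
NAME=mypvc
PATCH='{"metadata":{"finalizers":[]}}'

# sanity-check locally that the patch body is valid JSON and
# really empties the finalizers list (no cluster needed for this step)
echo "$PATCH" | python3 -c 'import json,sys; d=json.load(sys.stdin); assert d["metadata"]["finalizers"] == []; print("patch body OK")'

# then apply it against the cluster:
# kubectl patch "$KIND" "$NAME" -p "$PATCH" --type=merge
```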

Now let us see why this works and what it is all about.

 

How it works / What are we doing

Kubernetes has its own way of managing memory and resources, and with it, its own garbage collection system.

Unless you are new to computers and programming, you have probably already heard the term Garbage Collection, or simply GC.

It is a systematic way to reclaim unused/unutilized space. Programming languages like Java and Go, and the servers built on them, all have this process to ensure memory is managed optimally.

Kubernetes, being a modern solution, also manages its resources and performs garbage collection when resources are deleted, and in various other contexts too.

Now, coming back to our kubectl delete and why we are talking about garbage collection.

Kubernetes adds a special metadata field called finalizers when it creates resources that have multiple dependencies.

This is how Kubernetes documentation defines Finalizer

Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion. Finalizers alert controllers to clean up resources the deleted object owned.

Simply put, finalizers are keys that tell the Kubernetes API that there are a few resources to be deleted or taken care of before this particular resource can be deleted.
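As a concrete example, a PVC typically carries the `kubernetes.io/pvc-protection` finalizer in its metadata, which blocks deletion while any pod still uses the claim. This is an excerpt of what you would see with `kubectl get pvc <name> -o yaml` (sketch; `mypvc` is a placeholder name):

```yaml
# excerpt of a PVC manifest (sketch)
metadata:
  name: mypvc
  finalizers:
    - kubernetes.io/pvc-protection   # removed by Kubernetes once no pod uses the claim
```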

For example, when you try to delete a Persistent Volume, the associated Persistent Volume Claims and the pods bound to it would be affected, so you must follow the proper order.

When you try to delete an Ingress, it might be associated with infrastructure items like load balancers and target groups, which need to be deleted before the ingress can be let go.

If we do not delete them in the right order, we end up with a lot of unused items in our infrastructure and in the Kubernetes cluster too.

But why finalizers now? How are they related to what we did?

If you look at the command once again, you can see that we are simply patching or updating the finalizers with an empty array or list

-p '{"metadata":{"finalizers":[]}}' --type=merge

By doing this, we are simply resetting the finalizers, or emptying the list.

If we do that, the Kubernetes API ignores the dependencies and continues with the deletion.
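What the merge patch does to the object can be simulated locally. The sketch below (a simplified one-level merge, assuming `python3` is available; no cluster involved) merges the patch into a toy manifest and shows that the finalizers list is replaced wholesale by the empty list:

```shell
ORIGINAL='{"metadata":{"name":"mypvc","finalizers":["kubernetes.io/pvc-protection"]}}'
PATCH='{"metadata":{"finalizers":[]}}'

# a JSON merge patch replaces whole fields: the non-empty finalizers
# list in ORIGINAL is overwritten by the empty list from PATCH
MERGED=$(python3 - "$ORIGINAL" "$PATCH" <<'EOF'
import json, sys
orig, patch = json.loads(sys.argv[1]), json.loads(sys.argv[2])
orig["metadata"].update(patch["metadata"])  # field-level merge, lists replaced wholesale
print(json.dumps(orig))
EOF
)
echo "$MERGED"   # finalizers is now empty, so deletion can proceed
```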

But of course, this comes at the cost of leaving the dependent resources stranded.

These must be cleaned up manually later, or they will sit in your infrastructure for a decade and you will keep paying for them.

I remember this famous meme. Hope you can relate to it.

Hutch (Sociosploit) on Twitter: "No joke. I learned this lesson the hard way— don't forget to power down your AWS GPU password cracker. The P3.16xLarge runs at $17,625 for the month 😬."

 

What is the recommended way when your kubectl delete is stuck

In most cases, we run out of patience with our kubectl delete and go for the kubectl patch solution given earlier.

But we need to understand that Kubernetes has some final clean-ups to do, and that takes time.

For example, when you delete an ingress, the target group, the load balancer, etc. need to be deleted before the ingress object can be removed from Kubernetes.

So the first recommendation I can give is to wait for a while. Only act when you really feel that it has taken more than a few minutes.

The second recommendation is really simple, and we do it in real life too: when things are stuck, we just have to find out why and unstick/clear them.

That is what we need to do in this context too: find out why it is taking time, and check the events of the resource for clues

kubectl describe <pvc|pv|ingress|job> name-of-the-resource

In most cases, you will find out why it is stuck from the events listed there.

If you find no errors and everything seems perfect, yet it is still blocked?

You know, just kill it with kubectl patch. Life is too short to spend on a single task anyway 😀

Hope this article helps.

Cheers
Sarav AK

Follow me on Linkedin My Profile
Follow DevopsJunction on Facebook or Twitter
For more practical videos and tutorials, subscribe to our channel.
