TTL Controller for Finished Resources
FEATURE STATE: Kubernetes v1.12 alpha
This feature is currently in an alpha state, meaning:
- The version names contain alpha (e.g. v1alpha1).
- Might be buggy. Enabling the feature may expose bugs. Disabled by default.
- Support for the feature may be dropped at any time without notice.
- The API may change in incompatible ways in a later software release without notice.
- Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
The TTL controller provides a TTL mechanism to limit the lifetime of resource objects that have finished execution. The TTL controller only handles Jobs for now, and may be expanded to handle other resources that will finish execution, such as Pods and custom resources.
Alpha Disclaimer: this feature is currently alpha, and can be enabled with the TTLAfterFinished feature gate.
TTL Controller
The TTL controller only supports Jobs for now. A cluster operator can use this feature to clean up finished Jobs (either Complete or Failed) automatically by specifying the .spec.ttlSecondsAfterFinished field of a Job, as in this example. The TTL controller will assume that a resource is eligible to be cleaned up TTL seconds after the resource has finished, in other words, when the TTL has expired. When the TTL controller cleans up a resource, it will delete it cascadingly, i.e. delete its dependent objects together with it. Note that when the resource is deleted, its lifecycle guarantees, such as finalizers, will be honored.
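For illustration, here is a minimal sketch of such a Job manifest, in the spirit of the linked example; the Job name, image, and TTL value of 100 seconds are all illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  # The Job (and its dependent Pods) becomes eligible for deletion
  # 100 seconds after the Job finishes, whether Complete or Failed.
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```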
The TTL seconds can be set at any time. Here are some examples for setting the .spec.ttlSecondsAfterFinished field of a Job:
- Specify this field in the resource manifest, so that a Job can be cleaned up automatically some time after it finishes.
- Set this field of existing, already finished resources to adopt this new feature (see the patch sketch after this list).
- Use a mutating admission webhook to set this field dynamically at resource creation time. Cluster administrators can use this to enforce a TTL policy for finished resources.
- Use a mutating admission webhook to set this field dynamically after the resource has finished, and choose different TTL values based on resource status, labels, etc.
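As a sketch of the second option above, the field could be added to an already finished Job with a strategic merge patch; the file name, Job name, and TTL value below are illustrative:

```yaml
# ttl-patch.yaml: add a TTL to an existing, already finished Job
spec:
  ttlSecondsAfterFinished: 60
```

A patch like this could then be applied with a command such as kubectl patch job <job-name> --patch "$(cat ttl-patch.yaml)", after which the Job would become eligible for cleanup 60 seconds after it finished.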
Caveat
Updating TTL Seconds
Note that the TTL period, e.g. the .spec.ttlSecondsAfterFinished field of Jobs, can be modified after the resource is created or has finished. However, once the Job becomes eligible to be deleted (when the TTL has expired), the system won't guarantee that the Jobs will be kept, even if an update to extend the TTL returns a successful API response.
Time Skew
Because the TTL controller uses timestamps stored in the Kubernetes resources to determine whether the TTL has expired or not, this feature is sensitive to time skew in the cluster, which may cause the TTL controller to clean up resource objects at the wrong time.
In Kubernetes, it's required to run NTP on all nodes (see #6159) to avoid time skew. Clocks aren't always correct, but the difference should be very small. Please be aware of this risk when setting a non-zero TTL.
What's next
- Clean up Jobs automatically
- Design doc