You might have heard that, starting with version 1.20, Docker is no longer the container runtime in Kubernetes. Docker was not the only container runtime at the time of this change, and the change didn't affect the core functionality of Kubernetes or how pods work in clusters. Still, there were users who relied on resources provided by the Docker engine, and a small sentence in the blog article calls out that a critical component would be affected: logging.

Most cloud providers of Kubernetes (GKE, EKS, or AKS) managed this upgrade by defaulting new clusters' runtime to containerd. If you deployed a new cluster on version 1.20, you wouldn't notice that anything had changed. Behind the scenes, the monitoring agents were upgraded along with the clusters to use containerd as the source for logs, so the providers' native tooling for exporting logs to their own logging services was properly migrated. No outages, no missing information. But for users relying on a third-party logging solution, switching to containerd would break the integration.

In my particular case, the amount of logs we generate is around 1TB a month due to the size of the cluster, and the resource allocation for that single pod to work with that amount of logs was significant.
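The reason third-party integrations break is that the on-disk log format changed: with Docker, each line in a container's log file was a JSON object, whereas containerd writes logs in the CRI format (timestamp, stream, a full/partial tag, then the message). A minimal sketch of what an agent has to parse in each case; the two sample lines reflect the actual formats, but the parser itself is illustrative, not any particular agent's implementation:

```python
import json
import re

# Docker (via dockershim) wrote each log line as a JSON object:
DOCKER_LINE = '{"log":"hello world\\n","stream":"stdout","time":"2021-01-01T12:00:00.000000000Z"}'

# containerd writes the CRI format: <timestamp> <stream> <F|P> <message>
# ("F" marks a full line, "P" a partial line that continues on the next one).
CRI_LINE = '2021-01-01T12:00:00.000000000Z stdout F hello world'

CRI_RE = re.compile(r'^(?P<time>\S+) (?P<stream>stdout|stderr) (?P<tag>[FP]) (?P<log>.*)$')

def parse_docker(line: str) -> dict:
    """Parse one Docker JSON log line into a common record shape."""
    record = json.loads(line)
    return {
        "time": record["time"],
        "stream": record["stream"],
        "log": record["log"].rstrip("\n"),
    }

def parse_cri(line: str) -> dict:
    """Parse one containerd CRI log line into the same record shape."""
    match = CRI_RE.match(line)
    if match is None:
        raise ValueError("not a CRI log line")
    fields = match.groupdict()
    return {"time": fields["time"], "stream": fields["stream"], "log": fields["log"]}

print(parse_docker(DOCKER_LINE))
print(parse_cri(CRI_LINE))
```

An agent hard-coded against the Docker JSON shape silently stops parsing the moment a node switches to containerd, which is why the runtime change shows up first in the logging pipeline.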