Docker vs Kubernetes
What is Docker?
Docker is a tool, first released in 2013, that runs applications inside isolated containers on a computer. Each container bundles everything the application needs in order to run, including its dependencies and configuration. Containers are effortlessly ported to other computers through Docker’s use of images, which are saved snapshots of a container. A Docker image will run the same on any computer system you install it on, with no other configuration required.
Docker is beneficial during the development, testing, and deployment phases of a software project. Because a Docker container works the same in every situation, it simplifies the deployment of a software application into a broader environment.
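As a sketch of this portability, a minimal Dockerfile bundles an application together with its dependencies into an image. The application, file names, and base image here are illustrative assumptions, not from the original text:

```dockerfile
# Hypothetical Python web app packaged as a Docker image.
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code itself.
COPY . .
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` produces an image that runs identically, via `docker run myapp`, on any machine with Docker installed.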
Docker is a lightweight containerization technology that has gained widespread popularity in the cloud and application-packaging world. It is an open-source framework that automates the deployment of applications in lightweight, portable containers. It uses a number of the Linux kernel’s features, such as namespaces, cgroups, and AppArmor profiles, to sandbox processes into configurable virtual environments. Although the idea of container virtualization isn’t new, it has been getting attention lately, with major vendors like Red Hat, Microsoft, VMware, SaltStack, IBM, and HP throwing their weight behind newcomer Docker.
What is Kubernetes?
Kubernetes is an open-source project for managing complex containerized applications. Initially developed at Google, it was first released in 2014. It’s now managed by an open-source software foundation, the Cloud Native Computing Foundation.
Whereas Docker manages the containers for one or a few parts of a single application, Kubernetes manages many containers together. Unlike Docker, Kubernetes isn’t a tool for managing containers during your development or testing phase. It’s for making sure that your containers are all running well once they’re in production.
Kubernetes (also known as K8s) is a production-grade container orchestration system. It is an open-source cluster management system initially developed by three Google employees during the summer of 2014; it grew rapidly and became the first project donated to the Cloud Native Computing Foundation (CNCF).
It is fundamentally an open-source toolkit for building a fault-tolerant, scalable platform designed to automate and centrally manage containerized applications. With Kubernetes, you can manage your containerized applications more efficiently.
Difference between Kubernetes and Docker
Set up and installation
- Kubernetes requires a series of manual steps to set up the master and worker node components across a cluster of nodes.
- Installing Docker is a matter of a single command on Linux platforms like Debian, Ubuntu, and CentOS.
- Kubernetes can run on several platforms: from your laptop, to VMs on a cloud provider, to a rack of bare-metal servers. For setting up a single-node K8s cluster, one can use Minikube.
- To install a single-node Docker Swarm or Kubernetes cluster, one can install Docker for Mac or Docker for Windows.
- Kubernetes support for Windows Server is still in beta.
- Docker has official support for Windows 10 and Windows Server 2016 and 1709.
- Kubernetes client and server packages need to be upgraded manually on all systems.
- Upgrading Docker Engine under Docker for Mac and Windows takes just one click.
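The installation gap described above can be sketched with two commands. Both are from the official install paths (Docker’s convenience script and Minikube); they assume root privileges, network access, and a hypervisor or container driver for Minikube:

```shell
# Single-command Docker Engine install on Debian/Ubuntu/CentOS
# (Docker's official convenience script):
curl -fsSL https://get.docker.com | sh

# Single-node Kubernetes cluster on a laptop with Minikube:
minikube start
kubectl get nodes   # verify the one-node cluster is up
```

A multi-node production Kubernetes cluster, by contrast, involves provisioning and joining each master and worker node individually (or using a tool such as kubeadm).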
Working in two systems
- Kubernetes functions at the application level rather than at the hardware level. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run well on Kubernetes.
- Kubernetes can run on top of Docker but requires you to know the command-line interface (CLI) of both to access your data over the API.
- There is a Kubernetes client called kubectl which talks to the Kubernetes API server (kube-apiserver) running on your master node.
- Unlike the master components, which usually run on a single node (unless a high-availability setup is explicitly configured), the node components run on every node:
- kubelet: the agent running on each node; it monitors container health, reports to the master, and listens for new commands from the kube-apiserver
- kube-proxy: maintains the network rules on each node
- container runtime: the software that runs the containers (e.g. Docker, rkt, runc)
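The kubectl client described above is the usual way to inspect these components. A few representative commands, assuming a configured cluster (the node name is illustrative):

```shell
kubectl get nodes                  # list nodes; each kubelet reports its status here
kubectl get pods --all-namespaces  # all pods scheduled across the cluster
kubectl describe node worker-1     # per-node detail gathered by that node's kubelet
```

Each command is a request to the kube-apiserver on the master, which aggregates what the kubelets report.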
The Docker platform is available in two editions:
- Docker Community Edition
- Docker Enterprise Edition
- Docker Community Edition comes with community-based support forums, whereas Docker Enterprise Edition offers enterprise-class support with defined SLAs and private support channels.
- Docker Community and Enterprise Edition both come with Docker Swarm mode by default. Additionally, Kubernetes is supported under Docker Enterprise Edition.
- For Docker Swarm mode, one can use a Docker Compose file and the `docker stack deploy` CLI to deploy an application across the cluster nodes.
- The `docker stack` CLI deploys a new stack or updates an existing stack. The client and daemon API must both be at least 1.25 to use this command. One can use the `docker version` command on the client to check the client and daemon API versions.
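A minimal sketch of such a deployment, assuming an initialized Swarm (the service name, stack name, and image are illustrative):

```yaml
# docker-compose.yml
version: "3.3"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3   # Swarm spreads these replicas across cluster nodes
    ports:
      - "8080:80"
```

Deploy it with `docker stack deploy -c docker-compose.yml mystack`, then check placement with `docker stack services mystack`. Note that the `deploy` key is honored only by `docker stack deploy`, not by plain `docker-compose up`.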
Logging and Monitoring
- Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. A few popular logging tools are listed below:
- Fluentd is an open-source data collector for a unified logging layer. It’s written in Ruby with a plugin-oriented architecture.
- It helps to gather, route, and store logs from different sources. While Fluentd is optimized to be easily extended using a plugin architecture, Fluent Bit is designed for performance.
- Fluent Bit is compact and written in C, so it can be shipped to minimal IoT devices while remaining fast enough to transfer a huge quantity of logs. Moreover, it has built-in Kubernetes support. It’s an especially compact tool designed to forward logs from all nodes.
- Other tools, like Stackdriver Logging provided by GCP, Logz.io, and other third-party drivers, are available too.
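Before wiring up Fluentd or another collector, the built-in starting point for Kubernetes logs is kubectl itself (the pod and deployment names below are illustrative):

```shell
kubectl logs my-pod                # logs from a pod's container
kubectl logs -f deployment/my-app  # stream (follow) logs from a deployment
kubectl logs my-pod --previous     # logs from the previous, crashed instance
```

These read from the node-local container logs, which is exactly why a collector such as Fluentd is needed for durable, centralized storage.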
- Logging driver plugins are available in Docker 17.05 and higher. Logging capabilities in Docker are exposed in the form of drivers, which is very handy since one gets to choose how and where log messages should be shipped.
- Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers.
- Each Docker daemon has a default logging driver, which every container uses unless you configure it to use a different one.
- In addition to using the logging drivers included with Docker, you can also implement and use logging driver plugins.
- To configure the Docker daemon to default to a specific logging driver, set the value of `log-driver` to the name of the logging driver in the `daemon.json` file, which is located in `/etc/docker/` on Linux hosts.
- The following example explicitly sets the default logging driver to syslog:
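Per Docker’s documentation, the `/etc/docker/daemon.json` entry is:

```json
{
  "log-driver": "syslog"
}
```

Restart the Docker daemon afterwards for the change to take effect; it applies only to containers started from then on.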
- There are various open-source tools available for Kubernetes application monitoring, such as:
- Heapster: installed as a pod inside Kubernetes, it collects data and events from the containers and pods within the cluster.
- Prometheus: an open-source Cloud Native Computing Foundation (CNCF) project that offers powerful querying capabilities, visualization, and alerting.
- Grafana: used in conjunction with Heapster for visualizing data within your Kubernetes environment.
- InfluxDB: a highly available database platform that stores the data captured by all the Heapster pods.
- cAdvisor: focuses on container-level performance and resource usage. It comes embedded directly into the kubelet and should automatically discover active containers.
- When you start a container, you can configure it to use a different logging driver than the Docker daemon default using the `--log-driver` flag. If the logging driver has configurable options, you can set them using one or more occurrences of the `--log-opt <NAME>=<VALUE>` flag. Even if the container uses the default logging driver, it can use different configurable options.
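For example, overriding the driver for a single container (the image is illustrative, and the syslog address is an assumption for your environment):

```shell
docker run \
  --log-driver syslog \
  --log-opt syslog-address=udp://192.168.0.42:514 \
  alpine echo "hello"
```

Here only this container ships its logs to the remote syslog endpoint; every other container keeps the daemon’s default driver.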
According to Docker’s blog post on scaling Swarm clusters, published in November 2015, Docker Swarm has been scaled and performance-tested up to 30,000 containers and 1,000 nodes, with the following setup:
- Discovery backend: Consul
- 1,000 nodes
- 30 containers per node
- Manager: AWS m4.xlarge (4 CPUs, 16GB RAM)
- Nodes: AWS t2.micro (1 CPU, 1 GB RAM)
- Container image: Ubuntu 14.04
| Percentile | API Response Time | Scheduling Delay |
|------------|-------------------|------------------|
| 50th       | 150ms             | 230ms            |
| 90th       | 200ms             | 250ms            |
| 99th       | 360ms             | 400ms            |
As per the official Kubernetes documentation, K8s v1.12 supports clusters with up to 5,000 nodes, provided all of the following criteria are met:
- No more than 5,000 nodes
- No more than 150,000 total pods
- No more than 300,000 total containers
- No more than 100 pods per node