Abstract: Containerization has become a new approach that facilitates application deployment and delivers scalability, productivity, security, and portability. Docker, the first promising container platform, was introduced in 2013 to automate the deployment of applications and offers many advantages for delivering cloud-native services. However, its widespread use has revealed problems such as performance overhead. To deal with these problems, Kubernetes was introduced in 2015 as a container orchestration platform that simplifies the management of large numbers of Docker containers. However, fairness, which has been adopted in other platforms such as Apache Hadoop, YARN, and Mesos, is still missing in Kubernetes. Assigning resource limits fairly among pods in Kubernetes is challenging because some applications require intensive resources, such as CPU and memory, that should be maximized to satisfy them. In this paper, we present a novel approach to assigning resource limits fairly among pods in a Kubernetes environment.
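To make the fairness objective concrete, the sketch below shows a standard max-min (water-filling) division of a CPU budget among pods with different demands. It is only a minimal illustration of the general fairness concept referenced above, not the allocation scheme proposed in this paper; the pod names, demands, and capacity figures are hypothetical.

```python
# Illustrative sketch only: a standard max-min (water-filling) fair division of a
# CPU budget among pods. This is NOT the allocation scheme proposed in the paper;
# the pod names and numbers below are hypothetical.

def max_min_fair_share(capacity: float, demands: dict[str, float]) -> dict[str, float]:
    """Split `capacity` among pods so that no pod receives more than its demand
    and leftover capacity is redistributed equally among still-unsatisfied pods."""
    allocation = {pod: 0.0 for pod in demands}
    remaining = capacity
    unsatisfied = set(demands)
    while unsatisfied and remaining > 1e-9:
        share = remaining / len(unsatisfied)            # equal split of what is left
        for pod in list(unsatisfied):
            grant = min(share, demands[pod] - allocation[pod])
            allocation[pod] += grant
            remaining -= grant
            if allocation[pod] >= demands[pod] - 1e-9:  # demand met: drop from later rounds
                unsatisfied.discard(pod)
    return allocation

if __name__ == "__main__":
    # 4000 millicores of CPU shared by three pods with different demands (hypothetical).
    demands = {"web": 1000.0, "batch": 2500.0, "cache": 1500.0}
    print(max_min_fair_share(4000.0, demands))
    # -> web gets its full 1000m; batch and cache each end up with 1500m.
```

In a Kubernetes setting, the resulting per-pod allocations would correspond to the CPU and memory limits written into each pod's resource specification; how such limits are derived and enforced fairly is the subject of the approach presented in this paper.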