7.1. Kubernetes Issues At Scale 900 Minions¶
7.1.1. Glossary¶
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
fuel-ccp: CCP stands for “Containerized Control Plane”. The goal of this project is to make building, running and managing production-ready OpenStack containers on top of Kubernetes an easy task for operators.
OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.
7.1.2. Setup¶
We had about 181 bare metal machines: 3 of them were used to host the Kubernetes control plane services (API servers, etcd, Kubernetes scheduler, etc.), while each of the remaining nodes ran 5 virtual machines, every VM acting as a Kubernetes minion node.
Each bare metal node has the following specifications:
HP ProLiant DL380 Gen9
CPU - 2x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
RAM - 264G
Storage - 3.0T on RAID on HP Smart Array P840 Controller, HDD - 12 x HP EH0600JDYTL
Network - 2x Intel Corporation Ethernet 10G 2P X710
From the Kubernetes point of view, the running OpenStack cluster is described by the following numbers:
OpenStack control plane services are running within ~80 pods on 6 nodes
~4500 pods are spread across all remaining nodes, 5 pods on each.
7.1.3. Kubernetes architecture analysis obstacles¶
During the 900-node tests we used the Prometheus monitoring tool to verify the resource consumption and the load put on the core system, Kubernetes, and OpenStack level services. During one of the Prometheus configuration optimisations, old data was deleted from the Prometheus storage to improve the Prometheus API speed; this data included the 900-node cluster information, so only partial data is available for the post-run investigation. This fact, however, does not influence the overall reference architecture analysis, as all issues observed during the containerized OpenStack setup testing were thoroughly documented and debugged.
To prevent monitoring data loss in the future (Q1 2017 timeframe and further), we need to make the following improvements to the monitoring setup:
Prometheus is optimized by default to be used as a real-time monitoring / alerting system, and the official recommendation from the Prometheus developers team is to keep the monitoring data retention at about 15 days so that the tool stays quick and responsive. To keep old data for post-run analytics, an external store needs to be configured.
We need to reconfigure the monitoring tool (Prometheus) to back its data up to one of the persistent time series databases supported by Prometheus as an external persistent data store (e.g. InfluxDB, Cassandra or OpenTSDB). This will allow us to keep old data for an extended amount of time for post-processing needs.
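A minimal sketch of such a configuration, assuming an InfluxDB instance exposing its Prometheus-compatible write endpoint (the hostname and database name are placeholders, not the setup used in these tests):

```yaml
# prometheus.yml fragment: local retention stays short (controlled by the
# server's retention flag), while samples are mirrored to an external
# long-term store via remote_write.
remote_write:
  - url: "http://influxdb.monitoring.local:8086/api/v1/prom/write?db=prometheus"
```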
7.1.4. Observed issues¶
7.1.4.1. Huge load on kube-apiserver¶
7.1.4.1.1. Symptoms¶
Both API servers running in the Kubernetes cluster were utilising up to 2000% of CPU (up to 45% of total node compute performance capacity) after we migrated them to hardware nodes. With the initial setup, where all nodes (including the Kubernetes control plane nodes) ran in a virtualized environment, the API servers were not workable at all.
7.1.4.1.2. Root cause¶
All services that are placed not on the Kubernetes masters (kubelet and kube-proxy on all minions) access kube-apiserver via a local nginx proxy. Most of those requests are watch requests that stay mostly idle after they are initiated (most of their timeouts are defined to be about 5-10 minutes). nginx was configured to cut idle connections after 3 seconds, which makes all clients reconnect and (the worst part) restart the aborted SSL sessions. On the server side this makes kube-apiserver consume up to 2000% of CPU, and other requests become very slow.
7.1.4.1.3. Solution¶
Set the proxy_timeout parameter to 10 minutes in the nginx.conf config file, which should be more than enough not to cut SSL connections before the requests time out by themselves. After this fix was applied, one api-server started consuming 100% of CPU (about 2% of total node compute performance capacity) and the second one about 200% of CPU (about 4% of total node compute performance capacity), with an average response time of 200-400 ms.
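For illustration, a minimal sketch of the local nginx stream proxy block with the fix applied (the upstream addresses and ports are placeholders; only the proxy_timeout value reflects the actual change):

```nginx
stream {
  upstream kube_apiserver {
    server 10.0.0.1:6443;        # placeholder master addresses
    server 10.0.0.2:6443;
  }
  server {
    listen 127.0.0.1:443;
    proxy_pass kube_apiserver;
    proxy_connect_timeout 1s;
    proxy_timeout 10m;           # raised from 3s so idle watch connections survive
  }
}
```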
7.1.4.1.4. Upstream issue (fixed)¶
Make the Kargo deployment tool set proxy_timeout to 10 minutes: the issue was fixed with a pull request by the Fuel CCP team.
7.1.4.2. KubeDNS cannot handle big cluster load with default settings¶
7.1.4.2.1. Symptoms¶
When deploying an OpenStack cluster at this scale, kubedns becomes unresponsive because of the high load. This ends up with the following error appearing very often in the logs of the dnsmasq container in the kubedns pod: "Maximum number of concurrent DNS queries reached." Also, dnsmasq containers sometimes get restarted due to hitting their memory limit.
7.1.4.2.2. Root cause¶
First of all, kubedns seems to fail often under high load (or even without load): during the experiment we observed continuous kubedns container restarts even on an empty (but big enough) Kubernetes cluster. The restarts are caused by the liveness check failing, although nothing notable is observed in any logs. Second, dnsmasq should take the load off kubedns, but it needs some tuning to behave as expected under a big load, otherwise it is useless.
7.1.4.2.3. Solution¶
This requires several levels of fixing (a configuration sketch follows the list):
1. Set higher limits for dnsmasq containers: they take on most of the load.
2. Add more replicas to the kubedns replication controller (we decided to stop at 6 replicas, as this solved the observed issue; for bigger clusters this number might need to be increased even more).
3. Increase the number of parallel connections dnsmasq should handle (we used --dns-forward-max=1000, which is the recommended parameter setup in the dnsmasq manuals).
4. Increase the size of the cache in dnsmasq: it has a hard limit of 10000 cache entries, which seems to be a reasonable amount.
5. Fix kubedns to handle this behaviour in a proper way.
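A sketch of how items #1-#4 could look in the kubedns replication controller spec (the container name, limits and field layout are illustrative, not the exact manifest used in these tests):

```yaml
spec:
  replicas: 6                       # item 2: more kubedns replicas
  template:
    spec:
      containers:
      - name: dnsmasq
        args:
        - --dns-forward-max=1000    # item 3: more parallel forwarded queries
        - --cache-size=10000        # item 4: dnsmasq's hard cache-size limit
        resources:
          limits:
            cpu: 500m               # item 1: higher (assumed) limits for dnsmasq
            memory: 512Mi
```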
7.1.4.2.4. Upstream issues (partially fixed)¶
Items #1 and #2 are fixed by making these settings configurable in Kargo by the Kubernetes team: issue, pull request.
The other fixes are still being implemented as of the time of this publication.
7.1.4.3. Kubernetes scheduler is ineffective with pod antiaffinity¶
7.1.4.3.1. Symptoms¶
It takes a significant amount of time for the scheduler to process pods with pod anti-affinity rules specified on them. It spends about 2-3 seconds on each pod, which makes the time needed to deploy an OpenStack cluster on 900 nodes unexpectedly long (about 3 hours just for scheduling). Anti-affinity rules are required for the OpenStack deployment to prevent several OpenStack compute nodes from being mixed together on one Kubernetes minion node.
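For reference, such an anti-affinity rule looks roughly as follows in the now-standard field-based syntax (at the time of these tests the rule was expressed through the alpha affinity annotation; the label value is an assumption for illustration). It forbids two pods carrying the same label from landing on the same minion:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: nova-compute              # assumed label of OpenStack compute pods
      topologyKey: kubernetes.io/hostname
```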
7.1.4.3.2. Root cause¶
According to the profiling results, most of the time is spent creating new Selectors to match existing pods against, which triggers a validation step. Basically, we get O(N^2) unnecessary validation steps (N being the number of pods), even if we have just 5 deployment entities covering most of the nodes.
7.1.4.3.3. Solution¶
A specific optimization was required in this case; it speeds up scheduling to about 300 ms per pod. This is still slow in terms of common sense (about 30 minutes spent just on scheduling pods for a 900-node OpenStack cluster), but it is close to reasonable. The solution lowers the number of very expensive operations to O(N), which is better, but still depends on the number of pods rather than deployments, so there is room for future improvement.
7.1.4.3.4. Upstream issues¶
The optimization was merged into master (pull request) and backported to the 1.5 branch, to be released in 1.5.2 (pull request).
7.1.4.4. Kubernetes scheduler needs to be deployed on separate node¶
7.1.4.4.1. Symptoms¶
During the huge OpenStack cluster deployment against pre-deployed Kubernetes, the scheduler, controller-manager and apiserver start competing for CPU cycles, as all of them are put under a big load. The scheduler is more resource-hungry (see the next problem), so we need a way to deploy it separately.
7.1.4.4.2. Root Cause¶
The same problem with Kubernetes scheduler efficiency at the scale of about 1000 nodes as in the issue above.
7.1.4.4.3. Solution¶
The Kubernetes scheduler was moved to a separate node manually; all other scheduler instances were manually killed to prevent them from moving to other nodes.
7.1.4.4.4. Upstream issues¶
An issue was created in the Kargo installer GitHub repository.
7.1.4.5. kube-apiserver has a low default rate limit¶
7.1.4.5.1. Symptoms¶
Different services start receiving the "429 Rate Limit Exceeded" HTTP error even though the kube-apiservers can take more load. This is linked to a scheduler bug (see below).
7.1.4.5.2. Solution¶
Raise the rate limit for the kube-apiserver process via the --max-requests-inflight option. It defaults to 400; in our case the cluster became workable at 2000. This number should be made configurable in the Kargo deployment tool, as bigger deployments might require increasing it accordingly.
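For illustration, the flag as it would appear on the kube-apiserver command line (all other flags omitted):

```sh
# Fragment of the kube-apiserver command line.
# The default is 400; in our case 2000 made the 900-node deployment workable.
kube-apiserver --max-requests-inflight=2000
```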
7.1.4.5.3. Upstream issues¶
No upstream issue or pull request was created for this problem.
7.1.4.6. Kubernetes scheduler can schedule wrongly¶
7.1.4.6.1. Symptoms¶
When many pods are being created (~4500 in our case of the OpenStack deployment) and the scheduler faces 429 errors from kube-apiserver (see above), it can schedule several pods of the same deployment on one node, in violation of the pod anti-affinity rules on them.
7.1.4.6.2. Root cause¶
This issue arises because the scheduler cache entry is evicted before the pod is actually processed.
7.1.4.6.3. Upstream issues¶
A pull request was accepted in the Kubernetes upstream.
7.1.4.7. Docker becomes unresponsive at random¶
7.1.4.7.1. Symptoms¶
The Docker process sometimes hangs on several nodes, which results in timeouts in the kubelet logs, and pods cannot be spawned or terminated successfully on the affected minion node. Although a bunch of similar issues have been fixed in Docker since 1.11, we are still observing these symptoms.
7.1.4.7.2. Workaround¶
The Docker daemon logs do not contain any notable information, so we had to restart the Docker service on the affected node (during these experiments we used Docker 1.12.3, but we have observed similar symptoms in 1.13 as well).
7.1.4.8. Calico start up time is too long¶
7.1.4.8.1. Symptoms¶
If we have to kill a Kubernetes node, Calico requires ~5 minutes to reestablish all mesh connections.
7.1.4.8.2. Root cause¶
Calico uses BGP, so without a route reflector it has to maintain a full mesh of BGP sessions between all nodes in the cluster.
7.1.4.8.3. Solution¶
We need to switch to using route reflectors in our clusters; then every node only needs to establish connections to the reflectors.
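As an illustration only, this is roughly how the full mesh is disabled and a global route-reflector peer is declared using the current Calico v3 resource format (the calicoctl syntax in use at the time of these tests differed; IPs and AS numbers are placeholders):

```yaml
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false   # stop building the full node-to-node mesh
  asNumber: 64512                # placeholder AS number
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: route-reflector-1
spec:
  peerIP: 192.0.2.10             # placeholder route reflector address
  asNumber: 64512
```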
7.1.4.8.4. Upstream Issues¶
None. For production use, the architecture of the Calico network should be adjusted to use route reflectors set up on selected nodes or on the switching fabric hardware. This will reduce the number of BGP connections per node and speed up the Calico startup.
7.1.5. Contributors¶
The following people are credited with contributing to this document:
Dina Belova <dbelova@mirantis.com>
Yuriy Taraday <ytaraday@mirantis.com>