Rocky Series Release Notes
0.5.0
New Features
Kuryr-Kubernetes now supports running the kuryr-controller service in Active/Passive HA mode. This is only possible when running those services as Pods on a Kubernetes cluster, as Kubernetes is used for leader election. It is also required to add a leader-elector container to the kuryr-controller Pods. HA is controlled by the [kubernetes]controller_ha option, which defaults to False.
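A minimal kuryr.conf sketch for enabling HA is shown below; only controller_ha is named in this note, while the controller_ha_elector_port option and its default of 16401 are assumptions taken from the kuryr-kubernetes configuration reference:

[kubernetes]
# Enable Active/Passive HA via the leader-elector sidecar container.
controller_ha = True
# Port on which the leader-elector container exposes the current leader
# (assumed option name and default).
controller_ha_elector_port = 16401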
An OpenShift route is a way to expose a service by giving it an externally-reachable hostname like www.example.com. A defined route and the endpoints identified by its service can be consumed by a router to provide named connectivity that allows external clients to reach your applications. Each route consists of a route name and target service details. To enable it, the following handlers should be added:
[kubernetes]
enabled_handlers=vif,lb,lbaasspec,ingresslb,ocproute
The CNI daemon now provides health checks allowing the deployer or the orchestration layer to probe it for readiness and liveness.
These health checks are served and executed by a Manager that runs as part of the CNI daemon and offers two endpoints indicating whether it is ready and alive.
The Manager validates the presence of NET_ADMIN capabilities, the health status of a transactional database, connectivity with the Kubernetes API, the number of CNI add failures, the health of CNI components, and the amount of memory being consumed. A health check fails if any of these validations does not pass, causing the orchestration layer to restart the daemon. More information can be found in the kuryr-kubernetes documentation.
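A sketch of tuning the health server in kuryr.conf; the [cni_health_server] section name, its options, defaults, and the endpoint paths in the comments are assumptions based on the kuryr-kubernetes configuration reference rather than this note:

[cni_health_server]
# Port serving the readiness (/ready) and liveness (/alive) endpoints
# (assumed default: 8090).
port = 8090
# Memory consumption limit in MiB for the liveness check; -1 is assumed
# to disable the memory check.
max_memory_usage = -1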
Introduced a pluggable interface for the Kuryr controller handlers. Each controller handler associates itself with a specific Kubernetes object kind and is expected to process the events of the watched Kubernetes API endpoints. The pluggable handlers framework enables both using externally provided handlers in the Kuryr Controller and controlling which handlers should be active.
To control which Kuryr Controller handlers should be active, the selected handlers need to be included in kuryr.conf under the 'kubernetes' section. If not specified, the Kuryr Controller will run the default handlers. For example, to enable only the 'vif' controller handler, set the following in kuryr.conf:
[kubernetes]
enabled_handlers=vif
Adds a new multi pool driver to support hybrid environments where some nodes are Bare Metal while others are running inside VMs, therefore having different VIF drivers (e.g., neutron-vif and nested-vlan).
This new multi pool driver is the default pool driver, even if a different vif_pool_driver is set in the config options. However, if the mappings between pools and pod VIF drivers are not provided in the pools_vif_drivers option of the vif_pool configuration section, only one pool driver will be loaded, using the standard vif_pool_driver and pod_vif_driver options selected in kuryr.conf (a sketch of this fallback follows this note).
To enable having different pools depending on the node's pod VIF type, you need to state the type of pool that you want for each pod VIF driver, e.g.:
[vif_pool]
pools_vif_drivers=nested:nested-vlan,neutron:neutron-vif
This will use the nested pool driver to handle pods whose VIF driver is nested-vlan, and the neutron pool driver to handle pods whose VIF driver is neutron-vif. When the controller requests a VIF for a pod on node X, it will first read the node's annotation for the pod VIF driver to use (e.g., pod_vif: nested-vlan) and then use the corresponding pool driver, which has the right pod VIF driver set.
Note that if no annotation is set on a node, the default pod_vif_driver is used.
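For reference, below is a sketch of the single-pool fallback described above, assuming both options live in the [kubernetes] section as in the kuryr-kubernetes configuration reference (the driver names shown are illustrative):

[kubernetes]
# Used only when pools_vif_drivers is unset; a single pool driver is loaded.
vif_pool_driver = nested
pod_vif_driver = nested-vlan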
Introduced a new subnet driver that is able to create a new subnet (including the network and its connection to the router) for each namespace creation event.
To enable it, the namespace subnet driver must be selected and the namespace handler needs to be enabled:
[kubernetes]
enabled_handlers=vif,lb,lbaasspec,namespace
pod_subnets_driver = namespace
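The Neutron resources used when creating per-namespace subnets are typically configured as well; the sketch below assumes the [namespace_subnet] section with pod_router and pod_subnet_pool options from the kuryr-kubernetes configuration reference, which this note does not mention:

[namespace_subnet]
# UUID of the Neutron router that new namespace subnets are attached to
# (assumed option name).
pod_router = <id of the router>
# UUID of the Neutron subnet pool that new subnets are allocated from
# (assumed option name).
pod_subnet_pool = <id of the subnet pool>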
Migrated all upstream gates to the Zuul v3 [1] native format. This change also introduces several new, for now experimental, gates, such as multinode and centos-7 based ones. These will be moved to check and made voting once they have behaved stably for some time.
Upgrade Notes
Legacy Kuryr deployment without running kuryr-daemon is now considered deprecated. That possibility will be completely removed in one of the next releases. Please note that this means the [cni_daemon]daemon_enabled option now defaults to True.
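For deployments that still need the legacy mode during the transition, the new default can be overridden as sketched below (the option and its default come from this note; running this way remains deprecated):

[cni_daemon]
# Temporarily restore the deprecated, daemon-less behaviour.
daemon_enabled = False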
Legacy Kuryr deployment relying on neutron-lbaas as the LBaaSv2 endpoint is now deprecated. The possibility of using it as Kuryr's LBaaSv2 endpoint will be completely removed in one of the next releases.
For the kuryr-kubernetes watcher, a new option 'watch_retry_timeout' has been added. The following should be modified in kuryr.conf:
[kubernetes]
# 'watch_retry_timeout' is optional; default = 60 if not set.
watch_retry_timeout = <time in seconds>
For the external services (type=LoadBalancer) case, a new field 'external_svc_net' was added and the 'external_svc_subnet' field became optional. The following should be modified in kuryr.conf:
[neutron_defaults]
external_svc_net = <id of external network>
# 'external_svc_subnet' is optional; set it in case multiple subnets
# are attached to 'external_svc_net'.
external_svc_subnet = <id of external subnet>
As OpenStack performance differs between production environments, a fixed LBaaS activation timeout might cause kuryr-kubernetes errors. In order to adapt to the environment, a new [neutron_defaults]lbaas_activation_timeout option was added.
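A sketch of raising this timeout in kuryr.conf; the default of 300 seconds is an assumption from the kuryr-kubernetes configuration reference, not stated in this note:

[neutron_defaults]
# Seconds to wait for the LBaaS load balancer to become ACTIVE
# (assumed default: 300).
lbaas_activation_timeout = 600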
Deprecation Notes
Running Kuryr-Kubernetes without the kuryr-daemon service is now deprecated. Motivations for that move include:
Discovery of bugs that are much easier to fix in kuryr-daemon.
Further improvements in Kuryr scalability (e.g. moving the choice of VIF from the pool into kuryr-daemon) are only possible when kuryr-daemon is present.
The possibility of running Kuryr-Kubernetes without kuryr-daemon will be removed in one of the future releases.
Running Kuryr-Kubernetes with neutron-lbaasv2 is now deprecated. The main motivation for this is the deprecation of the neutron-lbaas implementation in favour of Octavia.
The possibility of running Kuryr-Kubernetes with the lbaas handler pointing to anything but Octavia or SDN LBaaS implementations will be removed in future releases.
Bug Fixes
In production environments, the Kubernetes API server is often temporarily down and restored soon after. Since kuryr-kubernetes watches Kubernetes resources by connecting to the API server, the watcher fails to watch those resources while the API server is down. In order to fix this, the watcher now retries connecting to the Kubernetes API server for a specific time duration when an exception is raised.
It is very common for production environments to only allow access to the public network and not the associated public subnets. In that case, allocating a floating IP for the LoadBalancer service type fails. In order to fix this, an option for specifying the network ID was added and the subnet config option became optional.