Rocky Series Release Notes¶
18.1.15¶
New Features¶
Added the possibility to disable the openrc v2 download in the dashboard. The new variable horizon_show_keystone_v2_rc can be set to False to remove the entry for the openrc v2 download.
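A minimal sketch of how this could be applied, assuming the override is placed in user_variables.yml:
# Hide the openrc v2 download entry in the Horizon user menu (illustrative)
horizon_show_keystone_v2_rc: False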
18.1.8¶
New Features¶
A new option has been added allowing deployers to disable any and all containers on a given host. The option no_containers is a boolean which, if undefined, will default to false. This option can be added to any host in the openstack_user_config.yml or via an override in conf.d. When this option is set to true the given host will be treated as a baremetal machine. The new option mirrors the existing environmental option is_metal but allows deployers to target specific hosts instead of entire groups.
log_hosts:
  infra-1:
    ip: 172.16.24.2
    no_containers: true
Upgrade Notes¶
The variable tempest_image_dir_owner is removed in favour of using the default Ansible user to create the image directory.
The data structure for ceph_gpg_keys has been changed to be a list of dicts, each of which is passed directly to the applicable apt_key/rpm_key module. As such, any overrides would need to be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.
The default values for ceph_gpg_keys have been changed for all supported platforms and now use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.
A new value epel_gpg_keys can be overridden to use a different GPG key for the EPEL-7 RPM package repo instead of the vendored key used by default.
18.1.6¶
New Features¶
Added support for Mistral to be built as part of the repo build process. Added the os-mistral-install.yml file to deploy Mistral to hosts in the mistral_all host group.
The list of enabled filters for the Cinder scheduler, scheduler_default_filters in cinder.conf, could previously be defined only via an entry in cinder_cinder_conf_overrides. You now have the option to instead define a list variable, cinder_scheduler_default_filters, that defines the enabled filters. This is helpful if you either want to disable one of the filters enabled by default (at the time of writing, these are AvailabilityZoneFilter, CapacityFilter, and CapabilitiesFilter), or if conversely you want to add a filter that is normally not enabled, such as DifferentBackendFilter or InstanceLocalityFilter. For example, to enable the InstanceLocalityFilter in addition to the normally enabled scheduler filters, use the following variable.
cinder_scheduler_default_filters:
  - AvailabilityZoneFilter
  - CapacityFilter
  - CapabilitiesFilter
  - InstanceLocalityFilter
Deprecation Notes¶
There was previously an environment variable (ANSIBLE_ROLE_FETCH_MODE) to set whether the roles in ansible-role-requirements.yml were fetched using ansible-galaxy or using git. However, the default has been git for some time, and since the use of the ceph-ansible repository for Ceph deployment, using ansible-galaxy to download the roles does not work properly. This functionality has therefore been removed.
18.1.5¶
New Features¶
Deployers can now define a cinder backend's volume type as explicitly private or public by setting the option public to true or false.
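As a non-authoritative sketch, the flag might be set per backend inside a cinder_backends definition; the backend name and the placement of the public key shown here are assumptions used only to illustrate the idea.
# Illustrative only: create the volume type for this backend as private
cinder_backends:
  lvm:
    volume_backend_name: LVM_iSCSI
    volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group: cinder-volumes
    public: false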
Known Issues¶
The number of inotify watch instances available is limited system wide via a sysctl setting. It is possible for certain processes, such as pypi-server, or elasticsearch from the ops repo, to consume a large number of inotify watches. If the system wide maximum is reached, then any process on the host or in any container on the host will be unable to create a new inotify watch. Systemd uses inotify watches, and if there are none available it is unable to restart services. The processes which synchronise the repo server contents between infra nodes also rely on inotify watches. If the repo servers fail to synchronise, or services fail to restart when expected, check the inotify watch limit, which is defined in the sysctl value fs.inotify.max_user_watches. Patches have merged to increase these limits, but existing environments, or those which have not upgraded to a recent enough point release, may have to apply an increased limit manually.
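For environments that need to raise the limit manually, the sketch below shows one way to do it with Ansible's sysctl module; the target group and the chosen value are illustrative assumptions, not the values applied by the merged patches.
# Example playbook snippet to raise the inotify watch limit on affected hosts
- hosts: repo_all
  become: true
  tasks:
    - name: Increase fs.inotify.max_user_watches
      sysctl:
        name: fs.inotify.max_user_watches
        value: 1048576   # example value, size to suit the environment
        sysctl_set: yes
        state: present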
Bug Fixes¶
Fixes neutron HA routers by enabling neutron-l3-agent to invoke the required helper script.
18.1.4¶
New Features¶
It is now possible to modify the NTP server options in chrony using security_ntp_server_options.
Chrony gained a new configuration option to synchronize the system clock back to the RTC using the security_ntp_sync_rtc variable. It is disabled by default.
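A brief sketch of how both variables might be combined in user_variables.yml; the option string shown is only an example.
# Illustrative chrony overrides for the ansible-hardening role
security_ntp_server_options: "iburst"
security_ntp_sync_rtc: true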
Upgrade Notes¶
The data structure for galera_gpg_keys has been changed to be a dict passed directly to the applicable apt_key/rpm_key module. As such, any overrides would need to be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.
The default values for galera_gpg_keys have been changed for all supported platforms and now use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.
The data structure for rabbitmq_gpg_keys has been changed to be a dict passed directly to the applicable apt_key/rpm_key module. As such, any overrides would need to be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.
The default values for rabbitmq_gpg_keys have been changed for all supported platforms and now use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.
Deprecation Notes¶
The following variable name changes have been implemented in order to better reflect their purpose.
lxc_host_machine_quota_disabled -> lxc_host_btrfs_quota_disabled
lxc_host_machine_qgroup_space_limit -> lxc_host_btrfs_qgroup_space_limit
lxc_host_machine_qgroup_compression_limit -> lxc_host_btrfs_qgroup_compression_limit
Bug Fixes¶
When using LXC containers with a copy-on-write back-end, the lxc_hosts role execution would fail due to undefined variables with the nspawn_host_ prefix. This issue has now been fixed.
In https://review.openstack.org/582633 an adjustment was made to the openstack-ansible wrapper which mistakenly changed the intended behaviour. The wrapper is only meant to include the extra-vars and invoke the inventory if ansible-playbook was executed from inside the openstack-ansible repository clone (typically /opt/openstack-ansible), but the change made the path irrelevant. This has now been fixed - ansible-playbook and ansible will only invoke the inventory and include extra vars if it is invoked from inside the git clone path.
18.1.3¶
New Features¶
It is now possible to use NFS mountpoints with the role by using the nova_nfs_client variable, which is useful for using NFS for instance data and saves.
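A hedged example of what such a mount definition might look like; the key names and values below are assumptions for illustration and should be checked against the os_nova role defaults.
# Illustrative only: mount an NFS export for instance data
nova_nfs_client:
  - local_path: /var/lib/nova/instances
    remote_path: /srv/nova
    server: 203.0.113.20
    type: nfs
    options: "_netdev,auto"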
Upgrade Notes¶
The data structure for galera_client_gpg_keys has been changed to be a dict passed directly to the applicable apt_key/rpm_key module. As such, any overrides would need to be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.
The default values for galera_client_gpg_keys have been changed for all supported platforms and now use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.
18.1.1¶
New Features¶
This role now optionally enables your compute nodes’ KVM kernel module nested virtualization capabilities, by setting nova_nested_virt_enabled to true. Depending on your distribution and libvirt version, you might need to set additional variables to fully enable nested virtualization. For details, please see https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html#nested-guest-support.
You can now set the Libvirt CPU model and feature flags from the appropriate entry under the nova_virt_types dictionary variable (normally kvm). nova_cpu_model is a string value that sets the CPU model; this value is ignored if you set any nova_cpu_mode other than custom. nova_cpu_model_extra_flags is a list that allows you to specify extra CPU feature flags not normally passed through with host-model, or the custom CPU model of your choice.
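A sketch of how these settings might be expressed, assuming they sit under the kvm entry of nova_virt_types as described above; the model name and flags are examples, and any other default keys of the kvm entry may need to be preserved when overriding the dictionary.
# Illustrative override: custom CPU model with extra feature flags
nova_virt_types:
  kvm:
    nova_cpu_mode: custom
    nova_cpu_model: Haswell-noTSX
    nova_cpu_model_extra_flags:
      - vmx
      - pcid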
Upgrade Notes¶
If your configuration previously set the libvirt/cpu_model and/or libvirt/cpu_model_extra_flags variables in a nova_nova_conf_overrides dictionary, you should consider moving those to nova_cpu_model and nova_cpu_model_extra_flags in the appropriate entry (normally kvm) in the nova_virt_types dictionary.
Bug Fixes¶
With the release of CentOS 7.6, deployments were breaking and becoming very slow when we restarted dbus in order to catch some PolicyKit changes. However, those changes were never actually used, so they were happening for no reason. We no longer make any modifications to the systemd-machined configuration and/or PolicyKit, in order to maintain upstream compatibility.
18.1.0¶
New Features¶
Horizon has, since OSA’s inception, been deployed with HTTPS access enabled, and has had no way to turn it off. Some use-cases may want to access via HTTP instead, so this patch enables the following.
Listen via HTTPS on a load balancer, but via HTTP on the horizon host and have the load balancer forward the correct headers. It will do this by default in the integrated build due to the presence of the load balancer, so the current behaviour is retained.
Enable HTTPS on the horizon host without a load balancer. This is the role’s default behaviour which matches what it always has been.
Disable HTTPS entirely by setting haproxy_ssl: no (which will also disable HTTPS on haproxy). This setting is inherited by default by horizon_enable_ssl, which is a new option.
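A minimal sketch, assuming the override is placed in user_variables.yml:
# Illustrative: disable HTTPS on both haproxy and the Horizon vhost
haproxy_ssl: no
To keep HTTPS on the load balancer while serving Horizon over plain HTTP, horizon_enable_ssl could instead be overridden to no on its own.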
Known Issues¶
The original stable/rocky release (18.0.x) contained a reference to a pre-release state of the ceph-ansible role (its 3.2.0beta1 release). That role, and hence any OpenStack-Ansible stable/rocky release prior to 18.1.0, should not be used to deploy production Ceph clusters. As of this release, stable/rocky tracks the ceph-ansible role’s stable-3.1 branch.
Upgrade Notes¶
Configurations using the ceph-ansible role (that is, those applying the ceph-install.yml or ceph-rgw-install.yml playbooks) should be very carefully reviewed if you are upgrading from prior stable/rocky releases. Those releases shipped a pre-release version of ceph-ansible that was unintentionally included in ansible-role-requirements.yml.
The variables ceilometer_oslomsg_rpc_servers and ceilometer_oslomsg_notify_servers have been removed in favour of using ceilometer_oslomsg_rpc_host_group and ceilometer_oslomsg_notify_host_group instead.
Deprecation Notes¶
The package cache on the repo server has been removed. If caching of packages is desired, it should be set up outside of OpenStack-Ansible, and the variable lxc_container_cache_files (for LXC containers) or nspawn_container_cache_files_from_host (for nspawn containers) can be used to copy the appropriate host configuration from the host into the containers on creation. Alternatively, environment variables can be set to use the cache in the host /etc/environment file prior to container creation, or deployment_environment_variables can have the right variables set to use it. The following variables have been removed.
repo_pkg_cache_enabled
repo_pkg_cache_port
repo_pkg_cache_bind
repo_pkg_cache_dirname
repo_pkg_cache_dir
repo_pkg_cache_owner
repo_pkg_cache_group
Bug Fixes¶
The ansible-role-requirements.yml reference to the ceph-ansible role has been fixed to refer to the current HEAD of that role’s stable-3.1 branch. It previously pointed to the pre-release 3.2.0beta1 version.
The quota for security group rules was erroneously set to 100, with the aim of allowing 100 security group rules per security group, instead of to 100 multiplied by the number of security groups. This patch fixes the discrepancy.
18.0.0¶
New Features¶
Support has been added for deploying on Ubuntu 18.04 LTS hosts. The most significant change is a major version increment of LXC from 2.x to 3.x which deprecates some previously used elements of the container configuration file.
It is possible to configure Glance to allow cross-origin requests by specifying the allowed origin address using the glance_cors_allowed_origin variable. By default, this will be the load balancer address.
Adds support for the horizon octavia-ui dashboard. The dashboard will be automatically enabled if any octavia hosts are defined. If both Neutron LBaaSv2 and Octavia are enabled, two Load Balancer panels will be visible in Horizon.
Deployers can now set the container_tech to nspawn when deploying OSA within containers. When making the decision to deploy container types, the deployer only needs to define the desired container_tech and continue the deployment as normal.
With the addition of the container_tech option and the inclusion of nspawn support, deployers now have the ability to define a desired containerization strategy globally or on specific hosts.
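A minimal sketch of the global case, assuming the override is placed in user_variables.yml:
# Illustrative: use systemd-nspawn instead of LXC for new container deployments
container_tech: nspawn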
When using the nspawn driver containers will connect to the system bridges using a MACVLAN, more on this type of network setup can be seen here.
When using the nspawn driver container networking is managed by systemd-networkd both on the host and within the container. This gives us a single interface to manage regardless of distro and allows systemd to efficiently manage the resources.
The service setup in keystone for aodh will now be executed through delegation to the aodh_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
aodh_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for barbican will now be executed through delegation to the barbican_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
barbican_service_setup_host: "{{ groups['utility_all'][0] }}"
When venvwithindex=True and ignorerequirements=True are both specified in rally_git_install_fragments (as was previously the default), this results in rally being installed from PyPI without any constraints being applied. This results in inconsistent builds from day to day, and can cause build failures for stable implementations due to new library releases. Going forward, we remove the rally_git_* overrides in playbooks/defaults/repo_packages/openstack_testing.yml so that the integrated build installs rally from PyPI, but with appropriate constraints applied.
When venvwithindex=True and ignorerequirements=True are both specified in tempest_git_install_fragments (as was previously the default), this results in tempest being installed from PyPI without any constraints being applied. This could result in the version of tempest being installed in the integrated build being different than the version being installed in the independent role tests. Going forward, we remove the tempest_git_* overrides in playbooks/defaults/repo_packages/openstack_testing.yml so that the integrated build installs tempest from PyPI, but with appropriate constraints applied.
The service setup in keystone for ceilometer will now be executed through delegation to the ceilometer_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
ceilometer_service_setup_host: "{{ groups['utility_all'][0] }}"
Octavia requires SSL certificates for communication with the amphora. This adds the automatic creation of self-signed certificates for this purpose. It uses different certificate authorities for the amphora and the control plane, thus ensuring maximum security.
The service setup in keystone for cinder will now be executed through delegation to the cinder_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
cinder_service_setup_host: "{{ groups['utility_all'][0] }}"
If defined in applicable host or group vars, the variable container_extra_networks will be merged with the existing container_networks from the dynamic inventory. This allows a deployer to specify special interfaces which may be unique to an individual container. An example use for this feature would be applying known fixed IP addresses to public interfaces on BIND servers for designate.
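As a rough, non-authoritative sketch, an entry in a host's host_vars might look like the following; the network name and the field names shown mirror typical container_networks entries and are illustrative assumptions.
# Illustrative host_vars entry: add a dedicated public interface to one container
container_extra_networks:
  public_address:
    bridge: br-dns
    interface: eth12
    address: 203.0.113.10
    netmask: 255.255.255.0
    type: veth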
The option repo_venv_default_pip_packages has been added, which allows deployers to insert any packages into a service venv as needed. The option expects a list of strings which are valid python package names as found on PyPI.
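A short sketch of the expected format; the package names are examples only.
# Illustrative: add extra python packages to every service venv that is built
repo_venv_default_pip_packages:
  - pymysql
  - python-memcached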
The service setup in keystone for designate will now be executed through delegation to the designate_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
designate_service_setup_host: "{{ groups['utility_all'][0] }}"
The os_horizon role now supports distribution of user custom themes. Deployers can use the new key theme_src_archive of the horizon_custom_themes dictionary to provide the absolute path to the archived theme. Only .tar.gz, .tgz, .zip, .tar.bz, .tar.bz2, .tbz, .tbz2 archives are supported. The structure inside the archive should be as a standard theme, without any leading folders.
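A hedged sketch of a theme entry; apart from theme_src_archive, the key names shown are assumptions for illustration and should be checked against the os_horizon defaults.
# Illustrative only
horizon_custom_themes:
  - name: mytheme
    label: My Custom Theme
    theme_src_archive: /opt/themes/mytheme.tar.gz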
The option rabbitmq_erlang_version_spec has been added, allowing deployers to set the version of erlang used on a given installation.
Octavia creates VMs, security groups, and other resources in its project. In most cases the default quotas are not big enough, so they are now adjusted to (configurable) reasonable values.
The service setup in keystone for glance will now be executed through delegation to the glance_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
glance_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for gnocchi will now be executed through delegation to the gnocchi_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
gnocchi_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for heat will now be executed through delegation to the heat_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
heat_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for horizon will now be executed through delegation to the horizon_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
horizon_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for ironic will now be executed through delegation to the ironic_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
ironic_service_setup_host: "{{ groups['utility_all'][0] }}"
The service updates for keystone will now be executed through delegation to the keystone_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
keystone_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for magnum will now be executed through delegation to the magnum_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
magnum_service_setup_host: "{{ groups['utility_all'][0] }}"
Instead of downloading images to the magnum API servers, the images will now be downloaded to the magnum_service_setup_host, into the folder set in magnum_image_path, owned by magnum_image_path_owner.
The service setup in keystone for neutron will now be executed through delegation to the neutron_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
neutron_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for nova will now be executed through delegation to the nova_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
nova_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for octavia will now be executed through delegation to the octavia_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
octavia_service_setup_host: "{{ groups['utility_all'][0] }}"
Deployers can now set the install_method to either source (default) or distro to choose the method for installing OpenStack services on the hosts. This only applies to new deployments. Existing deployments which are source-based cannot be converted to the new distro method. For more information, please refer to the Deployment Guide.
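A minimal sketch, assuming the setting is made in user_variables.yml before the first deployment:
# Illustrative: install OpenStack services from distribution packages
install_method: distro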
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the heat_install_method variable to distro.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the cinder_install_method variable to distro.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the glance_install_method variable to distro.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the aodh_install_method variable to distro.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the designate_install_method variable to distro.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the swift_install_method variable to distro.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the nova_install_method variable to distro.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the ceilometer_install_method variable to distro.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the barbican_install_method variable to distro.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the keystone_install_method variable to distro.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the neutron_install_method variable to distro.
The openrc role will no longer be executed on all OpenStack service containers/hosts. Instead, a single host is designated through the use of the openstack_service_setup_host variable. The default is localhost (the deployment host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
openstack_service_setup_host: "{{ groups['utility_all'][0] }}"
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in trove.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in cinder.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in nova.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in ironic.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in barbican.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in heat.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in aodh.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in glance.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in ceilometer.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in sahara.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in designate.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in neutron.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in magnum.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in keystone.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in swift.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in octavia.
The service setup in keystone for sahara will now be executed through delegation to the sahara_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
sahara_service_setup_host: "{{ groups['utility_all'][0] }}"
An option to disable the machinectl quota system has been added. The variable lxc_host_machine_quota_disabled is a Boolean with a default of false. When this option is set to true it will disable the machinectl quota system.
The options lxc_host_machine_qgroup_space_limit and lxc_host_machine_qgroup_compression_limit have been added, allowing a deployer to set qgroup limits as they see fit. The default value for these options is “none”, which is effectively unlimited. These options accept any nominal size value followed by the single-letter type, for example 64G. These options are only effective when the option lxc_host_machine_quota_disabled is set to false.
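A brief sketch combining these options in user_variables.yml; the 64G values are examples taken from the note above.
# Illustrative btrfs qgroup limits for machinectl-managed containers
lxc_host_machine_quota_disabled: false
lxc_host_machine_qgroup_space_limit: 64G
lxc_host_machine_qgroup_compression_limit: 64G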
The service setup in keystone for swift will now be executed through delegation to the swift_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
swift_service_setup_host: "{{ groups['utility_all'][0] }}"
A new playbook, infra-journal-remote.yml, to ship journals has been added. Physical hosts will now ship all available systemd journals to the logging infrastructure. The received journals will be split up by host and stored in the /var/log/journal/remote directory. This feature will give deployers greater access/insight into how the cloud is functioning, requiring nothing more than the systemd built-ins.
The service setup in keystone for tempest will now be executed through delegation to the tempest_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
tempest_service_setup_host: "{{ groups['utility_all'][0] }}"
Rather than a hard-coded set of projects and users, tempest can now be configured with a custom list using the variables tempest_projects and tempest_users.
It is now possible to specify a list of tests for tempest to blacklist when executing, using the tempest_test_blacklist list variable.
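A short sketch of the blacklist variable; the test pattern shown is only a placeholder.
# Illustrative: skip tests matching this pattern when tempest runs
tempest_test_blacklist:
  - tempest.api.identity.v2  # example pattern, adjust as needed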
The trove service setup in keystone will now be executed through delegation to the trove_service_setup_host which, by default, is localhost (the deploy host). Deployers can instead opt to change this to the utility container by implementing the following override in user_variables.yml.
trove_service_setup_host: "{{ groups['utility_all'][0] }}"
Known Issues¶
All OSA releases earlier than 17.0.5, 16.0.4, and 15.1.22 will fail to build the rally venv due to the release of the new cmd2-0.9.0 python library. Deployers are encouraged to update to the latest OSA release which pins to an appropriate version which is compatible with python2.
With the release of CentOS 7.5, all pike releases are broken due to a mismatch in version between the libvirt-python library specified by the OpenStack community and the version provided in CentOS 7.5. As such, OSA is unable to build the appropriate python library for libvirt. The only recourse for this is to upgrade the environment to the latest queens release.
Upgrade Notes¶
The supported upgrade path from Xenial to Bionic is via re-installation of the host OS across all nodes and redeployment of the required services. The Rocky branch of OSA is intended as the transition point for such upgrades from Xenial to Bionic. At this time there is no support for in-place operating system upgrades (typically via do-release-upgrade).
Users should purge the ‘ntp’ package from their hosts if ceph-ansible is enabled. ceph-ansible was previously configured to install ntp by default, which conflicts with the chrony service used by the OSA ansible-hardening role.
The variable cinder_iscsi_helper has been replaced by the new variable cinder_target_helper, because iscsi_helper has been deprecated in Cinder.
The key is_ssh_address has been removed from openstack_user_config.yml and the dynamic inventory. This key was responsible for mapping an address to the container which was used for SSH connectivity. Because we’ve created the SSH connectivity plugin, which allows us to connect to remote containers without SSH, this option is no longer useful. To keep openstack_user_config.yml clean, deployers can remove the option; however, moving forward it no longer has any effect.
The distribution package lookup and data output has been removed from the py_pkgs lookup so that the repo-build use of py_pkgs has reduced output and the lookup is purpose specific for python packages only.
The ping check that happens inside keepalived, to make sure that the server that runs it can reach 193.0.14.129, has been removed by default. The functionality can still be used if you set keepalived_ping_address in your user_variables.yml file to 193.0.14.129 or any IP of your choice.
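A minimal sketch, assuming the override is placed in user_variables.yml and the address is replaced with one reachable from your environment:
# Illustrative: re-enable the keepalived ping check against a chosen address
keepalived_ping_address: 193.0.14.129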
The glance v1 API is now removed upstream and the deployment code is now removed from this glance ansible role. The variable glance_enable_v1_api is removed.
Deprecation Notes¶
The variable aodh_requires_pip_packages is no longer required and has therefore been removed.
The variable barbican_requires_pip_packages is no longer required and has therefore been removed.
The following variables are no longer used and have therefore been removed.
ceilometer_requires_pip_packages
ceilometer_service_name
ceilometer_service_port
ceilometer_service_proto
ceilometer_service_type
ceilometer_service_description
The variable cinder_requires_pip_packages is no longer required and has therefore been removed.
The variable designate_requires_pip_packages is no longer required and has therefore been removed.
The use of the apt_package_pinning role as a meta dependency has been removed from the rabbitmq_server role. While the package pinning role is still used, it will now only be executed when the apt task file is executed.
The get_gested filter has been removed, as it is not used by any roles/plays.
The variable glance_requires_pip_packages is no longer required and has therefore been removed.
The variable gnocchi_requires_pip_packages is no longer required and has therefore been removed.
The variable heat_requires_pip_packages is no longer required and has therefore been removed.
The variable horizon_requires_pip_packages is no longer required and has therefore been removed.
The variable ironic_requires_pip_packages is no longer required and has therefore been removed.
The log path, /var/log/barbican, is no longer used to capture service logs. All logging for the barbican service will now be sent directly to the systemd journal.
The log path, /var/log/keystone, is no longer used to capture service logs. All logging for the Keystone service will now be sent directly to the systemd journal.
The log path, /var/log/congress, is no longer used to capture service logs. All logging for the congress service will now be sent directly to the systemd journal.
The log path, /var/log/cinder, is no longer used to capture service logs. All logging for the cinder service will now be sent directly to the systemd journal.
The log path, /var/log/aodh, is no longer used to capture service logs. All logging for the aodh service will now be sent directly to the systemd journal.
The log path, /var/log/ceilometer, is no longer used to capture service logs. All logging for the ceilometer service will now be sent directly to the systemd journal.
The log path, /var/log/designate, is no longer used to capture service logs. All logging for the designate service will now be sent directly to the systemd journal.
The variable keystone_requires_pip_packages is no longer required and has therefore been removed.
The variable nova_compute_pip_packages is no longer used and has been removed.
The variable magnum_requires_pip_packages is no longer required and has therefore been removed.
The molteniron service is no longer included in the OSA integrated build. Any deployers wishing to use it may still use the playbook and configuration examples from the os_molteniron role.
The variable neutron_requires_pip_packages is no longer required and has therefore been removed.
The variable nova_requires_pip_packages is no longer required and has therefore been removed.
The variable octavia_requires_pip_packages is no longer required and has therefore been removed.
The variable octavia_image_downloader has been removed. The image download now uses the host designated by the octavia_service_setup_host variable.
The variable octavia_ansible_endpoint_type has been removed. The endpoint used for ansible tasks has been hard set to the ‘admin’ endpoint, as is commonly used across all OSA roles.
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
trove_oslomsg_rpc_servers replaces trove_rabbitmq_servers
trove_oslomsg_rpc_port replaces trove_rabbitmq_port
trove_oslomsg_rpc_use_ssl replaces trove_rabbitmq_use_ssl
trove_oslomsg_rpc_userid replaces trove_rabbitmq_userid
trove_oslomsg_rpc_vhost replaces trove_rabbitmq_vhost
added trove_oslomsg_notify_servers
added trove_oslomsg_notify_port
added trove_oslomsg_notify_use_ssl
added trove_oslomsg_notify_userid
added trove_oslomsg_notify_vhost
added trove_oslomsg_notify_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
cinder_oslomsg_rpc_servers replaces cinder_rabbitmq_servers
cinder_oslomsg_rpc_port replaces cinder_rabbitmq_port
cinder_oslomsg_rpc_use_ssl replaces cinder_rabbitmq_use_ssl
cinder_oslomsg_rpc_userid replaces cinder_rabbitmq_userid
cinder_oslomsg_rpc_vhost replaces cinder_rabbitmq_vhost
cinder_oslomsg_notify_servers replaces cinder_rabbitmq_telemetry_servers
cinder_oslomsg_notify_port replaces cinder_rabbitmq_telemetry_port
cinder_oslomsg_notify_use_ssl replaces cinder_rabbitmq_telemetry_use_ssl
cinder_oslomsg_notify_userid replaces cinder_rabbitmq_telemetry_userid
cinder_oslomsg_notify_vhost replaces cinder_rabbitmq_telemetry_vhost
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
nova_oslomsg_rpc_servers replaces nova_rabbitmq_servers
nova_oslomsg_rpc_port replaces nova_rabbitmq_port
nova_oslomsg_rpc_use_ssl replaces nova_rabbitmq_use_ssl
nova_oslomsg_rpc_userid replaces nova_rabbitmq_userid
nova_oslomsg_rpc_vhost replaces nova_rabbitmq_vhost
nova_oslomsg_notify_servers replaces nova_rabbitmq_telemetry_servers
nova_oslomsg_notify_port replaces nova_rabbitmq_telemetry_port
nova_oslomsg_notify_use_ssl replaces nova_rabbitmq_telemetry_use_ssl
nova_oslomsg_notify_userid replaces nova_rabbitmq_telemetry_userid
nova_oslomsg_notify_vhost replaces nova_rabbitmq_telemetry_vhost
nova_oslomsg_notify_password replaces nova_rabbitmq_telemetry_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
ironic_oslomsg_rpc_servers replaces ironic_rabbitmq_servers
ironic_oslomsg_rpc_port replaces ironic_rabbitmq_port
ironic_oslomsg_rpc_use_ssl replaces ironic_rabbitmq_use_ssl
ironic_oslomsg_rpc_userid replaces ironic_rabbitmq_userid
ironic_oslomsg_rpc_vhost replaces ironic_rabbitmq_vhost
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
barbican_oslomsg_rpc_servers replaces rabbitmq_servers
barbican_oslomsg_rpc_port replaces rabbitmq_port
barbican_oslomsg_rpc_userid replaces barbican_rabbitmq_userid
barbican_oslomsg_rpc_vhost replaces barbican_rabbitmq_vhost
added barbican_oslomsg_rpc_use_ssl
added barbican_oslomsg_notify_servers
added barbican_oslomsg_notify_port
added barbican_oslomsg_notify_use_ssl
added barbican_oslomsg_notify_userid
added barbican_oslomsg_notify_vhost
added barbican_oslomsg_notify_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
heat_oslomsg_rpc_servers replaces heat_rabbitmq_servers
heat_oslomsg_rpc_port replaces heat_rabbitmq_port
heat_oslomsg_rpc_use_ssl replaces heat_rabbitmq_use_ssl
heat_oslomsg_rpc_userid replaces heat_rabbitmq_userid
heat_oslomsg_rpc_vhost replaces heat_rabbitmq_vhost
heat_oslomsg_rpc_password replaces heat_rabbitmq_password
heat_oslomsg_notify_servers replaces heat_rabbitmq_telemetry_servers
heat_oslomsg_notify_port replaces heat_rabbitmq_telemetry_port
heat_oslomsg_notify_use_ssl replaces heat_rabbitmq_telemetry_use_ssl
heat_oslomsg_notify_userid replaces heat_rabbitmq_telemetry_userid
heat_oslomsg_notify_vhost replaces heat_rabbitmq_telemetry_vhost
heat_oslomsg_notify_password replaces heat_rabbitmq_telemetry_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
aodh_oslomsg_rpc_servers replaces aodh_rabbitmq_servers
aodh_oslomsg_rpc_port replaces aodh_rabbitmq_port
aodh_oslomsg_rpc_use_ssl replaces aodh_rabbitmq_use_ssl
aodh_oslomsg_rpc_userid replaces aodh_rabbitmq_userid
aodh_oslomsg_rpc_vhost replaces aodh_rabbitmq_vhost
aodh_oslomsg_rpc_password replaces aodh_rabbitmq_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
glance_oslomsg_rpc_servers replaces glance_rabbitmq_servers
glance_oslomsg_rpc_port replaces glance_rabbitmq_port
glance_oslomsg_rpc_use_ssl replaces glance_rabbitmq_use_ssl
glance_oslomsg_rpc_userid replaces glance_rabbitmq_userid
glance_oslomsg_rpc_vhost replaces glance_rabbitmq_vhost
glance_oslomsg_notify_servers replaces glance_rabbitmq_telemetry_servers
glance_oslomsg_notify_port replaces glance_rabbitmq_telemetry_port
glance_oslomsg_notify_use_ssl replaces glance_rabbitmq_telemetry_use_ssl
glance_oslomsg_notify_userid replaces glance_rabbitmq_telemetry_userid
glance_oslomsg_notify_vhost replaces glance_rabbitmq_telemetry_vhost
glance_oslomsg_notify_password replaces glance_rabbitmq_telemetry_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
ceilometer_oslomsg_rpc_servers replaces rabbitmq_servers
ceilometer_oslomsg_rpc_port replaces rabbitmq_port
ceilometer_oslomsg_rpc_userid replaces ceilometer_rabbitmq_userid
ceilometer_oslomsg_rpc_vhost replaces ceilometer_rabbitmq_vhost
added ceilometer_oslomsg_rpc_use_ssl
added ceilometer_oslomsg_notify_servers
added ceilometer_oslomsg_notify_port
added ceilometer_oslomsg_notify_use_ssl
added ceilometer_oslomsg_notify_userid
added ceilometer_oslomsg_notify_vhost
added ceilometer_oslomsg_notify_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
sahara_oslomsg_rpc_servers replaces sahara_rabbitmq_servers
sahara_oslomsg_rpc_port replaces sahara_rabbitmq_port
sahara_oslomsg_rpc_use_ssl replaces sahara_rabbitmq_use_ssl
sahara_oslomsg_rpc_userid replaces sahara_rabbitmq_userid
sahara_oslomsg_rpc_vhost replaces sahara_rabbitmq_vhost
sahara_oslomsg_notify_servers replaces sahara_rabbitmq_telemetry_servers
sahara_oslomsg_notify_port replaces sahara_rabbitmq_telemetry_port
sahara_oslomsg_notify_use_ssl replaces sahara_rabbitmq_telemetry_use_ssl
sahara_oslomsg_notify_userid replaces sahara_rabbitmq_telemetry_userid
sahara_oslomsg_notify_vhost replaces sahara_rabbitmq_telemetry_vhost
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
designate_oslomsg_rpc_servers replaces designate_rabbitmq_servers
designate_oslomsg_rpc_port replaces designate_rabbitmq_port
designate_oslomsg_rpc_use_ssl replaces designate_rabbitmq_use_ssl
designate_oslomsg_rpc_userid replaces designate_rabbitmq_userid
designate_oslomsg_rpc_vhost replaces designate_rabbitmq_vhost
designate_oslomsg_notify_servers replaces designate_rabbitmq_telemetry_servers
designate_oslomsg_notify_port replaces designate_rabbitmq_telemetry_port
designate_oslomsg_notify_use_ssl replaces designate_rabbitmq_telemetry_use_ssl
designate_oslomsg_notify_userid replaces designate_rabbitmq_telemetry_userid
designate_oslomsg_notify_vhost replaces designate_rabbitmq_telemetry_vhost
designate_oslomsg_notify_password replaces designate_rabbitmq_telemetry_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
neutron_oslomsg_rpc_servers replaces neutron_rabbitmq_servers
neutron_oslomsg_rpc_port replaces neutron_rabbitmq_port
neutron_oslomsg_rpc_use_ssl replaces neutron_rabbitmq_use_ssl
neutron_oslomsg_rpc_userid replaces neutron_rabbitmq_userid
neutron_oslomsg_rpc_vhost replaces neutron_rabbitmq_vhost
neutron_oslomsg_notify_servers replaces neutron_rabbitmq_telemetry_servers
neutron_oslomsg_notify_port replaces neutron_rabbitmq_telemetry_port
neutron_oslomsg_notify_use_ssl replaces neutron_rabbitmq_telemetry_use_ssl
neutron_oslomsg_notify_userid replaces neutron_rabbitmq_telemetry_userid
neutron_oslomsg_notify_vhost replaces neutron_rabbitmq_telemetry_vhost
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
magnum_oslomsg_rpc_servers replaces rabbitmq_servers
magnum_oslomsg_rpc_port replaces rabbitmq_port
magnum_oslomsg_rpc_userid replaces magnum_rabbitmq_userid
magnum_oslomsg_rpc_vhost replaces magnum_rabbitmq_vhost
added magnum_oslomsg_rpc_use_ssl
added magnum_oslomsg_notify_servers
added magnum_oslomsg_notify_port
added magnum_oslomsg_notify_use_ssl
added magnum_oslomsg_notify_userid
added magnum_oslomsg_notify_vhost
added magnum_oslomsg_notify_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
keystone_oslomsg_rpc_servers replaces keystone_rabbitmq_servers
keystone_oslomsg_rpc_port replaces keystone_rabbitmq_port
keystone_oslomsg_rpc_use_ssl replaces keystone_rabbitmq_use_ssl
keystone_oslomsg_rpc_userid replaces keystone_rabbitmq_userid
keystone_oslomsg_rpc_vhost replaces keystone_rabbitmq_vhost
keystone_oslomsg_notify_servers replaces keystone_rabbitmq_telemetry_servers
keystone_oslomsg_notify_port replaces keystone_rabbitmq_telemetry_port
keystone_oslomsg_notify_use_ssl replaces keystone_rabbitmq_telemetry_use_ssl
keystone_oslomsg_notify_userid replaces keystone_rabbitmq_telemetry_userid
keystone_oslomsg_notify_vhost replaces keystone_rabbitmq_telemetry_vhost
The rabbitmq server parameters have been replaced by corresponding oslo.messaging Notify parameters in order to abstract the messaging service from the actual backend server deployment.
swift_oslomsg_notify_servers replaces swift_rabbitmq_telemetry_servers
swift_oslomsg_notify_port replaces swift_rabbitmq_telemetry_port
swift_oslomsg_notify_use_ssl replaces swift_rabbitmq_telemetry_use_ssl
swift_oslomsg_notify_userid replaces swift_rabbitmq_telemetry_userid
swift_oslomsg_notify_vhost replaces swift_rabbitmq_telemetry_vhost
swift_oslomsg_notify_password replaces swift_rabbitmq_telemetry_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
octavia_oslomsg_rpc_servers replaces octavia_rabbitmq_servers
octavia_oslomsg_rpc_port replaces octavia_rabbitmq_port
octavia_oslomsg_rpc_use_ssl replaces octavia_rabbitmq_use_ssl
octavia_oslomsg_rpc_userid replaces octavia_rabbitmq_userid
octavia_oslomsg_rpc_vhost replaces octavia_rabbitmq_vhost
octavia_oslomsg_notify_servers replaces octavia_rabbitmq_telemetry_servers
octavia_oslomsg_notify_port replaces octavia_rabbitmq_telemetry_port
octavia_oslomsg_notify_use_ssl replaces octavia_rabbitmq_telemetry_use_ssl
octavia_oslomsg_notify_userid replaces octavia_rabbitmq_telemetry_userid
octavia_oslomsg_notify_vhost replaces octavia_rabbitmq_telemetry_vhost
octavia_oslomsg_notify_password replaces octavia_rabbitmq_telemetry_password
The repo server’s reverse proxy for pypi has now been removed, leaving only the pypiserver to serve packages already on the repo server. The attempt to reverse proxy upstream pypi turned out to be very unstable, with increased complexity for deployers using proxies or offline installs. With this, the variables repo_nginx_pypi_upstream and repo_nginx_proxy_cache_path have also been removed.
The variable repo_requires_pip_packages is no longer required and has therefore been removed.
With the implementation of systemd-journal-remote, the rsyslog_client role is no longer run by default. To enable the legacy functionality, the variables rsyslog_client_enabled and rsyslog_server_enabled can be set to true.
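A minimal sketch of re-enabling the legacy behaviour in user_variables.yml:
# Illustrative: keep shipping logs via rsyslog in addition to journald
rsyslog_client_enabled: true
rsyslog_server_enabled: true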
The variable sahara_requires_pip_packages is no longer required and has therefore been removed.
The variable swift_requires_pip_packages is no longer required and has therefore been removed.
The variable tempest_requires_pip_packages is no longer required and has therefore been removed.
The variable tempest_image_downloader has been removed. The image download now uses the host designated by the tempest_service_setup_host variable.
The variable trove_requires_pip_packages is no longer required and has therefore been removed.
Security Issues¶
It is recommended that the certificate generation is always reviewed by security professionals, since algorithms and key lengths considered secure change over time.
Avoid setting the quotas too high for your cloud, since this can impact the performance of other services and lead to a potential Denial-of-Service attack if load balancer quotas are not set properly or RBAC is not properly set up.
Bug Fixes¶
Newer releases of CentOS ship a version of libnss that depends on the existence of /dev/random and /dev/urandom in the operating system in order to run. This causes a problem during the cache preparation process, which runs inside a chroot that does not contain these devices, resulting in errors with the following message.
error: Failed to initialize NSS library
This has been resolved by introducing a /dev/random and /dev/urandom inside the chroot-ed environment.
ceph-ansible is no longer configured to install ntp by default, which creates a conflict with OSA’s ansible-hardening role that is used to implement ntp using ‘chrony’.
Fixes bug https://bugs.launchpad.net/openstack-ansible/+bug/1778098, where the playbook failed if horizon_custom_themes is specified and the directory for the theme is not provided.
In order to prevent further issues with a libvirt and python-libvirt version mismatch, KVM-based compute nodes will now use the distribution package python library for libvirt. This should resolve the issue seen with pike builds on CentOS 7.5.
The conditional that determines whether the sso_callback_template.html file is deployed for federated deployments has been fixed.
Other Notes¶
When running keystone with apache (httpd), all apache logs will be stored in the standard apache log directory, which is controlled by the distro-specific variable keystone_apache_default_log_folder.
When running aodh with apache (httpd), all apache logs will be stored in the standard apache log directory, which is controlled by the distro-specific variable aodh_apache_default_log_folder.
The max_fail_percentage playbook option has been used with the default playbooks since the first release of the playbooks back in Icehouse. While the intention was to allow large-scale deployments to succeed in cases where a single node fails due to transient issues, this option has produced more problems than it solves. If a failure occurs that is transient in nature but is under the set failure percentage, the playbook will report a success, which can cause silent failures depending on where the failure happened. If a deployer finds themselves in this situation, the problems are then compounded because the tools will report there are no known issues. To ensure deployers have the best deployment experience and the most accurate information, a change has been made to remove the max_fail_percentage option from all of the default playbooks. The removal of this option has the side effect of requiring the deployer to skip specific hosts should one need to be omitted from a run, but has the benefit of eliminating silent, hard to track down failures. To skip a failing host for a given playbook run, use the --limit '!$HOSTNAME' CLI switch for the specific run. Once the issues have been resolved for the failing host, rerun the specific playbook without the --limit option to ensure everything is in sync.