Stein Series Release Notes¶
19.1.7¶
Security Issues¶
MariaDB version updated to 10.3.25, which covers CVE-2020-2574.
Bug Fixes¶
Fixed ceph_client role for distro installs
Since Ubuntu has dropped older base images, which broke all previous tags, we have switched to always downloading the latest available base image. This guarantees that we retrieve only current images.
19.1.2¶
Deprecation Notes¶
Fedora is no longer tested in CI for each commit.
19.1.1¶
New Features¶
Get ceph keyrings from files: if the variable ``ceph_keyrings_dir`` is defined, the keyrings will be extracted from files. All files in the directory must have the ``.keyring`` extension and be named after the corresponding ``ceph_client`` name. For example, if ``cinder_ceph_client`` is ``cinder``, the cinder keyring file must be named ``cinder.keyring``. Each file must contain the username and the key and nothing more. Below is an example of the cinder.keyring content:

[client.cinder]
    key = XXXXXXXXXXX
19.0.10¶
Known Issues¶
The journald-remote playbook is disabled from execution inside setup-infrastructure until https://github.com/systemd/systemd/issues/2376 has been incorporated in current systemd packages. The playbook can be enabled by setting ``journald_remote_enabled`` to ``True``.
Upgrade Notes¶
The journald-remote playbook is disabled from execution inside setup-infrastructure, due to https://github.com/systemd/systemd/issues/2376, until ``journald_remote_enabled`` is set to ``True``.
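As a minimal sketch, re-enabling the playbook requires only the single variable described above in ``user_variables.yml``:

```yaml
# /etc/openstack_deploy/user_variables.yml
# Re-enable the journald-remote playbook despite the unresolved systemd issue.
journald_remote_enabled: True
```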
19.0.7¶
New Features¶
You can set a private repository for EPEL: use ``repo_centos_epel_mirror`` for the repo URL and, if you need to get the GPG key from an intranet or a mirror, use ``repo_centos_epel_key`` for the GPG key location.
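As an illustrative sketch (the mirror URLs below are hypothetical placeholders), a private EPEL mirror could be configured like this:

```yaml
# /etc/openstack_deploy/user_variables.yml
repo_centos_epel_mirror: "http://mirror.example.com/epel"          # repo URL
repo_centos_epel_key: "http://mirror.example.com/RPM-GPG-KEY-EPEL" # GPG key location
```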
Added the possibility to disable the openrc v2 download in the dashboard. The new variable ``horizon_show_keystone_v2_rc`` can be set to ``False`` to remove the entry for the openrc v2 download.
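A minimal sketch of the override:

```yaml
# Remove the openrc v2 download entry from the Horizon dashboard.
horizon_show_keystone_v2_rc: False
```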
19.0.4¶
New Features¶
The ``--extra-vars`` flag passed to openstack-ansible now takes precedence over the user-variables*.yml files.
Security Issues¶
The requirements version has been bumped to pull in os-vif 1.15.2, which contains the fix for OSSA-2019-004 / CVE-2019-15753. Operators using linuxbridge networking (the default in openstack-ansible) should update immediately. The fixed package will be installed in the nova venv upon re-deployment of nova using the os-nova-install.yml playbook. Afterwards, verify that the ageing timer on neutron-controlled linux bridges displays as “300.00” rather than “0.00” using ``brctl showstp <bridge name>``.
19.0.2¶
New Features¶
Cinder is deployed with Active-Active enabled by default if you are using Ceph as a backend storage.
Known Issues¶
The previous way of using a common backend_host across all deployments was not recommended by the Cinder team; it causes duplicate messages which lead to problems in the environment.
Upgrade Notes¶
It is possible that you may need to use the cinder-manage command to migrate volumes to a specific host. In addition, you will have to remove the old ``rbd:volumes`` service, which will be stale.
19.0.0¶
New Features¶
Support has been added for deploying on Ubuntu 18.04 LTS hosts. The most significant change is a major version increment of LXC from 2.x to 3.x which deprecates some previously used elements of the container configuration file.
It is possible to configure Glance to allow cross origin requests by specifying the allowed origin address using the ``glance_cors_allowed_origin`` variable. By default, this will be the load balancer address.
The os_horizon role now has support for the horizon manila-ui dashboard. The dashboard may be enabled by setting ``horizon_enable_manila_ui`` to ``True`` in ``/etc/openstack_deploy/user_variables.yml``.
Experimental support has been added to allow the deployment of the OpenStack Masakari service when hosts are present in the host group ``masakari-infra_hosts``.
Added support for Mistral to be built as part of the repo build process.
Added the ``os-mistral-install.yml`` file to deploy mistral to hosts tagged with the host group ``mistral_all``.
This role now optionally enables your compute nodes’ KVM kernel module nested virtualization capabilities, by setting nova_nested_virt_enabled to true. Depending on your distribution and libvirt version, you might need to set additional variables to fully enable nested virtualization. For details, please see https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html#nested-guest-support.
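A minimal sketch of the override described above:

```yaml
# Enable nested virtualization for KVM compute nodes.
nova_nested_virt_enabled: true
```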
It is now possible to use NFS mountpoints with the role by using the nova_nfs_client variable, which is useful for using NFS for instance data and saves.
The ``os_tempest`` role now has the ability to install from distribution packages by setting ``tempest_install_method`` to ``distro``.
The new variable ``tempest_workspace`` has been introduced to set the location of the tempest workspace.
The default location of the default tempest configuration is now ``/etc/tempest/tempest.conf`` rather than the previous default of ``$HOME/.tempest/etc``.
The service setup in keystone for aodh will now be executed through delegation to the ``aodh_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

aodh_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for barbican will now be executed through delegation to the ``barbican_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

barbican_service_setup_host: "{{ groups['utility_all'][0] }}"
Added the launchpad and bugzilla keys to the ``tempest_test_blacklist`` Ansible variable. Developers must have a way to track down why a test was inserted in the skip list, and one of the ways is through bugs. This feature adds that information to the list of skipped tests in os_tempest.
The blazar dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:
horizon_enable_blazar_ui: True
The service setup in keystone for ceilometer will now be executed through delegation to the ``ceilometer_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

ceilometer_service_setup_host: "{{ groups['utility_all'][0] }}"
It is now possible to modify the NTP server options in chrony using ``security_ntp_server_options``.
Chrony got a new configuration option to synchronize the system clock back to the RTC using the ``security_ntp_sync_rtc`` variable. It is disabled by default.
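A sketch combining both chrony options (the server option string is an illustrative assumption; valid values depend on your chrony version):

```yaml
# Apply extra options to each configured NTP server.
security_ntp_server_options: "iburst minpoll 4"
# Periodically copy the system clock back to the hardware clock (RTC).
security_ntp_sync_rtc: true
```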
The service setup in keystone for cinder will now be executed through delegation to the ``cinder_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

cinder_service_setup_host: "{{ groups['utility_all'][0] }}"
The cloudkitty dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:
horizon_enable_cloudkitty_ui: True
The list of enabled filters for the Cinder scheduler, scheduler_default_filters in cinder.conf, could previously be defined only via an entry in ``cinder_cinder_conf_overrides``. You now have the option to instead define a list variable, ``cinder_scheduler_default_filters``, that defines the enabled filters. This is helpful if you either want to disable one of the filters enabled by default (at the time of writing, these are AvailabilityZoneFilter, CapacityFilter, and CapabilitiesFilter), or if conversely you want to add a filter that is normally not enabled, such as DifferentBackendFilter or InstanceLocalityFilter.

For example, to enable the InstanceLocalityFilter in addition to the normally enabled scheduler filters, use the following variable:

cinder_scheduler_default_filters:
  - AvailabilityZoneFilter
  - CapacityFilter
  - CapabilitiesFilter
  - InstanceLocalityFilter
The option ``repo_venv_default_pip_packages`` has been added, which allows deployers to insert any packages into a service venv as needed. The option expects a list of strings which are valid python package names as found on PyPI.
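For example (the package names below are illustrative, not requirements of the role):

```yaml
# Install extra python packages into every service venv.
repo_venv_default_pip_packages:
  - pymysql
  - python-memcached
```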
The nova configuration is updated to always specify an LXD storage pool name when ‘nova_virt_type’ is ‘lxd’. The variable ‘lxd_storage_pool’ is defaulted to ‘default’, the LXD default storage pool name. A new variable ‘lxd_init_storage_pool’ is introduced which specifies the underlying storage pool name. ‘lxd_init_storage_pool’ is used by lxd init when setting up the storage pool. If not provided, lxd init will not use this parameter at all. Please see the lxd man page for further information about the storage pool parameter.
The service setup in keystone for designate will now be executed through delegation to the ``designate_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

designate_service_setup_host: "{{ groups['utility_all'][0] }}"
Compare dict vars of before and after configuration to determine whether the config keys or values have changed so a configuration file will not be incorrectly marked as changed when only the ordering has changed.
Set diff return variable to a dict of changes applied.
The ``os_horizon`` role now supports distribution of user custom themes. Deployers can use the new key ``theme_src_archive`` of the ``horizon_custom_themes`` dictionary to provide an absolute path to the archived theme. Only .tar.gz, .tgz, .zip, .tar.bz, .tar.bz2, .tbz, .tbz2 archives are supported. The structure inside the archive should be as a standard theme, without any leading folders.
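A sketch of such an override; only the ``theme_src_archive`` key is documented above, and the other keys shown are assumptions about the dictionary layout:

```yaml
horizon_custom_themes:
  - name: mytheme                                  # hypothetical theme name
    label: "My Theme"                              # hypothetical display label
    theme_src_archive: /opt/themes/mytheme.tar.gz  # absolute path to the archive
```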
Python-tempestconf is a tool that generates a tempest.conf file based only on the credentials of an OpenStack installation. It uses the discoverable API of OpenStack to check for services, features, etc.
Added the possibility to use the python-tempestconf tool to generate the tempest.conf file, rather than using the role template.
Octavia creates VMs, security groups, and other resources in its project. In most cases the default quotas are not big enough. This will adjust them to (configurable) reasonable values.
Glance containers will now bind mount the default glance cache directory from the host when glance_default_store is set to file and nfs is not in use. With this change, the glance file cache size is no longer restricted to the size of the container file system.
The service setup in keystone for glance will now be executed through delegation to the ``glance_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

glance_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for gnocchi will now be executed through delegation to the ``gnocchi_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

gnocchi_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for heat will now be executed through delegation to the ``heat_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

heat_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for horizon will now be executed through delegation to the ``horizon_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

horizon_service_setup_host: "{{ groups['utility_all'][0] }}"
Horizon has, since OSA’s inception, been deployed with HTTPS access enabled, and has had no way to turn it off. Some use-cases may want to access via HTTP instead, so this patch enables the following.
Listen via HTTPS on a load balancer, but via HTTP on the horizon host and have the load balancer forward the correct headers. It will do this by default in the integrated build due to the presence of the load balancer, so the current behaviour is retained.
Enable HTTPS on the horizon host without a load balancer. This is the role’s default behaviour which matches what it always has been.
Disable HTTPS entirely by setting ``haproxy_ssl: no`` (which will also disable HTTPS on haproxy). This setting is inherited by the new ``horizon_enable_ssl`` variable by default. This is a new option.
The service setup in keystone for ironic will now be executed through delegation to the ``ironic_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

ironic_service_setup_host: "{{ groups['utility_all'][0] }}"
The service updates for keystone will now be executed through delegation to the ``keystone_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

keystone_service_setup_host: "{{ groups['utility_all'][0] }}"
If the Horizon dashboard of an OSA installation has a public FQDN, it is now possible to use the LetsEncrypt certification service. The certificate will be generated within the HAProxy installation, and a cron entry will be set up to renew the certificate daily. Note that there is no certificate distribution implementation at this time, so this will only work for a single haproxy-server environment.
The service setup in keystone for magnum will now be executed through delegation to the ``magnum_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

magnum_service_setup_host: "{{ groups['utility_all'][0] }}"
Instead of downloading images to the magnum API servers, the images will now be downloaded to the ``magnum_service_setup_host``, to the folder set in ``magnum_image_path``, owned by ``magnum_image_path_owner``.
The ceph_client role will now look for and configure manila services to work with ceph and cephfs.
The masakari dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:
horizon_enable_masakari_ui: True
The ``os_masakari`` role now covers the monitors installation and configuration, completing the full service configuration.
It is now possible for deployers to enable or disable the mysqlcheck capability. The Boolean option galera_monitoring_check_enabled has been added which has a default value of true.
It is now possible to change the port used by mysqlcheck. The integer option galera_monitoring_check_port has been added with the default value of 9200.
The Neutron Service Function Chaining Extension (SFC) can optionally be deployed and configured by defining the following service plugins, ``flow_classifier`` and ``sfc``:

neutron_plugin_base:
  - router
  - metering
  - flow_classifier
  - sfc
For more information about SFC in Neutron, refer to the networking-sfc documentation.
The ``provider_networks`` library has been updated to support the definition of network interfaces that can automatically be added as ports to OVS provider bridges set up during a deployment. To activate this feature, add the ``network_interface`` key to the respective flat and/or vlan provider network definition in ``openstack_user_config.yml``. For more information, refer to the latest Open vSwitch deployment guide.
The service setup in keystone for neutron will now be executed through delegation to the ``neutron_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

neutron_service_setup_host: "{{ groups['utility_all'][0] }}"
VPNaaS dashboard is again available in Horizon. Deployers can enable the panel by setting the following Ansible variable:
horizon_enable_neutron_vpnaas: True
The override ``rabbitmq_memory_high_watermark`` can be used to set the maximum fraction of memory the Erlang virtual machine may use before garbage collection is triggered. The default is lowered to ``0.2`` from ``0.4``, as garbage collection can require 2x the allocated amount during its operation. This can result in an equivalent use of ``0.4``, i.e. 40% of the memory visible to the RabbitMQ container. The original default of ``0.4`` could lead to 80% memory allocation by RabbitMQ, potentially leading to a scenario where the underlying Linux kernel kills the process due to a shortage of virtual memory.
A new option has been added allowing deployers to disable any and all containers on a given host. The option no_containers is a boolean which, if undefined, will default to false. This option can be added to any host in the openstack_user_config.yml or via an override in conf.d. When this option is set to true the given host will be treated as a baremetal machine. The new option mirrors the existing environmental option is_metal but allows deployers to target specific hosts instead of entire groups.
log_hosts:
  infra-1:
    ip: 172.16.24.2
    no_containers: true
You can now set the Libvirt CPU model and feature flags from the appropriate entry under the ``nova_virt_types`` dictionary variable (normally ``kvm``). ``nova_cpu_model`` is a string value that sets the CPU model; this value is ignored if you set any ``nova_cpu_mode`` other than ``custom``. ``nova_cpu_model_extra_flags`` is a list that allows you to specify extra CPU feature flags not normally passed through with ``host-model``, or the ``custom`` CPU model of your choice.
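A sketch of the relevant keys under the ``kvm`` entry (the CPU model and flag shown are illustrative, and the other keys of the entry are omitted):

```yaml
nova_virt_types:
  kvm:
    nova_cpu_mode: custom
    nova_cpu_model: Haswell-noTSX    # only used when nova_cpu_mode is custom
    nova_cpu_model_extra_flags:
      - pcid                         # extra CPU feature flag
```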
The service setup in keystone for nova will now be executed through delegation to the ``nova_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

nova_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for octavia will now be executed through delegation to the ``octavia_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

octavia_service_setup_host: "{{ groups['utility_all'][0] }}"
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the ``nova_install_method`` variable to ``distro``.
The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the ``neutron_install_method`` variable to ``distro``.
Deployers can now define a cinder-backend volume type as explicitly private or public, with the option ``public`` set to true or false.
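A heavily abbreviated sketch; the surrounding backend structure is an assumption, and only the ``public`` key comes from this note:

```yaml
cinder_backends:
  lvm:                         # hypothetical backend name
    volume_backend_name: lvm
    public: false              # make the resulting volume type private
```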
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in trove.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in barbican.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in aodh.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in ceilometer.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in designate.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in magnum.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in swift.
Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in octavia.
Added 2 new variables for all groups: ``oslomsg_notify_policies`` and ``oslomsg_rpc_policies``. These variables contain default rabbitmq policies, which will be applied to every rabbitmq vhost. As of now they enable [HA mode](https://www.rabbitmq.com/ha.html) for all vhosts. If you would like to disable HA mode, just set these variables to empty lists inside your user_config.yml.
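To disable HA mode as described, set both variables to empty lists:

```yaml
# Disable the default RabbitMQ HA policies on all vhosts.
oslomsg_rpc_policies: []
oslomsg_notify_policies: []
```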
The ``container_interface`` provider network option is no longer required for Neutron provider network definitions when related agents or OVN controllers are deployed on bare metal.
The service setup in keystone for sahara will now be executed through delegation to the ``sahara_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

sahara_service_setup_host: "{{ groups['utility_all'][0] }}"
The service setup in keystone for swift will now be executed through delegation to the ``swift_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

swift_service_setup_host: "{{ groups['utility_all'][0] }}"
The tacker dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:
horizon_enable_tacker_ui: True
The service setup in keystone for tempest will now be executed through delegation to the ``tempest_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

tempest_service_setup_host: "{{ groups['utility_all'][0] }}"
Rather than a hard-coded set of projects and users, tempest can now be configured with a custom list via the variables ``tempest_projects`` and ``tempest_users``.
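A sketch of the overrides; the entry structure is an assumption, as the note only names the two variables:

```yaml
tempest_projects:
  - demo
  - alt_demo
tempest_users:
  - name: demo            # hypothetical user entry
    password: secrete
```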
It is now possible to specify a list of tests for tempest to blacklist when executing, using the ``tempest_test_blacklist`` list variable.
Allow the default section in an ini file to be specified using the ``default_section`` variable when calling a ``config_template`` task. This defaults to ``DEFAULT``.
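An illustrative task using this variable (the file names and section name are hypothetical):

```yaml
- name: Render a config file whose implicit section is not DEFAULT
  config_template:
    src: myservice.conf.j2
    dest: /etc/myservice/myservice.conf
    config_type: ini
    default_section: global    # instead of the DEFAULT fallback
```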
The trove service setup in keystone will now be executed through delegation to the ``trove_service_setup_host`` which, by default, is ``localhost`` (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in ``user_variables.yml``:

trove_service_setup_host: "{{ groups['utility_all'][0] }}"
The MariaDB version has been bumped to 10.2
The ``galera_server`` role now uses mariabackup to complete SST operations, as this is the recommended choice from MariaDB.
The ``galera_server`` role now ships with the latest MariaDB release, 10.3.13.
The watcher dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:
horizon_enable_watcher_ui: True
The zun dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:
horizon_enable_zun_ui: True
Known Issues¶
Due to a change in how backend_host is defined when using Ceph, all the Cinder volumes will restart under the same backend name. This means that any volumes which were previously assigned to the host or container that hosted the volume will no longer be manageable. The workaround is to use the cinder-manage volume update_host command to move those volumes to the new backend host. This known issue will be resolved soon with an upgrade playbook.
Although the ``ceph-rgw`` playbooks do enable Swift object versioning, support in radosgw is currently limited to setting ``X-Versions-Location`` on a container. ``X-History-Location``, understood by native Swift, is currently not supported by radosgw (although the feature is pending upstream).
The number of inotify watch instances available is limited system wide via a sysctl setting. It is possible for certain processes, such as pypi-server or elasticsearch from the ops repo, to consume a large number of inotify watches. If the system wide maximum is reached, then any process on the host or in any container on the host will be unable to create a new inotify watch. Systemd uses inotify watches, and if there are none available it is unable to restart services. The processes which synchronise the repo server contents between infra nodes also rely on inotify watches. If the repo servers fail to synchronise, or services fail to restart when expected, check the inotify watch limit, which is defined in the sysctl value fs.inotify.max_user_watches. Patches have merged to increase these limits, but existing environments, or those which have not upgraded to a recent enough point release, may have to apply an increased limit manually.
We are limiting the tarred inventory backups to 15, in addition to changes that only apply backups when the config has changed. These changes address an issue where the inventory was corrupted by parallel runs on large clusters.
When using the connection plugin’s ``container_user`` option, ``ansible_remote_tmp`` should be set to a system-writable path such as ``/var/tmp/``.
Upgrade Notes¶
The supported upgrade path from Xenial to Bionic is via re-installation of the host OS across all nodes and redeployment of the required services. The Rocky branch of OSA is intended as the transition point for such upgrades from Xenial to Bionic. At this time there is no support for in-place operating system upgrades (typically via ``do-release-upgrade``).
In Stein, Cinder stopped supporting configuration of backup drivers without the full class path. This means that you must now use one of the following values for ``cinder_service_backup_driver``:

cinder.backup.drivers.swift.SwiftBackupDriver
cinder.backup.drivers.ceph.CephBackupDriver

If you do not make this change, the Cinder backup service will refuse to start properly.
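For example, for the Swift driver:

```yaml
# The full class path is now mandatory.
cinder_service_backup_driver: cinder.backup.drivers.swift.SwiftBackupDriver
```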
The data structure for ``tempest_test_blacklist`` has been updated to add launchpad and/or bugzilla links for each test being skipped.
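A sketch of an updated entry; the ``test`` key and the bug numbers are illustrative assumptions, while the launchpad/bugzilla keys come from the note:

```yaml
tempest_test_blacklist:
  - test: tempest.api.example.test_flaky   # hypothetical test name
    launchpad: "1234567"                   # linked Launchpad bug
    bugzilla: "7654321"                    # linked Bugzilla bug
```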
The ``ceph-rgw`` playbooks now set ``rgw_swift_account_in_url = true`` and update the corresponding Keystone service catalog entry accordingly. Applications (such as monitoring scripts) that do not rely on service catalog lookup must be updated with the new endpoint URL that includes ``AUTH_%(tenant_id)s`` just like native Swift does, or, alternatively, should be updated to honor the service catalog after all.
The ``ceph-rgw`` playbooks now set ``rgw_swift_versioning_enabled = true``, adding support for object versioning for the ``object-store`` service.
Changed the default NTP server options in ``chrony.conf``. The ``offline`` option has been removed, ``minpoll``/``maxpoll`` have been removed in favour of the upstream defaults, and the ``iburst`` option was added to speed up initial time synchronization.
The variable cinder_iscsi_helper has been replaced by the new variable cinder_target_helper, since iscsi_helper has been deprecated in Cinder.
The data structure for ``galera_client_gpg_keys`` has been changed to be a dict passed directly to the applicable apt_key/rpm_key module. As such, any overrides should be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.
The default values for ``galera_client_gpg_keys`` have been changed for all supported platforms and will use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.
The data structure for ``galera_gpg_keys`` has been changed to be a dict passed directly to the applicable apt_key/rpm_key module. As such, any overrides should be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.
The default values for ``galera_gpg_keys`` have been changed for all supported platforms and will use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.
Glance containers will be rebooted to add the glance cache bind mount if glance_default_store is set to file and nfs is not in use.
The plugin names for the classifier and sfc changed:
networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin => flow_classifier
networking_sfc.services.sfc.plugin.SfcPlugin => sfc
The ``provider_networks`` library has been updated to support the definition of network interfaces that can automatically be added as ports to OVS provider bridges set up during a deployment. As a result, the ``network_interface`` value applied to the ``neutron_provider_networks`` override in ``user_variables.yml``, as described in previous Open vSwitch deployment guides, is no longer effective. If overrides are necessary, use ``network_interface_mappings`` within the provider network override and specify the respective bridge-to-interface mapping (e.g. “br-provider:bond1”). For more information, refer to the latest Open vSwitch deployment guide.
The RabbitMQ high watermark is set to ``0.2`` rather than ``0.4`` to prevent possible OOM situations; this limits the maximum memory usage by RabbitMQ to 40% rather than 80% of the memory visible to the RabbitMQ container. The override ``rabbitmq_memory_high_watermark`` can be used to alter the limit.
If your configuration previously set the ``libvirt/cpu_model`` and/or ``libvirt/cpu_model_extra_flags`` variables in a ``nova_nova_conf_overrides`` dictionary, you should consider moving those to ``nova_cpu_model`` and ``nova_cpu_model_extra_flags`` in the appropriate entry (normally ``kvm``) in the ``nova_virt_types`` dictionary.
The tasks creating a keystone service user have been removed, along with the related variables ``keystone_service_user_name`` and ``keystone_service_password``. This user can be deleted in existing deployments.
The data structure for ``rabbitmq_gpg_keys`` has been changed to be a dict passed directly to the applicable apt_key/rpm_key module. As such, any overrides should be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.
The default values for ``rabbitmq_gpg_keys`` have been changed for all supported platforms and will use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.
The default queue policy has changed to ``^(?!(amq\.)|(.*_fanout_)|(reply_)).*`` instead of ``^(?!amq\.).*`` for efficiency. The new HA policy excludes reply queues (these queues have a single consumer and a TTL policy), fanout queues (they have the TTL policy) and amq queues (they are auto-delete queues with a single consumer).
The default Mnesia ``dump_log_write_threshold`` value has changed to ``300`` instead of ``100`` for efficiency. ``dump_log_write_threshold`` specifies the maximum number of writes allowed to the transaction log before a new dump of the log is performed. Increasing this value can improve performance during queue/exchange/binding creation and destruction. The value should be between 100 and 1000. For more detail, see [1].

[1] http://erlang.org/doc/man/mnesia.html#dump_log_write_threshold
The option rabbitmq_disable_non_tls_listeners has been removed in favor of setting the bind address and port configuration directly using a new option rabbitmq_port_bindings. This new option is a hash allowing for multiple bind addresses and port configurations.
The repo server no longer uses pypiserver, so it has been removed. Along with this, the following variables have also been removed.
repo_pypiserver_port
repo_pypiserver_pip_packages
repo_pypiserver_package_path
repo_pypiserver_bin
repo_pypiserver_working_dir
repo_pypiserver_start_options
repo_pypiserver_init_overrides
The variable tempest_image_dir_owner has been removed in favour of using the default ansible user to create the image directory.
The glance v1 API has been removed upstream, and its deployment code has now been removed from this glance ansible role. The variable glance_enable_v1_api is removed.
The variables ceilometer_oslomsg_rpc_servers and ceilometer_oslomsg_notify_servers have been removed in favour of using ceilometer_oslomsg_rpc_host_group and ceilometer_oslomsg_notify_host_group instead.
Due to the smart-resources implementation, variables related to custom git paths for specific config files have been removed. All config files are now taken from the upstream git repo, but overrides and client configs are still supported. The following variables are no longer supported:
ceilometer_git_config_lookup_location
ceilometer_data_meters_git_file_path
ceilometer_event_definitions_git_file_path
ceilometer_gnocchi_resources_git_file_path
ceilometer_loadbalancer_v2_meter_definitions_git_file_path
ceilometer_osprofiler_event_definitions_git_file_path
ceilometer_polling_git_file_path
If you maintain a custom ceilometer git repository, you may still use the ceilometer_git_repo variable to provide the URL of your git repository.
The Tacker role now uses the default systemd_service role, so upstart is no longer supported. A new variable, tacker_init_config_overrides, has been added, with which the deployer may override predefined options. The variable program_override no longer has any effect, and tacker_service_names has been removed in favour of tacker_service_name.
The data structure for ceph_gpg_keys has been changed to be a list of dicts, each of which is passed directly to the applicable apt_key/rpm_key module. As such, any overrides should be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.
The default values for ceph_gpg_keys have been changed for all supported platforms and now use vendored keys. This means that task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.
A new variable, epel_gpg_keys, can be overridden to use a different GPG key for the EPEL-7 RPM package repo instead of the vendored key used by default.
Deprecation Notes¶
The variable aodh_requires_pip_packages is no longer required and has therefore been removed.
The variable barbican_requires_pip_packages is no longer required and has therefore been removed.
The following variables are no longer used and have therefore been removed.
ceilometer_requires_pip_packages
ceilometer_service_name
ceilometer_service_port
ceilometer_service_proto
ceilometer_service_type
ceilometer_service_description
In the ceph_client role, the only valid values for ceph_pkg_source are now ceph and distro. For Ubuntu, the Ubuntu Cloud Archive apt source is already set up by the openstack_hosts role, so there is no need for it to also be set up by the ceph_client role.
The variable cinder_requires_pip_packages is no longer required and has therefore been removed.
There was previously an environment variable (ANSIBLE_ROLE_FETCH_MODE) to set whether the roles in ansible-role-requirements.yml were fetched using ansible-galaxy or using git. However, the default has been git for some time, and since the introduction of the ceph-ansible repository for ceph deployment, using ansible-galaxy to download the roles does not work properly. This functionality has therefore been removed.
The variable designate_requires_pip_packages is no longer required and has therefore been removed.
The compression option in the galera_server role has been removed because it is no longer recommended by MariaDB. This means that all the dependencies from Percona, such as QPress, are no longer necessary.
The following variables have been removed because they are no longer used:
galera_percona_xtrabackup_repo
use_percona_upstream
galera_xtrabackup_compression
galera_server_percona_distro_packages
The variable galera_xtrabackup_threads has been renamed to galera_mariabackup_threads to reflect the change in the SST provider.
Dragonflow is no longer maintained as an OpenStack project and has therefore been removed from OpenStack-Ansible as a supported ML2 driver for neutron.
The get_nested filter has been removed, as it is not used by any roles/plays.
The variable glance_requires_pip_packages is no longer required and has therefore been removed.
The variable gnocchi_requires_pip_packages is no longer required and has therefore been removed.
The variable heat_requires_pip_packages is no longer required and has therefore been removed.
The variable horizon_requires_pip_packages is no longer required and has therefore been removed.
The variable ironic_requires_pip_packages is no longer required and has therefore been removed.
The log path /var/log/barbican is no longer used to capture service logs. All logging for the barbican service will now be sent directly to the systemd journal.
The log path /var/log/keystone is no longer used to capture service logs. All logging for the Keystone service will now be sent directly to the systemd journal.
The log path /var/log/congress is no longer used to capture service logs. All logging for the congress service will now be sent directly to the systemd journal.
The log path /var/log/cinder is no longer used to capture service logs. All logging for the cinder service will now be sent directly to the systemd journal.
The log path /var/log/blazar is no longer used to capture service logs. All logging for the blazar service will now be sent directly to the systemd journal.
The log path /var/log/aodh is no longer used to capture service logs. All logging for the aodh service will now be sent directly to the systemd journal.
The log path /var/log/ceilometer is no longer used to capture service logs. All logging for the ceilometer service will now be sent directly to the systemd journal.
The log path /var/log/designate is no longer used to capture service logs. All logging for the designate service will now be sent directly to the systemd journal.
The variable keystone_requires_pip_packages is no longer required and has therefore been removed.
The following variable name changes have been implemented in order to better reflect their purpose:
lxc_host_machine_quota_disabled -> lxc_host_btrfs_quota_disabled
lxc_host_machine_qgroup_space_limit -> lxc_host_btrfs_qgroup_space_limit
lxc_host_machine_qgroup_compression_limit -> lxc_host_btrfs_qgroup_compression_limit
The variable magnum_requires_pip_packages is no longer required and has therefore been removed.
The variable neutron_requires_pip_packages is no longer required and has therefore been removed.
The variable nova_requires_pip_packages is no longer required and has therefore been removed.
The variable octavia_requires_pip_packages is no longer required and has therefore been removed.
The variable octavia_image_downloader has been removed. The image download now uses the host designated by octavia_service_setup_host.
The variable octavia_ansible_endpoint_type has been removed. The endpoint used for ansible tasks has been hard set to the 'admin' endpoint, as is commonly used across all OSA roles.
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
trove_oslomsg_rpc_servers replaces trove_rabbitmq_servers
trove_oslomsg_rpc_port replaces trove_rabbitmq_port
trove_oslomsg_rpc_use_ssl replaces trove_rabbitmq_use_ssl
trove_oslomsg_rpc_userid replaces trove_rabbitmq_userid
trove_oslomsg_rpc_vhost replaces trove_rabbitmq_vhost
added trove_oslomsg_notify_servers
added trove_oslomsg_notify_port
added trove_oslomsg_notify_use_ssl
added trove_oslomsg_notify_userid
added trove_oslomsg_notify_vhost
added trove_oslomsg_notify_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
barbican_oslomsg_rpc_servers replaces rabbitmq_servers
barbican_oslomsg_rpc_port replaces rabbitmq_port
barbican_oslomsg_rpc_userid replaces barbican_rabbitmq_userid
barbican_oslomsg_rpc_vhost replaces barbican_rabbitmq_vhost
added barbican_oslomsg_rpc_use_ssl
added barbican_oslomsg_notify_servers
added barbican_oslomsg_notify_port
added barbican_oslomsg_notify_use_ssl
added barbican_oslomsg_notify_userid
added barbican_oslomsg_notify_vhost
added barbican_oslomsg_notify_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
aodh_oslomsg_rpc_servers replaces aodh_rabbitmq_servers
aodh_oslomsg_rpc_port replaces aodh_rabbitmq_port
aodh_oslomsg_rpc_use_ssl replaces aodh_rabbitmq_use_ssl
aodh_oslomsg_rpc_userid replaces aodh_rabbitmq_userid
aodh_oslomsg_rpc_vhost replaces aodh_rabbitmq_vhost
aodh_oslomsg_rpc_password replaces aodh_rabbitmq_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
ceilometer_oslomsg_rpc_servers replaces rabbitmq_servers
ceilometer_oslomsg_rpc_port replaces rabbitmq_port
ceilometer_oslomsg_rpc_userid replaces ceilometer_rabbitmq_userid
ceilometer_oslomsg_rpc_vhost replaces ceilometer_rabbitmq_vhost
added ceilometer_oslomsg_rpc_use_ssl
added ceilometer_oslomsg_notify_servers
added ceilometer_oslomsg_notify_port
added ceilometer_oslomsg_notify_use_ssl
added ceilometer_oslomsg_notify_userid
added ceilometer_oslomsg_notify_vhost
added ceilometer_oslomsg_notify_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
designate_oslomsg_rpc_servers replaces designate_rabbitmq_servers
designate_oslomsg_rpc_port replaces designate_rabbitmq_port
designate_oslomsg_rpc_use_ssl replaces designate_rabbitmq_use_ssl
designate_oslomsg_rpc_userid replaces designate_rabbitmq_userid
designate_oslomsg_rpc_vhost replaces designate_rabbitmq_vhost
designate_oslomsg_notify_servers replaces designate_rabbitmq_telemetry_servers
designate_oslomsg_notify_port replaces designate_rabbitmq_telemetry_port
designate_oslomsg_notify_use_ssl replaces designate_rabbitmq_telemetry_use_ssl
designate_oslomsg_notify_userid replaces designate_rabbitmq_telemetry_userid
designate_oslomsg_notify_vhost replaces designate_rabbitmq_telemetry_vhost
designate_oslomsg_notify_password replaces designate_rabbitmq_telemetry_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
magnum_oslomsg_rpc_servers replaces rabbitmq_servers
magnum_oslomsg_rpc_port replaces rabbitmq_port
magnum_oslomsg_rpc_userid replaces magnum_rabbitmq_userid
magnum_oslomsg_rpc_vhost replaces magnum_rabbitmq_vhost
added magnum_oslomsg_rpc_use_ssl
added magnum_oslomsg_notify_servers
added magnum_oslomsg_notify_port
added magnum_oslomsg_notify_use_ssl
added magnum_oslomsg_notify_userid
added magnum_oslomsg_notify_vhost
added magnum_oslomsg_notify_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging Notify parameters in order to abstract the messaging service from the actual backend server deployment.
swift_oslomsg_notify_servers replaces swift_rabbitmq_telemetry_servers
swift_oslomsg_notify_port replaces swift_rabbitmq_telemetry_port
swift_oslomsg_notify_use_ssl replaces swift_rabbitmq_telemetry_use_ssl
swift_oslomsg_notify_userid replaces swift_rabbitmq_telemetry_userid
swift_oslomsg_notify_vhost replaces swift_rabbitmq_telemetry_vhost
swift_oslomsg_notify_password replaces swift_rabbitmq_telemetry_password
The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
octavia_oslomsg_rpc_servers replaces octavia_rabbitmq_servers
octavia_oslomsg_rpc_port replaces octavia_rabbitmq_port
octavia_oslomsg_rpc_use_ssl replaces octavia_rabbitmq_use_ssl
octavia_oslomsg_rpc_userid replaces octavia_rabbitmq_userid
octavia_oslomsg_rpc_vhost replaces octavia_rabbitmq_vhost
octavia_oslomsg_notify_servers replaces octavia_rabbitmq_telemetry_servers
octavia_oslomsg_notify_port replaces octavia_rabbitmq_telemetry_port
octavia_oslomsg_notify_use_ssl replaces octavia_rabbitmq_telemetry_use_ssl
octavia_oslomsg_notify_userid replaces octavia_rabbitmq_telemetry_userid
octavia_oslomsg_notify_vhost replaces octavia_rabbitmq_telemetry_vhost
octavia_oslomsg_notify_password replaces octavia_rabbitmq_telemetry_password
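Existing overrides for the old names should be renamed in user_variables.yml; a minimal example using the octavia mapping above (port and SSL values shown are illustrative):

```yaml
# user_variables.yml
# Before (no longer honoured):
#   octavia_rabbitmq_port: 5671
# After:
octavia_oslomsg_rpc_port: 5671
octavia_oslomsg_rpc_use_ssl: true
```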
The repo server’s reverse proxy for pypi has been removed, leaving only the pypiserver to serve packages already on the repo server. The attempt to reverse proxy upstream pypi turned out to be very unstable, with increased complexity for deployers using proxies or offline installs. With this, the variables repo_nginx_pypi_upstream and repo_nginx_proxy_cache_path have also been removed.
The package cache on the repo server has been removed. If caching of packages is desired, it should be set up outside of OpenStack-Ansible, and the variable lxc_container_cache_files (for LXC containers) or nspawn_container_cache_files_from_host (for nspawn containers) can be used to copy the appropriate host configuration into the containers on creation. Alternatively, environment variables can be set to use the cache in the host /etc/environment file prior to container creation, or deployment_environment_variables can have the right variables set to use it. The following variables have been removed:
repo_pkg_cache_enabled
repo_pkg_cache_port
repo_pkg_cache_bind
repo_pkg_cache_dirname
repo_pkg_cache_dir
repo_pkg_cache_owner
repo_pkg_cache_group
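As a sketch of the container-copy approach (the src/dest dict structure for lxc_container_cache_files is an assumption; verify it against the lxc_container_create role documentation):

```yaml
# user_variables.yml -- replicate a host-side apt proxy fragment
# into newly created LXC containers
lxc_container_cache_files:
  - src: /etc/apt/apt.conf.d/01proxy    # file on the host
    dest: /etc/apt/apt.conf.d/01proxy   # same path inside the container
```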
The repo build process no longer builds packaged venvs. Instead, the venvs are created on the target hosts as the install process for each service requires them. This opens up the opportunity for roles to create multiple venvs, and for any role to create venvs; neither of these options was possible in previous releases.
The following variables therefore have been removed.
repo_build_venv_selective
repo_build_venv_rebuild
repo_build_venv_timeout
repo_build_concurrency
repo_build_venv_build_dir
repo_build_venv_dir
repo_build_venv_pip_install_options
repo_build_venv_command_options
repo_venv_default_pip_packages
The variable repo_requires_pip_packages is no longer required and has therefore been removed.
The variable sahara_requires_pip_packages is no longer required and has therefore been removed.
The variable swift_requires_pip_packages is no longer required and has therefore been removed.
The variable tempest_requires_pip_packages is no longer required and has therefore been removed.
The variable tempest_image_downloader has been removed. The image download now uses the host designated by tempest_service_setup_host.
The variable trove_requires_pip_packages is no longer required and has therefore been removed.
Security Issues¶
Avoid setting quotas too high for your cloud, since this can impact the performance of other services. If Loadbalancer quotas are not set properly, or RBAC is not properly set up, this can lead to a potential Denial-of-Service attack.
The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the ssl_protocol variable.
The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the barbican_ssl_protocol variable.
The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the horizon_ssl_protocol variable.
The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the keystone_ssl_protocol variable.
The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the gnocchi_ssl_protocol variable.
The default TLS version has been set to force-tlsv12. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the haproxy_ssl_bind_options variable.
The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the trove_ssl_protocol variable.
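For example, a deployer who still needs to accept TLS 1.1 clients could relax the per-service default (values shown are illustrative; the per-service variables take an Apache-style SSLProtocol value, while the haproxy variable takes haproxy bind options):

```yaml
# user_variables.yml -- example relaxations of the TLS 1.2-only defaults
keystone_ssl_protocol: "TLSv1.1 TLSv1.2"   # Apache-style protocol list
haproxy_ssl_bind_options: "force-tlsv12"   # haproxy bind option (the default)
```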
Bug Fixes¶
The ceph-rgw playbooks now include the AUTH_%(tenant_id)s suffix in the Keystone object-store endpoint. This aligns radosgw’s behaviour with that of native Swift. It also enables radosgw to support public read ACLs on containers, and temporary URLs on objects, in the same way that Swift does (bug 1800637).
ceilometer-polling services running on compute nodes did not have the polling namespace configured. Because of this, they used the default value of running all pollsters from the central and compute namespaces. However, the pollsters from the central namespace do not need to run on every compute node. This is fixed by running only the compute pollsters on compute nodes.
Fixes bug https://bugs.launchpad.net/openstack-ansible/+bug/1778098, where the playbook failed if horizon_custom_themes was specified and the directory for the theme was not provided.
Fixes neutron HA routers by enabling neutron-l3-agent to invoke the required helper script.
The quota for security group rules was erroneously set to 100 overall, while the aim was to allow 100 security group rules per security group (i.e. 100 multiplied by the number of security groups). This patch fixes the discrepancy.
When using LXC containers with a copy-on-write back-end, the lxc_hosts role execution would fail due to undefined variables with the nspawn_host_ prefix. This issue has now been fixed.
The RyuBgpDriver is no longer available and has been replaced by the OsKenBgpDriver from the neutron_dynamic_routing project.
In https://review.openstack.org/582633 an adjustment was made to the openstack-ansible wrapper which mistakenly changed the intended behaviour. The wrapper is only meant to include the extra vars and invoke the inventory if ansible-playbook was executed from inside the openstack-ansible repository clone (typically /opt/openstack-ansible), but the change made the path irrelevant. This has now been fixed: ansible-playbook and ansible will only invoke the inventory and include extra vars when invoked from inside the git clone path.
With the release of CentOS 7.6, deployments were breaking and becoming very slow when we restarted dbus in order to pick up some PolicyKit changes. However, those changes were never actually used, so they were happening for no reason. We no longer make any modifications to the systemd-machined configuration and/or PolicyKit, to maintain upstream compatibility.
The conditional that determines whether the sso_callback_template.html file is deployed for federated deployments has been fixed.
Other Notes¶
The config_template action module has been moved into its own git repository (openstack/ansible-config_template). This has been done to simplify the use of the plugin in other, non OpenStack-Ansible projects.
When running keystone with apache (httpd), all apache logs will be stored in the standard apache log directory, which is controlled by the distro-specific variable keystone_apache_default_log_folder.
When running aodh with apache (httpd), all apache logs will be stored in the standard apache log directory, which is controlled by the distro-specific variable aodh_apache_default_log_folder.
Code which added 'Acquire::http::No-Cache true' to the host and container apt preferences when http proxy environment variables were set has been removed. This setting is only required to work around issues introduced by badly configured http proxies. In some cases proxies can improperly cache the apt Releases and Packages files, leading to package installation errors. If a deployment is behind a badly configured proxy, the deployer can add the necessary apt config fragment as part of host provisioning; OSA will replicate that config into any containers that are created. This setting can be removed from existing deployments, if required, by manually deleting the file /etc/apt/apt.conf.d/00apt-no-cache from all hosts and containers.
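For deployers who do still need the workaround, the fragment can be recreated during host provisioning; a minimal sketch as an Ansible task (the task itself is illustrative, only the file path and option come from the note above):

```yaml
# Re-create the apt no-cache fragment on hosts behind a badly
# configured proxy; OSA replicates it into new containers.
- name: Disable apt caching as a proxy workaround
  copy:
    content: 'Acquire::http::No-Cache "true";'
    dest: /etc/apt/apt.conf.d/00apt-no-cache
    mode: "0644"
```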