Newton Series Release Notes¶
14.2.16-7¶
Known Issues¶
If newton is still in use and a new build is executed, the repo-build process will fail due to a new PBR library release which was issued after newton was declared end-of-life (EOL). This issue will show up if gnocchi is being used in the environment and may also show up as a venv build failure for other services. This issue can be resolved either by upgrading the environment to a recent OpenStack-Ansible Ocata release, or by doing the following.
1. Fork the openstack/openstack-ansible-repo_build git repository.
2. Create a stable/newton branch in the fork based on the newton-eol tag.
3. Cherry-pick https://review.openstack.org/557757 into the fork on that branch.
4. Fork the openstack/openstack-ansible git repository.
5. Change the ansible-role-requirements.yml file to include the URL and git SHA of your patched fork of the repo_build role.
Once this is done, the repo build process should complete as it did before and the updated software can now be deployed.
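The last step above can be sketched as follows; the fork URL and SHA shown are placeholders for illustration, not real values:

```yaml
# Hypothetical ansible-role-requirements.yml entry pointing the
# repo_build role at a patched fork (URL and SHA are placeholders)
- name: repo_build
  scm: git
  src: https://github.com/example/openstack-ansible-repo_build
  version: 0123456789abcdef0123456789abcdef01234567
```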
The os_neutron role uses the same handler name as the etcd role for executing systemctl daemon-reload. Newton uses Ansible 2.1.x, and that version of Ansible cannot process more than one handler with the same name in a single playbook, so only one handler ends up being executed. The handler that does get executed is the etcd handler, which is skipped unless the deployment is using Calico. As a result, upgrades of the newton software never end up running the intended version of the neutron software. This can be seen by observing that the running processes are still using the older software venv.
This issue can be resolved either by upgrading the environment to a recent OpenStack-Ansible Ocata release, or by doing the following.
1. Fork the openstack/openstack-ansible-os_neutron git repository.
2. Create a stable/newton branch in the fork based on the newton-eol tag.
3. Modify the handler named "Reload systemd daemon" and all references to it in the role to something else (for example, "os_neutron Reload systemd daemon").
4. Fork the openstack/openstack-ansible git repository.
5. Change the ansible-role-requirements.yml file to include the URL and git SHA of your patched fork of the os_neutron role.
Once this is done, the upgrade process for neutron services will work properly.
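The final step above can be sketched as follows; the fork URL and SHA shown are placeholders for illustration, not real values:

```yaml
# Hypothetical ansible-role-requirements.yml entry pointing the
# os_neutron role at a patched fork (URL and SHA are placeholders)
- name: os_neutron
  scm: git
  src: https://github.com/example/openstack-ansible-os_neutron
  version: 0123456789abcdef0123456789abcdef01234567
```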
In the lxc_hosts role execution, we make use of the images produced on a daily basis by images.linuxcontainers.org. Recent changes in the way those images are produced have resulted in changes to the default /etc/resolv.conf in that default image. As such, the cache preparation fails when executed. For all releases from newton through queens, the workaround to get past the error is to add the following to the /etc/openstack_deploy/user_variables.yml file.

lxc_cache_prep_pre_commands: "rm -f /etc/resolv.conf || true"
lxc_cache_prep_post_commands: "ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf -f"
14.2.16¶
Other Notes¶
The max_fail_percentage playbook option has been used with the default playbooks since the first release of the playbooks back in Icehouse. While the intention was to allow large-scale deployments to succeed in cases where a single node fails due to transient issues, this option has produced more problems than it solves. If a failure occurs that is transient in nature but is under the set failure percentage, the playbook will report a success, which can cause silent failures depending on where the failure happened. If a deployer finds themselves in this situation, the problems are then compounded because the tools will report that there are no known issues. To ensure deployers have the best deployment experience and the most accurate information, a change has been made to remove the max_fail_percentage option from all of the default playbooks. The removal of this option has the side effect of requiring the deployer to skip specific hosts should one need to be omitted from a run, but has the benefit of eliminating silent, hard to track down failures. To skip a failing host for a given playbook run, use the --limit '!$HOSTNAME' CLI switch for the specific run. Once the issues have been resolved for the failing host, rerun the specific playbook without the --limit option to ensure everything is in sync.
14.2.15¶
New Features¶
The galera cluster now supports cluster health checks over HTTP using port 9200. The new cluster check ensures a node is healthy by running a simple query against the wsrep sync status using the monitoring user. This change provides a more robust cluster check, ensuring we have the most fault-tolerant galera cluster possible.
The Galera health check has been improved and now relies on an xinetd service. By default, the service is inaccessible (filtered with the no_access directive). You can override the directive by setting any valid xinetd value for galera_monitoring_allowed_source.
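For example, a deployer might open the check to a monitoring network by overriding the variable (the subnet below is an illustrative placeholder; any valid xinetd only_from value can be used):

```yaml
# user_variables.yml: allow the Galera HTTP health check (port 9200)
# to be queried from a specific subnet and localhost (placeholder values)
galera_monitoring_allowed_source: "192.0.2.0/24 127.0.0.1"
```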
HAProxy services that use backend nodes that are not in the Ansible inventory can now be specified manually by setting haproxy_backend_nodes to a list of name and ip_addr settings.
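A minimal sketch of such a list, assuming two hypothetical external nodes (the names and addresses are placeholders):

```yaml
# Hypothetical service definition fragment: backend nodes that are
# not part of the Ansible inventory, listed by name and ip_addr
haproxy_backend_nodes:
  - name: external-node-1
    ip_addr: 192.0.2.10
  - name: external-node-2
    ip_addr: 192.0.2.11
```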
Known Issues¶
LXD 2.0.11 grew a new feature (a 'description' tag for a container) and this has been added to the stable Ubuntu Xenial 16.04 release. Unfortunately, pylxd 2.0.5 cannot handle this extra attribute and crashes.
For newton releases 14.2.14 and earlier, we recommend pinning the LXD version for all newton deployments at a version less than 2.0.11. For newton releases 14.2.15 onwards, the newer release of LXD should be usable again.
Related-Bugs:
For all newton releases up to 14.2.14, when executing the os-nova-install.yml playbook the nova-novncproxy and nova-spicehtml5proxy services will fail. The workaround to resolve this issue is to restart the services.

cd /opt/rpc-openstack/openstack-ansible/playbooks
# start the service again
# replace nova-novncproxy with nova-spicehtml5proxy when appropriate
ansible nova_console -m service -a 'name=nova-novncproxy state=restarted'
# set the appropriate facts to prevent the playbook trying
# to reload it again when the playbook is run again
ansible nova_console -m ini_file -a 'dest=/etc/ansible/facts.d/openstack_ansible.fact section=nova option=need_service_restart value=False'
This issue has been resolved in the 14.2.15 release.
14.2.12¶
Known Issues¶
If the protocol of either the keystone admin or internal endpoints is 'https' and SSL is being terminated at a load balancer, tasks which verify that services are responsive and perform the initial service setup through the keystone hosts' web server ports may fail.
Set keystone_mod_wsgi_enabled to false to deploy keystone under uWSGI and allow the web server to be bypassed during these tasks. See Launchpad Bug 1699191 for more details.
14.2.11¶
Bug Fixes¶
The sysstat package was installed on all distributions, but it was only configured to run on Ubuntu and openSUSE. It would not run on CentOS due to bad SELinux contexts and file permissions on /etc/cron.d/sysstat. This has been fixed and sysstat now runs properly on CentOS.
14.2.10¶
New Features¶
A new repository for installing modern Erlang from ESL (Erlang Solutions) has been added, giving us the ability to install and support modern stable Erlang on numerous operating systems.
The ability to set the RabbitMQ repo URL for both erlang and RabbitMQ itself has been added. This has been done to allow deployers to define the location of a given repo without having to fully redefine the entire set of definitions for a specific repository. The default variables rabbitmq_gpg_keys, rabbitmq_repo_url, and rabbitmq_erlang_repo_url have been created to facilitate this capability.
The default ulimit for RabbitMQ is now 65536. Deployers can still adjust this limit using the rabbitmq_ulimit Ansible variable.
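For instance, raising the limit is a one-line override (the value shown is illustrative):

```yaml
# user_variables.yml: raise the RabbitMQ open file descriptor limit
# above the 65536 default
rabbitmq_ulimit: 131072
```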
Upgrade Notes¶
Changing to the ESL repos has no upgrade impact. The version of Erlang provided by ESL is newer than what is found in the distro repos. Furthermore, a pin has been added to ensure that APT always uses the ESL repos as its preferred source.
Security Issues¶
The net.bridge.bridge-nf-call-* kernel parameters were set to 0 in previous releases to improve performance, and it was left up to neutron to adjust these parameters when security groups were applied. This could cause situations where bridge traffic was not sent through iptables, which rendered security groups ineffective and could allow unexpected ingress and egress traffic within the cloud.
These kernel parameters are now set to 1 on all hosts by the openstack_hosts role, which ensures that bridge traffic is always sent through iptables.
PermitRootLogin in the ssh configuration has changed from yes to without-password. This will only allow ssh to authenticate root via a key.
Bug Fixes¶
Based on documentation from RabbitMQ (https://www.rabbitmq.com/which-erlang.html), this change ensures the version of Erlang we're using across distros is consistent and supported by RabbitMQ.
14.2.9¶
New Features¶
Tags have been added to all of the common tasks with the prefix "common-". This has been done to allow a deployer to rapidly run any of the common tasks on an as-needed basis without having to rerun an entire playbook.
Extra headers can be added to keystone responses by adding items to keystone_extra_headers. Example:

keystone_extra_headers:
  - parameter: "Access-Control-Expose-Headers"
    value: "X-Subject-Token"
  - parameter: "Access-Control-Allow-Headers"
    value: "Content-Type, X-Auth-Token"
  - parameter: "Access-Control-Allow-Origin"
    value: "*"
Upgrade Notes¶
The openstack-ansible-security role is now retired and the ansible-hardening role replaces it. The ansible-hardening role provides the same functionality and will be the maintained hardening role going forward.
Bug Fixes¶
In Ubuntu, the dnsmasq package actually includes init scripts and service configuration which conflict with LXC and are best not included. The actual dependent package is dnsmasq-base. The package list has been adjusted and a task added to remove the dnsmasq package and purge the related configuration files from all LXC hosts.
14.2.8¶
New Features¶
The os_nova role now provides for doing online data migrations once the db sync has been completed. The data migrations will not be executed until the boolean variable nova_all_software_updated is true. This variable will need to be set by the playbook consuming the role.
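A minimal sketch of a consuming play, assuming the deployer has already verified that all nova software is updated (this is a hypothetical play, not the shipped playbook):

```yaml
# Hypothetical play: run the os_nova role with online data
# migrations enabled once all software has been updated
- name: Deploy nova with online data migrations
  hosts: nova_all
  user: root
  vars:
    nova_all_software_updated: true
  roles:
    - role: "os_nova"
```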
Upgrade Notes¶
LXC containers will have their time zone data synchronized with their physical host machines. This is being done because containers have been assumed to use UTC while a host could be using something else. This causes issues in some services, such as ceilometer, and can result in general time differences in logging.
Bug Fixes¶
MariaDB 10.0.32, released on Aug 17, when configured to use xtrabackup for the SST, requires percona xtrabackup version 2.3.5 or higher. As xtrabackup is the default SST mechanism in the galera_server role, the version used has been updated from 2.2.13 to 2.3.5 for the x86_64 hardware architecture. See the percona release notes for 2.3.2 for more details of what was included in the fix.
14.2.7¶
New Features¶
The os_cinder role now provides for doing online data migrations once the db sync has been completed. The data migrations will not be executed until the boolean variable cinder_all_software_updated is true. This variable will need to be set by the playbook consuming the role.
The os-cinder-install.yml playbook will now execute a rolling upgrade of cinder, including database migrations (both schema and online), as per the procedure described in the cinder documentation. When haproxy is used as the load balancer, the backend being changed will be drained before changes are made, then added back to the pool once the changes are complete.
It is now possible to disable the heat stack password field in horizon. The horizon_enable_heatstack_user_pass variable has been added and defaults to True.
The os-neutron-install.yml playbook will now execute a rolling upgrade of neutron, including database migrations (both expand and contract), as per the procedure described in the neutron documentation.
The os-nova-install.yml playbook will now execute a rolling upgrade of nova, including database migrations, as per the procedure described in the nova documentation.
Upgrade Notes¶
The entire repo build process is now idempotent. From now on when the repo build is re-run, it will only fetch updated git repositories and rebuild the wheels/venvs if the requirements have changed, or a new release is being deployed.
The git clone part of the repo build process now only happens when the requirements change. A git reclone can be forced by using the boolean variable repo_build_git_reclone.
The python wheel build process now only happens when requirements change. A wheel rebuild may be forced by using the boolean variable repo_build_wheel_rebuild.
The python venv build process now only happens when requirements change. A venv rebuild may be forced by using the boolean variable repo_build_venv_rebuild.
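If a complete rebuild is ever needed, all three booleans can be overridden at once in /etc/openstack_deploy/user_variables.yml:

```yaml
# Force the repo build to reclone git repositories and rebuild
# all wheels and venvs on the next run
repo_build_git_reclone: true
repo_build_wheel_rebuild: true
repo_build_venv_rebuild: true
```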
The repo build process now only has the following tags, providing a clear path for each deliverable. The tag repo-build-install completes the installation of required packages. The tag repo-build-wheels completes the wheel build process. The tag repo-build-venvs completes the venv build process. Finally, the tag repo-build-index completes the manifest preparation and indexing of the os-releases and links folders.
14.2.6¶
New Features¶
The horizon_images_allow_location variable is added to support the IMAGES_ALLOW_LOCATION setting in the horizon local_settings.py file, allowing an external location to be specified during image creation.
14.2.5¶
New Features¶
The new option haproxy_backend_arguments can be utilized to add arbitrary options to an HAProxy backend, like tcp-check or http-check.
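For example, an HTTP health check could be attached to a backend like so (the particular check shown is an illustrative example, not a required setting):

```yaml
# Arbitrary extra backend options passed through to HAProxy;
# this httpchk line is illustrative only
haproxy_backend_arguments:
  - "option httpchk HEAD / HTTP/1.0"
```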
14.2.4¶
Known Issues¶
When executing a deployment which includes the telemetry systems (ceilometer, gnocchi, aodh), the repo build will fail due to the inability of pip to read the constraints properly from the extras section in ceilometer's setup.cfg. The current workaround for this is to add the following content to /etc/openstack_deploy/user_variables.yml.

repo_build_upper_constraints_overrides:
  - gnocchiclient<3.0.0
Bug Fixes¶
The workaround of adding gnocchiclient to the repo_build_upper_constraints_overrides variable is no longer required. The appropriate constraints have been implemented as a global pin.
Upstream is now depending on version 2.1.0 of ldappool.
14.2.3¶
New Features¶
New variables have been added to allow a deployer to customize an aodh systemd unit file to their liking.
The task dropping the aodh systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_aodh role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the aodh_*_init_config_overrides variables, which use the config_template task to change template defaults.
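As a hypothetical sketch of such an override, a deployer who needs the old, longer drain window back could raise the values again via user_variables.yml; the section-keyed structure below assumes the config_template convention of INI-style sections, and the variable name and values are illustrative:

```yaml
# Hypothetical override restoring longer systemd timeouts for the
# aodh-api unit through the config_template-backed override variable
aodh_api_init_config_overrides:
  Service:
    TimeoutSec: 300
    RestartSec: 150
```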
New variables have been added to allow a deployer to customize a ceilometer systemd unit file to their liking.
The task dropping the ceilometer systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_ceilometer role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ceilometer_*_init_config_overrides variables, which use the config_template task to change template defaults.
New variables have been added to allow a deployer to customize a cinder systemd unit file to their liking.
The task dropping the cinder systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
New variables have been added to allow a deployer to customize a glance systemd unit file to their liking.
The task dropping the glance systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_glance role, the systemd unit RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. This value can be adjusted by using the glance_*_init_config_overrides variables, which use the config_template task to change template defaults.
New variables have been added to allow a deployer to customize a gnocchi systemd unit file to their liking.
The task dropping the gnocchi systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_gnocchi role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the gnocchi_*_init_config_overrides variables, which use the config_template task to change template defaults.
New variables have been added to allow a deployer to customize a heat systemd unit file to their liking.
The task dropping the heat systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_heat role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the heat_*_init_config_overrides variables, which use the config_template task to change template defaults.
New variables have been added to allow a deployer to customize an ironic systemd unit file to their liking.
The task dropping the ironic systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_ironic role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ironic_*_init_config_overrides variables, which use the config_template task to change template defaults.
New variables have been added to allow a deployer to customize a keystone systemd unit file to their liking.
The task dropping the keystone systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_keystone role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the keystone_*_init_config_overrides variables, which use the config_template task to change template defaults.
Removed the dependency on cinder_backends_rbd_inuse in nova.conf when setting the rbd_user and rbd_secret_uuid variables. Cinder delivers all necessary values via RPC when attaching the volume, so those variables are only necessary for ephemeral disks stored in Ceph. These variables are required to be set on the cinder-volume side under the backend section.
New variables have been added to allow a deployer to customize a magnum systemd unit file to their liking.
The task dropping the magnum systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_magnum role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the magnum_*_init_config_overrides variables, which use the config_template task to change template defaults.
New variables have been added to allow a deployer to customize a neutron systemd unit file to their liking.
The task dropping the neutron systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_neutron role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the neutron_*_init_config_overrides variables, which use the config_template task to change template defaults.
New variables have been added to allow a deployer to customize a nova systemd unit file to their liking.
The task dropping the nova systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_nova role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the nova_*_init_config_overrides variables, which use the config_template task to change template defaults.
New variables have been added to allow a deployer to customize a swift systemd unit file to their liking.
The task dropping the swift systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_swift role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the swift_*_init_config_overrides variables, which use the config_template task to change template defaults.
Upgrade Notes¶
For the os_aodh role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the aodh_*_init_config_overrides variables, which use the config_template task to change template defaults.
For the os_ceilometer role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ceilometer_*_init_config_overrides variables, which use the config_template task to change template defaults.
For the os_glance role, the systemd unit RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. This value can be adjusted by using the glance_*_init_config_overrides variables, which use the config_template task to change template defaults.
For the os_gnocchi role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the gnocchi_*_init_config_overrides variables, which use the config_template task to change template defaults.
For the os_heat role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the heat_*_init_config_overrides variables, which use the config_template task to change template defaults.
For the os_ironic role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ironic_*_init_config_overrides variables, which use the config_template task to change template defaults.
For the os_keystone role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the keystone_*_init_config_overrides variables, which use the config_template task to change template defaults.
For the os_magnum role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the magnum_*_init_config_overrides variables, which use the config_template task to change template defaults.
For the os_neutron role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the neutron_*_init_config_overrides variables, which use the config_template task to change template defaults.
For the os_nova role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the nova_*_init_config_overrides variables, which use the config_template task to change template defaults.
For the os_swift role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the swift_*_init_config_overrides variables, which use the config_template task to change template defaults.
14.2.2¶
New Features¶
Allows SSL connections to Galera. The galera_use_ssl option has to be set to true; in this case a self-signed CA certificate or a user-provided CA certificate will be delivered to the container/host.
Implements the ability to connect to MySQL over SSL. The galera_use_ssl option has to be set to true (the default); in this case the playbooks create a self-signed SSL bundle and set up the MySQL configuration to use it, or distribute a user-provided bundle throughout the Galera nodes.
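A minimal user_variables.yml sketch for the option described above (the playbook-generated self-signed bundle is used when no user-provided bundle is configured):

```yaml
# user_variables.yml -- minimal sketch: enable SSL for Galera/MySQL
# connections using the playbook-generated self-signed bundle.
galera_use_ssl: true
```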
Bug Fixes¶
Nova features that use libguestfs (libvirt password/key injection) now work on compute hosts running Ubuntu. When Nova is deployed to Ubuntu compute hosts and either nova_libvirt_inject_key or nova_libvirt_inject_password are set to True, then kernels stored in /boot/vmlinuz-* will be made readable to the nova user. See launchpad bug 1507915.
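A sketch of the two toggles named above, as they might appear in user_variables.yml:

```yaml
# user_variables.yml -- sketch: enable libvirt key and password
# injection on Ubuntu compute hosts (either toggle is sufficient to
# trigger the kernel-readability change described above).
nova_libvirt_inject_key: True
nova_libvirt_inject_password: True
```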
14.2.1¶
New Features¶
Add support for the cinder v3 API. This is enabled by default, but can be disabled by setting the cinder_enable_v3_api variable to false.
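For deployers who want to opt out of the new API version, a one-line user_variables.yml sketch:

```yaml
# user_variables.yml -- sketch: opt out of the cinder v3 API
# (enabled by default as of this release).
cinder_enable_v3_api: false
```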
For the os_cinder role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the cinder_*_init_config_overrides variables, which use the config_template task to change template defaults.
The haproxy-server role now allows tunable parameters to be set. To do so, define a dictionary of options in the config files, mentioning those which have to be changed (defaults for the remaining ones are set in the template). The maxconn global option is also tunable.
Add support for neutron as an enabled_network_interface.
The ironic_neutron_provisioning_network_name and ironic_neutron_cleaning_network_name variables can be set to the names of the neutron networks to use for provisioning and cleaning. The Ansible tasks will determine the appropriate UUID for each network. Alternatively, ironic_neutron_provisioning_network_uuid or ironic_neutron_cleaning_network can be used to directly specify the UUID of the networks. If both ironic_neutron_provisioning_network_name and ironic_neutron_provisioning_network_uuid are specified, the specified UUID will be used. If only the provisioning network is specified, the cleaning network will default to the same network.
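A user_variables.yml sketch for the name-based form described above (the network names are assumptions for illustration; the cleaning network falls back to the provisioning network if unset):

```yaml
# user_variables.yml -- sketch: let the Ansible tasks resolve the
# UUIDs from these (assumed) neutron network names.
ironic_neutron_provisioning_network_name: "ironic-provision"
ironic_neutron_cleaning_network_name: "ironic-clean"
```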
Upgrade Notes¶
During the keepalived role upgrade the keepalived process will restart and introduce a brief service disruption.
Deprecation Notes¶
The variables cinder_sigkill_timeout and cinder_restart_wait have been deprecated and will be removed in Pike.
Critical Issues¶
A bug that caused the Keystone credential keys to be lost when the playbook is run during a rebuild of the first Keystone container has been fixed. Please see launchpad bug 1667960 for more details.
Other Notes¶
The keepalived role was updated, and now includes an optional way to configure VRRP script timeouts. See also: VRRP timeout PR on keepalived role.
14.2.0¶
New Features¶
The galera_client role will default to using the galera_repo_url URL if the value for it is set. This simplifies using an alternative mirror for the MariaDB server and client, as only one variable needs to be set to cover them both.
Add a get_networks command to the neutron library. This will return network information for all networks, and fail if the specified net_name network is not present. If no net_name is specified, network information for all networks will be returned without performing a check on an existing net_name network.
The default behaviour of ensure_endpoint in the keystone module has changed to update an existing endpoint, if one exists that matches the service name, type, region and interface. This ensures that no duplicate service entries can exist per region.
The repo server file system structure has been updated to allow multiple operating systems running multiple architectures to be run at the same time and served from a single server without impacting pools, venvs, wheel archives, and manifests. The new structure follows the pattern $RELEASE/$OS_TYPE-$ARCH and has been applied to os-releases, venvs, and pools.
The deployer can now define an environment variable GROUP_VARS_PATH with the folders of its choice (separated by the colon sign) to define a user-space group_vars folder. These vars will apply but be (currently) overridden by the OpenStack-Ansible default group vars, by the set facts, and by the user_* variables. If the deployer defines multiple paths, the variables found are merged, and precedence increases from left to right (the last defined in GROUP_VARS_PATH wins).
The deployer can now define an environment variable HOST_VARS_PATH with the folders of its choice (separated by the colon sign) to define a user-space host_vars folder. These vars will apply but be (currently) overridden by the OpenStack-Ansible default host vars, by the set facts, and by the user_* variables. If the deployer defines multiple paths, the variables found are merged, and precedence increases from left to right (the last defined in HOST_VARS_PATH wins).
Upgrade Notes¶
The repo server file system structure has been updated to allow multiple operating systems running multiple architectures to be run at the same time and served from a single server without impacting pools, venvs, wheel archives, and manifests. The new structure follows the pattern $RELEASE/$OS_TYPE-$ARCH and has been applied to os-releases, venvs, and pools.
Deprecation Notes¶
The variables galera_client_apt_repo_url and galera_client_yum_repo_url are deprecated in favour of the common variable galera_client_repo_url.
The update state for the ensure_endpoint method of the keystone module is now deprecated, and will be removed in the Queens cycle. Setting state to present will achieve the same result.
14.1.1¶
New Features¶
The new provider network attribute sriov_host_interfaces is added to support SR-IOV network mappings inside Neutron. The provider_network adds the new items network_sriov_mappings and network_sriov_mappings_list to the provider_networks dictionary. Multiple interfaces can be defined by comma separation.
Neutron SR-IOV can now be optionally deployed and configured. For details about what the service is and what it provides, see the SR-IOV Installation Guide.
Added new variable tempest_volume_backend_names and updated templates/tempest.conf.j2 to point backend_names at this variable.
Known Issues¶
There is currently an Ansible bug in regard to HOSTNAME. If the host's .bashrc holds a variable named HOSTNAME, the container where the lxc_container module attaches will inherit this variable and potentially set the wrong $HOSTNAME. See the Ansible fix, which will be released in Ansible version 2.3.
Upgrade Notes¶
Gnocchi service endpoint variables were not named correctly. Renamed variables to be consistent with other roles.
Deprecation Notes¶
Removed tempest_volume_backend1_name and tempest_volume_backend2_name since backend1_name and backend2_name were removed from tempest in commit 27905cc (merged 26/04/2016).
14.1.0¶
New Features¶
It’s now possible to change the behavior of DISALLOW_IFRAME_EMBED by defining the variable horizon_disallow_iframe_embed in the user variables.
Bug Fixes¶
Metal hosts were being inserted into the lxc_hosts group, even if they had no containers (Bug 1660996). This is now corrected for newly configured hosts. In addition, any hosts that did not belong in lxc_hosts will be removed on the next inventory run or playbook call.
Other Notes¶
From now on, external repo management (in use for RDO/UCA for example) will be done inside the pip-install role, not in the repo_build role.
Ubuntu Cloud Archive (UCA) was installed by default on repo, nova, and neutron nodes, but not on the other nodes. From now on, we are using UCA everywhere to avoid dependency issues (like having virtualenv built with incompatible versions of python-cryptography). The same reasoning applies to CentOS and RDO packages.
14.0.8¶
New Features¶
The security-hardening playbook hosts target can now be filtered using the security_host_group variable.
Upgrade Notes¶
The global override cinder_nfs_client is replaced in favor of fully supporting a multi-backend configuration via the cinder_backends stanza.
Deprecation Notes¶
The global override cinder_nfs_client is replaced in favor of fully supporting a multi-backend configuration via the cinder_backends stanza.
Bug Fixes¶
Systems using systemd (like Ubuntu Xenial) were incorrectly limited to a low number of open files. This was causing issues when restarting galera. A deployer can still define the maximum number of open files with the variable galera_file_limits (defaults to 65536).
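A one-line user_variables.yml sketch for the limit described above (the value shown is an illustrative assumption):

```yaml
# user_variables.yml -- sketch: raise the open-file limit for the
# galera servers above the 65536 default (value is illustrative).
galera_file_limits: 131072
```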
Other Notes¶
The limits.conf file for galera servers will now be deployed under /etc/security/limits.d/99-limits.conf. This is being done to ensure our changes do not clobber existing settings within the system's default /etc/security/limits.conf file when the file is templated.
14.0.7¶
New Features¶
It is now possible to customise the location of the configuration file source for the All-In-One (AIO) bootstrap process using the bootstrap_host_aio_config_path variable.
It is now possible to customise the location of the scripts used in the All-In-One (AIO) bootstrap process using the bootstrap_host_aio_script_path variable.
It is now possible to customise the name of the user_variables.yml file created by the All-In-One (AIO) bootstrap process using the bootstrap_host_user_variables_filename variable.
It is now possible to customise the name of the user_secrets.yml file created by the All-In-One (AIO) bootstrap process using the bootstrap_host_user_secrets_filename variable.
The filename of the apt source for the Ubuntu Cloud Archive can now be defined with the variable uca_apt_source_list_filename.
The filename of the apt source for the Ubuntu Cloud Archive used by the ceph client can now be defined by giving a filename in the uca part of the ceph_apt_repos dict.
The filename of the apt source for the Ubuntu Cloud Archive can now be defined with the variable uca_apt_source_list_filename.
The filename of the apt/yum source can now be defined with the variable mariadb_repo_filename.
The filename of the apt source can now be defined with the variable filename inside the dicts galera_repo and galera_percona_xtrabackup_repo.
The filename of the apt source for the haproxy PPA can now be defined with the filename section of the haproxy_repo dict.
The filename of the apt source for the Ubuntu Cloud Archive can now be defined with the variable uca_apt_source_list_filename.
The rabbitmq_server role now supports disabling listeners that do not use TLS. Deployers can override the rabbitmq_disable_non_tls_listeners variable, setting a value of True if they wish to enable this feature.
Additional volume-types can be created by defining a list named extra_volume_types in the desired backend of the cinder_backends variable(s).
You can specify the galera_package_arch variable to force a specific architecture when installing percona and qpress packages. This will be automatically calculated based on the architecture of the galera_server host. Acceptable values are x86_64 for Ubuntu-14.04, Ubuntu-16.04 and RHEL 7, and ppc64le for Ubuntu-16.04.
Deployers can now define the variable cinder_qos_specs to create QoS specs and assign those specs to desired cinder volume types.
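A hypothetical sketch of what such a definition might look like in user_variables.yml; the key names and values below are assumptions for illustration only, so check the os_cinder role defaults for the authoritative structure:

```yaml
# user_variables.yml -- hypothetical structure, illustrative only.
cinder_qos_specs:
  - name: high-iops
    options:
      read_iops_sec: 2000
      write_iops_sec: 1000
    cinder_volume_types:
      - high-iops-volumes
```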
RabbitMQ Server can now be installed in different ways: from a deb file (the default), from a standard repository package, or from an external repository. Current behavior is unchanged. Please define rabbitmq_install_method: distro to use packages provided by your distribution, or rabbitmq_install_method: external_repo to use packages stored in an external repo. In the case external_repo is used, the process will install RabbitMQ from the packages hosted by packagecloud.io, as recommended by RabbitMQ.
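A one-line user_variables.yml sketch for the external-repository option described above:

```yaml
# user_variables.yml -- sketch: install RabbitMQ from the
# packagecloud.io repository instead of the default deb file.
rabbitmq_install_method: external_repo
```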
Known Issues¶
The bootstrap-ansible script may fail with an incompatible requirement when installing OpenStack-Ansible 14.0.6 and before. See https://bugs.launchpad.net/openstack-ansible/+bug/1658948 for more details.
Bug Fixes¶
The percona repository stayed in place even after a change of the variable use_percona_upstream. From now on, the percona repository will not be present unless the deployer opts in via use_percona_upstream. This also fixes the lingering presence of this apt repository after an upgrade from Mitaka.
14.0.6¶
New Features¶
Deployers can set heat_cinder_backups_enabled to enable or disable the cinder backups feature in heat. If heat has cinder backups enabled, but cinder's backup service is disabled, newly built stacks will be undeletable. The heat_cinder_backups_enabled variable is set to false by default.
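A user_variables.yml sketch for the toggle described above (only sensible when cinder's backup service is also enabled, per the warning about undeletable stacks):

```yaml
# user_variables.yml -- sketch: expose cinder backups to heat stacks.
heat_cinder_backups_enabled: true
```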
Bug Fixes¶
Properly distribute client keys to nova hypervisors when extra ceph clusters are being deployed.
Properly remove temporary files used to transfer ceph client keys from the deploy host and hypervisors.
14.0.5¶
New Features¶
The installation of chrony is still enabled by default, but it is now controlled by the security_enable_chrony variable.
If the cinder backup service is enabled with cinder_service_backup_program_enabled: True, then heat will be configured to use the cinder backup service. The heat_cinder_backups_enabled variable will automatically be set to True.
The copy of the /etc/openstack-release file is now optional. To disable the copy of the file, set openstack_distrib_file to no.
The placement of the /etc/openstack-release file can now be changed. Set the variable openstack_distrib_file_path to place it in a different path.
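A user_variables.yml sketch combining the two options above (the alternative path shown is an assumption for illustration):

```yaml
# user_variables.yml -- sketch: move the release file to an assumed
# alternative path, or disable the copy entirely instead.
openstack_distrib_file_path: /etc/openstack-release-info
# openstack_distrib_file: no
```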
The Swift versioned_writes middleware is added to the pipeline by default. Additionally, the allow_versioned_writes setting in the middleware configuration is set to True. This follows the Swift defaults, and enables the use of the X-History-Location metadata header.
Upgrade Notes¶
The variables used to produce the /etc/openstack-release file have been changed in order to improve consistency in the name spacing according to their purpose.

openstack_code_name –> openstack_distrib_code_name
openstack_release –> openstack_distrib_release
Note that the value for openstack_distrib_release will be taken from the variable openstack_release if it is set.
The variable proxy_env_url is now used by the apt-cacher-ng jinja2 template to set up an HTTP/HTTPS proxy if needed.
The variable gnocchi_required_pip_packages was incorrectly named and has been renamed to gnocchi_requires_pip_packages to match the standard across all roles.
Bug Fixes¶
The container_cidr key has been restored to openstack_inventory.json. The fix to remove deleted global override keys mistakenly deleted the container_cidr key. This was used by downstream consumers, and cannot be reconstructed from other information inside the inventory file. Regression tests were also added.
The apt-cacher-ng daemon does not use the proxy server specified in environment variables. The proxy server specified in the proxy_env_url variable is now set inside the apt-cacher-ng configuration file.
Setup for the PowerVM driver was not properly configuring the system to support RMC configuration for client instances. This fix introduces an interface template for PowerVM that properly supports mixed IPV4/IPV6 deploys and adds documentation for PowerVM RMC. For more information see bug 1643988.
14.0.3¶
Upgrade Notes¶
The variables tempest_requirements_git_repo and tempest_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables swift_requirements_git_repo and swift_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables neutron_requirements_git_repo and neutron_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables sahara_requirements_git_repo and sahara_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables nova_requirements_git_repo and nova_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables nova_lxd_requirements_git_repo and nova_lxd_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.
Bug Fixes¶
SSLv3 is now disabled in the haproxy daemon configuration by default.
Setting the haproxy_bind list on a service is now used as an override to the other VIPs defined in the environment. Previously it was being treated as an append to the other VIPs so there was no path to override the VIP binds for a service. For example, haproxy_bind could be used to bind a service to the internal VIP only.
14.0.2¶
New Features¶
Deployers can now define the override cinder_rpc_executor_thread_pool_size, which defaults to 64.

Deployers can now define the override cinder_rpc_response_timeout, which defaults to 60.
Container boot ordering has been implemented on container types where it would be beneficial. This change ensures that stateful systems running within a container are started ahead of non-stateful systems. While this change has no impact on a running deployment, it will assist with faster recovery should any node hosting containers go down or simply need to be restarted.
A new task has been added to the "os-lxc-container-setup.yml" common-tasks file. This new task will allow additional configuration to be added without having to restart the container. This change is helpful in cases where non-impacting config needs to be added or updated on a running container.
IPv6 support has been added for the LXC bridge network. This can be configured using lxc_net6_address, lxc_net6_netmask, and lxc_net6_nat.
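A user_variables.yml sketch for the three variables above; the ULA range and netmask format shown are assumptions, so adjust to your own IPv6 addressing plan:

```yaml
# user_variables.yml -- sketch with an assumed ULA range for the
# LXC bridge; address, netmask format and NAT choice are illustrative.
lxc_net6_address: "fd00:10:0:3::1"
lxc_net6_netmask: "64"
lxc_net6_nat: True
```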
Upgrade Notes¶
The variables horizon_requirements_git_repo and horizon_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables ironic_requirements_git_repo and ironic_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables heat_requirements_git_repo and heat_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables magnum_requirements_git_repo and magnum_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables cinder_requirements_git_repo and cinder_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables gnocchi_requirements_git_repo and gnocchi_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables glance_requirements_git_repo and glance_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables keystone_requirements_git_repo and keystone_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables aodh_requirements_git_repo and aodh_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables rally_requirements_git_repo and rally_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.

The variables ceilometer_requirements_git_repo and ceilometer_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file given by the variable pip_install_upper_constraints instead.
Bug Fixes¶
When a task fails while executing a playbook, the default behaviour for Ansible is to fail for that host without executing any notifiers. This can result in configuration changes being executed, but services not being restarted. OpenStack-Ansible now sets ANSIBLE_FORCE_HANDLERS to True by default to ensure that all notified handlers attempt to execute before stopping the playbook execution.
The URL of NovaLink uses the 'ftp' protocol to provision the apt key. This causes the apt_key module to fail to retrieve the NovaLink GPG public key file. Therefore, the protocol of the URL has been changed to 'http'. For more information, see bug 1637348.
14.0.0¶
New Features¶
LXC containers will now have a proper RFC1034/5 hostname set during post-build tasks. A localhost entry for 127.0.1.1 will be created by converting all of the "_" in the inventory_hostname to "-". Containers will be created with a default domain of openstack.local. This domain name can be customized to meet your deployment needs by setting the option lxc_container_domain.
The option openstack_domain has been added to the openstack_hosts role. This option is used to set up proper hostname entries for all hosts within a given OpenStack deployment.

The openstack_hosts role will set up an RFC1034/5 hostname and create an alias for all hosts in inventory.
Added a new parameter cirros_img_disk_format to support disk formats other than qcow2.
Ceilometer can now use Gnocchi for storage. By default this is disabled. To enable the service, set ceilometer_gnocchi_enabled: yes. See the Gnocchi role documentation for more details.
The os_horizon role now has support for the horizon ironic-ui dashboard. The dashboard may be enabled by setting horizon_enable_ironic_ui to True in /etc/openstack_deploy/user_variables.yml.
Adds support for the horizon ironic-ui dashboard. The dashboard will be automatically enabled if any ironic hosts are defined.
The os_horizon role now has support for the horizon magnum-ui dashboard. The dashboard may be enabled by setting horizon_enable_magnum_ui to True in /etc/openstack_deploy/user_variables.yml.
Adds support for the horizon magnum-ui dashboard. The dashboard will be automatically enabled if any magnum hosts are defined.
The horizon_keystone_admin_roles variable is added to support the OPENSTACK_KEYSTONE_ADMIN_ROLES list in the horizon local_settings.py file.
A new variable has been added to allow a deployer to control the restart of containers via the handler. This new option is lxc_container_allow_restarts and has a default of yes. If a deployer wishes to disable the auto-restart functionality, they can set this value to no and automatic container restarts that are not absolutely required will be disabled.
Experimental support has been added to allow the deployment of the OpenStack Magnum service when hosts are present in the host group magnum-infra_hosts.
Deployers can now blacklist certain Nova extensions by providing a list of such extensions in the horizon_nova_extensions_blacklist variable, for example:

horizon_nova_extensions_blacklist:
  - "SimpleTenantUsage"
The os_nova role can now deploy the nova-lxd hypervisor. This can be achieved by setting nova_virt_type to lxd on a per-host basis in openstack_user_config.yml or on a global basis in user_variables.yml.
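A one-line sketch of the global form described above, as it might appear in user_variables.yml:

```yaml
# user_variables.yml -- sketch: switch every compute host to the
# nova-lxd hypervisor globally.
nova_virt_type: lxd
```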
The os_nova role can now deploy a custom /etc/libvirt/qemu.conf file by defining qemu_conf_dict.
The role now enables auditing during early boot to comply with the requirements in V-38438. By default, the GRUB configuration variables in /etc/default/grub.d/ will be updated and the active grub.cfg will be updated.

Deployers can opt out of the change entirely by setting a variable:

security_enable_audit_during_boot: no

Deployers may opt in for the change without automatically updating the active grub.cfg file by setting the following Ansible variables:

security_enable_audit_during_boot: yes
security_enable_grub_update: no
A task was added to disable secure ICMP redirects per the requirements in V-38526. This change can cause problems in some environments, so it is disabled by default. Deployers can enable the task (which disables secure ICMP redirects) by setting security_disable_icmpv4_redirects_secure to yes.
A new task was added to disable ICMPv6 redirects per the requirements in V-38548. However, since this change can cause problems in running OpenStack environments, it is disabled by default. Deployers who wish to enable this task (and disable ICMPv6 redirects) should set security_disable_icmpv6_redirects to yes.
AIDE is configured to skip the entire /var directory when it does the database initialization and when it performs checks. This reduces disk I/O and allows these jobs to complete faster. This also allows the initialization to become a blocking process, and Ansible will wait for the initialization to complete prior to running the next task.
In order to reduce the time taken for fact gathering, the default subset gathered has been reduced to a smaller set than the Ansible default. This may be changed by the deployer by setting the ANSIBLE_GATHER_SUBSET variable in the bash environment prior to executing any Ansible commands.
A new option has been added to bootstrap-ansible.sh to set the role fetch mode. The environment variable ANSIBLE_ROLE_FETCH_MODE sets how role dependencies are resolved.
The auditd rules template included a rule that audited changes to the AppArmor policies, but SELinux policy changes were not being audited. Any changes to SELinux policies in /etc/selinux are now logged by auditd.
The container cache preparation process now allows copy-on-write to be set as the lxc_container_backing_method when the lxc_container_backing_store is set to lvm. When this is set, a base container will be created using a name of the form <linux-distribution>-<distribution-release>-<host-cpu-architecture>. The container will be stopped as it is not used for anything except to be a backing store for all other containers, which will be based on a snapshot of the base container.
When using copy-on-write backing stores for containers, the base container name may be set using the variable lxc_container_base_name, which defaults to <linux-distribution>-<distribution-release>-<host-cpu-architecture>.
The container cache preparation process now allows overlayfs to be set as the lxc_container_backing_store. When this is set, a base container will be created using a name of the form <linux-distribution>-<distribution-release>-<host-cpu-architecture>. The container will be stopped as it is not used for anything except to be a backing store for all other containers, which will be based on a snapshot of the base container. The overlayfs backing store is not recommended for production unless the host kernel version is 3.18 or higher.
Containers will now bind mount all logs to the physical host machine in the "/openstack/log/{{ inventory_hostname }}" location. This change will ensure containers using a block-backed file system (lvm, zfs, btrfs) do not run into issues with full file systems due to logging.
Added new variable tempest_img_name.
Added new variable tempest_img_url. This variable replaces cirros_tgz_url and cirros_img_url.
Added new variable tempest_image_file. This variable replaces the hard-coded value for the img_file setting in tempest.conf.j2. This will allow users to specify images other than cirros.
Added new variable tempest_img_disk_format. This variable replaces cirros_img_disk_format.
The rsyslog_server role now has support for CentOS 7.
Support has been added to install the ceph_client packages and dependencies from Ceph.com, the Ubuntu Cloud Archive (UCA), or the operating system's default repository.
The ceph_pkg_source variable controls the install source for the Ceph packages. Valid values include:

ceph: This option installs Ceph from a ceph.com repo. Additional variables to adjust items such as the Ceph release and regional download mirror can be found in the variables files.

uca: This option installs Ceph from the Ubuntu Cloud Archive. Additional variables to adjust items such as the OpenStack/Ceph release can be found in the variables files.

distro: This option installs Ceph from the operating system's default repository and, unlike the other options, does not attempt to manage package keys or add additional package repositories.
The pip_install role can now configure pip to be locked down to the repository built by OpenStack-Ansible. To enable the lockdown configuration, deployers may set pip_lock_to_internal_repo to true in /etc/openstack_deploy/user_variables.yml.
The dynamic_inventory.py file now takes a new argument, --check, which will run the inventory build without writing any files to the file system. This is useful for checking to make sure your configuration does not contain known errors prior to running Ansible commands.
The ability to support MultiStrOps has been added to the config_template action plugin. This change updates the parser to use the set() type to determine if values within a given key are to be rendered as MultiStrOps. If an override is used in an INI config file, the set type is defined using the standard YAML construct of "?" as the item marker.

# Example Override Entries
Section:
  typical_list_things:
    - 1
    - 2
  multistrops_things:
    ? a
    ? b

# Example Rendered Config:
[Section]
typical_list_things = 1,2
multistrops_things = a
multistrops_things = b
Although the STIG requires martian packets to be logged, the logging is now disabled by default. The logs can quickly fill up a syslog server or make a physical console unusable.

Deployers that need this logging enabled will need to set the following Ansible variable:

security_sysctl_enable_martian_logging: yes
The rabbitmq_server role now supports a configurable inventory host group. Deployers can override the rabbitmq_host_group variable if they wish to use the role to create additional RabbitMQ clusters on a custom host group.
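As a sketch, a deployer could point the role at a custom group; the group name below is purely illustrative:

```yaml
# user_variables.yml (group name is an example, not a default)
rabbitmq_host_group: telemetry_rabbit_hosts
```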
The lxc-container-create role now consumes the variable lxc_container_bind_mounts, which should contain a list of bind mounts to apply to a newly created container. The appropriate host and container directories will be created and the configuration applied to the container config. This feature is designed to be used in group_vars to ensure that containers are fully prepared at the time they are created, thus cutting down the number of times containers are restarted during deployments and upgrades.
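A hypothetical group_vars entry might look like the following; the key names are assumptions and should be confirmed against the role defaults:

```yaml
# group_vars sketch (key names assumed, not taken from the role)
lxc_container_bind_mounts:
  - host_directory: "/openstack/backup"
    container_directory: "/var/backup"
```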
The lxc-container-create role now consumes the variable lxc_container_config_list, which should contain a list of the entries to be added to the LXC container config file when the container is created. This feature is designed to be used in group_vars to ensure that containers are fully prepared at the time they are created, thus cutting down the number of times containers are restarted during deployments and upgrades.
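A sketch of such a list in group_vars; the entry shown is an illustrative raw LXC config line, not a recommended setting:

```yaml
lxc_container_config_list:
  - "lxc.aa_profile=unconfined"
```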
The lxc-container-create role now consumes the variable lxc_container_commands, which should contain any shell commands to be executed in a newly created container. This feature is designed to be used in group_vars to ensure that containers are fully prepared at the time they are created, thus cutting down the number of times containers are restarted during deployments and upgrades.
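A sketch in group_vars, assuming the variable accepts a block of shell commands to run inside the new container; the command shown is illustrative only:

```yaml
lxc_container_commands: |
  echo "container prepared" > /root/prepared.txt
```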
The container creation process now allows copy-on-write to be set as the lxc_container_backing_method when the lxc_container_backing_store is set to lvm. When this is set, a snapshot of the base container will be used to build the containers.
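A minimal sketch combining the two variables:

```yaml
# user_variables.yml
lxc_container_backing_store: lvm
lxc_container_backing_method: copy-on-write
```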
The container creation process now allows overlayfs to be set as the lxc_container_backing_store. When this is set, a snapshot of the base container will be used to build the containers. The overlayfs backing store is not recommended for production use unless the host kernel version is 3.18 or higher.
LXC containers will now generate a fixed MAC address on all network interfaces when the option lxc_container_fixed_mac is set to true. This feature was implemented to resolve issues with dynamic MAC addresses in containers, generally experienced at scale with network-intensive services.
All of the database and database user creation tasks have been moved from the roles into the playbooks. This allows the roles to be tested independently of the deployed database and also allows the roles to be used independently of infrastructure choices made by the integrated OSA project.
Host security hardening is now applied by default using the openstack-ansible-security role. Deployers can opt out by setting the apply_security_hardening Ansible variable to false. For more information about the role and the changes it makes, refer to the openstack-ansible-security documentation.
If there are swift hosts in the environment, then the value for cinder_service_backup_program_enabled will automatically be set to True. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer's discretion.
If there are swift hosts in the environment, then the value for glance_default_store will automatically be set to swift. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer's discretion.
The os_nova role can now detect a PowerNV environment and set the virtualization type to 'kvm'.
The security role now has tasks that will disable the graphical interface on a server using upstart (Ubuntu 14.04) or systemd (Ubuntu 16.04 and CentOS 7). These changes take effect after a reboot.
Deployers that need a graphical interface will need to set the following Ansible variable:
security_disable_x_windows: no
YAML files used for ceilometer configuration will now allow a deployer to override a given list. If an override is provided that matches an already defined list in one of the ceilometer default YAML files, the entire list will be replaced by the provided override. Previously, nested lists of lists within the default ceilometer configuration files would be extended if a deployer provided an override matching an existing pipeline. Extending the defaults had a high probability of causing undesirable outcomes and was very unpredictable.
An Ansible task was added to disable the rdisc service on CentOS systems if the service is installed on the system. Deployers can opt out of this change by setting security_disable_rdisc to no.
Whether ceilometer should be enabled by default for each service is now dynamically determined based on whether there are any ceilometer hosts/containers deployed. This behaviour can still be overridden by toggling <service>_ceilometer_enabled in /etc/openstack_deploy/user_variables.yml.
The os_neutron role now determines the default configuration for the openvswitch-agent tunnel_types and the presence or absence of the local_ip configuration based on the value of neutron_ml2_drivers_type. Deployers may directly control this configuration by overriding the neutron_tunnel_types variable.
The os_neutron role now configures neutron ML2 to load the l2_population mechanism driver by default based on the value of neutron_l2_population. Deployers may directly control the neutron ML2 mechanism drivers list by overriding the mechanisms variable in the neutron_plugins dictionary.
LBaaSv2 is now enabled by default in all-in-one (AIO) deployments.
The Linux Security Module (LSM) that is appropriate for the Linux distribution in use will be automatically enabled by the security role by default. Deployers can opt out of this change by setting the following Ansible variable:
security_enable_linux_security_module: False
The documentation for STIG V-51337 has more information about how each LSM is enabled along with special notes for SELinux.
An export flag has been added to the inventory-manage.py script. This flag allows exporting of host and network information from an OpenStack-Ansible inventory for import into another system, or an alternate view of the existing data. See the developer docs for more details.
The ceph_extra_confs variable has been expanded to support retrieving additional ceph.conf files and keyrings from multiple Ceph clusters automatically.
Additional libvirt ceph client secrets can be defined to support attaching volumes from different ceph clusters.
A new variable, ceph_extra_confs, may be defined to support deployment of extra Ceph config files. This is useful for cinder deployments that use multiple Ceph clusters as cinder backends.
The py_pkgs lookup plugin now has strict ordering for discovered requirement files. These files are used to add additional requirements to the discovered python packages. The order is defined by the constant REQUIREMENTS_FILE_TYPES, which contains the following entries: test-requirements.txt, dev-requirements.txt, requirements.txt, global-requirements.txt, global-requirement-pins.txt. The items in this list are arranged from lowest to highest priority.
The openstack-ansible-galera_server role will now prevent deployers from changing the galera_cluster_name variable on clusters that already have a value set in a running galera cluster. You can set the new galera_force_change_cluster_name variable to True to force the galera_cluster_name variable to be changed. We recommend setting this by running the galera-install.yml playbook with -e galera_force_change_cluster_name=True to avoid changing the galera_cluster_name variable unintentionally. Use with caution; changing the galera_cluster_name value can cause your cluster to fail, as the nodes will not join if restarted sequentially.
The repo build process is now able to make use of a pre-staged git cache. If the /var/www/repo/openstackgit folder on the repo server is found to contain existing git clones, then they will be updated if they do not already contain the required SHA for the build.
The repo build process is now able to synchronize a git cache from the deployment node to the repo server. The git cache path on the deployment node is set using the variable repo_build_git_cache. If the deployment node hosts the repo container, then the folder will be symlinked into the bind mount for the repo container. If the deployment node does not host the repo container, then the contents of the folder will be synchronised into the repo container.
The os_glance role now supports Ubuntu 16.04 and systemd.
Gnocchi is available for deployment as a metrics storage service. At this time it does not integrate with Aodh or Ceilometer. To configure Aodh or Ceilometer to use Gnocchi as a storage/query API, each must be configured appropriately with the use of overrides as described in the configuration guides for each of these services.
CentOS 7 and Ubuntu 16.04 support have been added to the haproxy role.
The haproxy role installs hatop from source to ensure that the same operator tooling is available across all supported distributions. The download URL for the source can be set using the variable haproxy_hatop_download_url.
Added a boolean variable, haproxy_service_enabled, to the haproxy_service_configs dict to support toggling haproxy endpoints on and off.
Added a new haproxy_extra_services variable which allows extra haproxy endpoints to be added.
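A hypothetical extra endpoint might be declared as below; the entry layout is assumed to mirror the existing haproxy service definitions, and every name and value here is an assumption rather than a value taken from the role:

```yaml
# user_variables.yml sketch (all keys and values are illustrative)
haproxy_extra_services:
  - service:
      haproxy_service_name: example_app
      haproxy_backend_nodes: "{{ groups['example_app_hosts'] | default([]) }}"
      haproxy_port: 8080
      haproxy_balance_type: http
```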
The repo server will now be used as a package manager cache.
The HAProxy role provided by OpenStack-Ansible now terminates SSL using a self-signed certificate by default. While this can be disabled, the inclusion of SSL services on all public endpoints as a default will help make deployments more secure without any additional user interaction. More information on SSL and certificate generation can be found here.
The rabbitmq_server role now supports configuring HiPE compilation of the RabbitMQ server Erlang code. This configuration option may improve server performance for some workloads and hardware. Deployers can override the rabbitmq_hipe_compile variable, setting a value of True if they wish to enable this feature.
Horizon now has the ability to set arbitrary configuration options using the global option horizon_config_overrides in YAML format. The overrides follow the same pattern found within the other OpenStack service overrides. General documentation on overrides can be found here.
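As a sketch of the override pattern, an entry in user_variables.yml might look like the following; the setting shown is only an example, not a recommended value:

```yaml
horizon_config_overrides:
  HORIZON_CONFIG:
    disable_password_reveal: True
```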
The os_horizon role now supports configuration of custom themes. Deployers can use the new horizon_custom_themes and horizon_default_theme variables to configure the dashboard with custom themes and default to a specific theme respectively.
CentOS 7 support has been added to the galera_server role.
Implemented support for Ubuntu 16.04 Xenial. The percona-xtrabackup packages will be installed from distro repositories instead of upstream Percona repositories, due to the lack of available upstream packages at the time this feature was implemented.
A task was added that restricts ICMPv4 redirects to meet the requirements of V-38524 in the STIG. This configuration is disabled by default since it could cause issues with LXC in some environments.
Deployers can enable this configuration by setting an Ansible variable:
security_disable_icmpv4_redirects: yes
The audit rules added by the security role now have key fields that make it easier to link the audit log entry to the audit rule that caused it to appear.
pip can be installed via the deployment host using the new variable pip_offline_install. This can be useful in environments where the containers lack internet connectivity. Please refer to the limited connectivity installation guide for more information.
The env.d directory included with OpenStack-Ansible is now used as the first source for the environment skeleton, and /etc/openstack_deploy/env.d will be used only to override values. Deployers without customizations will no longer need to copy the env.d directory to /etc/openstack_deploy. As a result, the env.d copy operation has been removed from the node bootstrap role.
A new debug flag has been added to dynamic_inventory.py. This should make it easier to understand what's happening with the inventory script, and provide a way to gather output for more detailed bug reports. See the developer docs for more details.
The ironic role now supports Ubuntu 16.04 and systemd.
Experimental support has been added to allow the deployment of the OpenStack Bare Metal Service (Ironic). Details for how to set it up are available in the OpenStack-Ansible Install Guide for Ironic.
To ensure the deployment system remains clean, the Ansible execution environment is contained within a virtual environment. The virtual environment is created at "/opt/ansible-runtime" and the "ansible.*" CLI commands are linked within /usr/local/bin to ensure there is no interruption in the deployer workflow.
There is a new default configuration for keepalived, supporting more than two nodes. In order to make use of the latest stable keepalived version, the variable keepalived_use_latest_stable must be set to True.
The ability to support a login user domain and login project domain has been added to the keystone module.

```yaml
# Example usage
- keystone:
    command: ensure_user
    endpoint: "{{ keystone_admin_endpoint }}"
    login_user: admin
    login_password: admin
    login_project_name: admin
    login_user_domain_name: custom
    login_project_domain_name: custom
    user_name: demo
    password: demo
    project_name: demo
    domain_name: custom
```
The new LBaaS v2 dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:
horizon_enable_neutron_lbaas: True
The LBaaSv2 service provider configuration can now be adjusted with the neutron_lbaasv2_service_provider variable. This allows a deployer to choose to deploy LBaaSv2 with Octavia in a future version.
The config_template action plugin now has a new option to toggle list extension for JSON or YAML formats. The new option is list_extend and is a boolean. The default is True, which maintains the existing API behavior.
The lxc_hosts role can now make use of a primary and secondary gpg keyserver for gpg validation of the downloaded cache. The servers to use can be set with the lxc_image_cache_primary_keyserver and lxc_image_cache_secondary_keyserver variables.
The lxc_container_create role will now build a container based on the distro of the host OS.
The lxc_container_create role now supports Ubuntu 14.04, 16.04, and RHEL/CentOS 7.
The LXC container creation process now has a configurable delay for the task which waits for the container to start. The variable lxc_container_ssh_delay can be set to change the default delay of five seconds.
The lxc_host cache prep has been updated to use the LXC download template. This removes the last remaining dependency the project has on the rpc-trusty-container.tgz image.
The lxc_host role will build the LXC cache using the download template built from images found here. These images are upstream builds from the greater LXC/D community.
The lxc_host role introduces support for CentOS 7 and Ubuntu 16.04 container types.
The inventory script will now dynamically populate the lxc_hosts group based on which machines have container affinities defined. This group is not allowed in user-defined configuration.
Neutron HA router capabilities in Horizon will be enabled automatically if the neutron plugin type is ML2 and the environment has two or more L3 agent nodes.
Horizon now has a boolean variable named horizon_enable_ha_router to enable Neutron HA router management.
Horizon’s IPv6 support is now enabled by default. This allows users to manage subnets with IPv6 addresses within the Horizon interface. Deployers can disable IPv6 support in Horizon by setting the following variable:
horizon_enable_ipv6: False
Please note: Horizon will still display IPv6 addresses in various panels with IPv6 support disabled. However, it will not allow any direct management of IPv6 configuration.
memcached now logs with multiple levels of verbosity, depending on the user variables. Setting debug: True enables maximum verbosity while setting verbose: True logs with an intermediate level.
The openstack-ansible-memcached_server role includes a new override, memcached_connections, which is automatically calculated from the memcached connection limit plus an additional 1k, and is used to configure the OS nofile limit. Without a proper nofile limit configuration, memcached will crash when handling higher numbers of parallel TCP/memcache connections.
The repo build process is now able to support building and synchronizing artifacts for multiple CPU architectures. Build artifacts are now tagged with the appropriate CPU architecture by default, and synchronization of build artifacts from secondary, architecture-specific repo servers back to the primary repo server is supported.
The repo install process is now able to support building and synchronizing artifacts for multiple CPU architectures. To support multiple architectures, one or more repo servers must be created for each CPU architecture in the deployment. When multiple CPU architectures are detected among the repo servers, the repo-discovery process will automatically assign a repo master to perform the build process for each architecture.
CentOS 7 support has been added to the galera_client role.
Whether the Neutron DHCP Agent, Metadata Agent, or LinuxBridge Agent should be enabled is now dynamically determined based on the neutron_plugin_type and the neutron_ml2_mechanism_drivers that are set. This aims to simplify the configuration of Neutron services and eliminate the need for deployers to override the entire neutron_services dict variable to disable these services.
Neutron BGP dynamic routing plugin can now optionally be deployed and configured. Please see OpenStack Networking Guide: BGP dynamic routing for details about what the service is and what it provides.
The Project Calico Neutron networking plugin is now integrated into the deployment. For setup instructions please see the os_neutron role documentation.
A conditional has been added to the _local_ip settings used in neutron_local_ip which removes the hard requirement for an overlay network to be set within a deployment. If no overlay network is set within the deployment, local_ip will be set to the value of ansible_ssh_host.
Deployers can now configure tempest public and private networks by setting the following variables: tempest_private_net_provider_type to either vxlan or vlan, and tempest_public_net_provider_type to flat or vlan. Depending on what the deployer sets these variables to, they may also need to update other variables accordingly; this mainly involves tempest_public_net_physical_type and tempest_public_net_seg_id. Please refer to http://docs.openstack.org/mitaka/networking-guide/intro-basic-networking.html for more neutron networking information.
The Project Calico Neutron networking plugin is now integrated into the os_neutron role. This can be activated using the instructions located in the role documentation.
The os_neutron role will now default to the OVS firewall driver when neutron_plugin_type is ml2.ovs and the host is running Ubuntu 16.04 on PowerVM. To override this default behavior, deployers should define neutron_ml2_conf_ini_overrides and neutron_openvswitch_agent_ini_overrides in user_variables.yml. Example below:

```yaml
neutron_ml2_conf_ini_overrides:
  securitygroup:
    firewall_driver: neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
neutron_openvswitch_agent_ini_overrides:
  securitygroup:
    firewall_driver: iptables_hybrid
```
Neutron VPN as a Service (VPNaaS) can now optionally be deployed and configured. Please see the OpenStack Networking Guide for details about what the service is and what it provides. See the VPNaaS Install Guide for implementation details.
Support for Neutron distributed virtual routing has been added to the os_neutron role. This includes the implementation of the Networking Guide's suggested agent configuration. This feature may be activated by setting neutron_plugin_type: ml2.ovs.dvr in /etc/openstack_deploy/user_variables.yml.
The horizon next generation instance management panels have been enabled by default. This changes horizon to use the upstream defaults instead of the legacy panels. Documentation can be found here.
The nova SSH public key distribution has been made a lot faster, especially when deploying against very large clusters. To support larger clusters the role has moved away from the "authorized_key" module and now generates a script to insert keys that may be missing from the authorized keys file. The script is saved on all nova compute nodes and can be found at /usr/local/bin/openstack-nova-key.sh. If there is ever a need to reinsert keys or fix issues on a given compute node, the script can be executed at any time without directly running the ansible playbooks or roles.
The os_nova role can now detect and support basic deployment of a PowerVM environment. This sets the virtualization type to 'powervm' and installs/updates the PowerVM NovaLink package and nova-powervm driver.
Nova UCA repository support is implemented by default. This allows users to benefit from the updated packages for KVM. The nova_uca_enable variable controls the install source for the KVM packages. By default this value is set to True to make use of the UCA repository. Users can set it to False to disable this behavior.
A new configuration parameter, security_ntp_bind_local_interfaces, was added to the security role to restrict the network interfaces on which chronyd will listen for NTP requests.
The LXC container creation and modification process now supports online network additions. This ensures a container remains online when additional networks are added to a system.
Open vSwitch driver support has been implemented. This includes the implementation of the appropriate Neutron configuration and package installation. This feature may be activated by setting neutron_plugin_type: ml2.ovs in /etc/openstack_deploy/user_variables.yml.
An opportunistic Ansible execution strategy has been implemented. This allows the Ansible linear strategy to skip tasks with conditionals faster by never queuing the task when the conditional is evaluated to be false.
The Ansible SSH plugin has been modified to support running commands within containers without having to directly ssh into them. The change will detect presence of a container. If a container is found the physical host will be used as the SSH target and commands will be run directly. This will improve system reliability and speed while also opening up the possibility for SSH to be disabled from within the container itself.
Added the horizon_apache_custom_log_format tunable to the os-horizon role for changing the CustomLog format. The default is "combined".
Added the keystone_apache_custom_log_format tunable for changing the CustomLog format. The default is "combined".
Apache MPM tunable support has been added to the os-keystone role in order to allow MPM thread tuning. Default values reflect the current Ubuntu default settings:

```yaml
keystone_httpd_mpm_backend: event
keystone_httpd_mpm_start_servers: 2
keystone_httpd_mpm_min_spare_threads: 25
keystone_httpd_mpm_max_spare_threads: 75
keystone_httpd_mpm_thread_limit: 64
keystone_httpd_mpm_thread_child: 25
keystone_httpd_mpm_max_requests: 150
keystone_httpd_mpm_max_conn_child: 0
```
Introduced an option to deploy Keystone under Uwsgi. A new variable, keystone_mod_wsgi_enabled, is introduced to toggle this behavior. The default is true, which continues to deploy with mod_wsgi for Apache. The ports used by Uwsgi for socket and http connections for both the public and admin Keystone services are configurable (see the keystone_uwsgi_ports dictionary variable). Other Uwsgi configuration can be overridden by using the keystone_uwsgi_ini_overrides variable as documented under "Overriding OpenStack configuration defaults" in the OpenStack-Ansible Install Guide. Federation features should be considered experimental with this configuration at this time.
Introduced an option to deploy Keystone behind Nginx. A new variable, keystone_apache_enabled, is introduced to toggle this behavior. The default is true, which continues to deploy with Apache. Additional configuration can be delivered to Nginx through the use of the keystone_nginx_extra_conf list variable. Federation features are not supported with this configuration at this time. Use of this option requires keystone_mod_wsgi_enabled to be set to false, which will deploy Keystone under Uwsgi.
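Based on the note above, deploying Keystone behind Nginx therefore involves disabling both toggles, for example in user_variables.yml (a minimal sketch):

```yaml
# Disable Apache and mod_wsgi so Keystone runs under Uwsgi behind Nginx.
keystone_apache_enabled: false
keystone_mod_wsgi_enabled: false
```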
The os_cinder role now supports Ubuntu 16.04.
CentOS7/RHEL support has been added to the os_cinder role.
CentOS7/RHEL support has been added to the os_glance role.
CentOS7/RHEL support has been added to the os_keystone role.
The os_magnum role now supports deployment on Ubuntu 16.04 using systemd.
The galera_client role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting galera_client_package_state to present.

The ceph_client role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting ceph_client_package_state to present.

The os_ironic role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting ironic_package_state to present.

The os_nova role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting nova_package_state to present.

The memcached_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting memcached_package_state to present.

The os_heat role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting heat_package_state to present.

The rsyslog_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rsyslog_server_package_state to present.

The pip_install role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting pip_install_package_state to present.

The repo_build role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting repo_build_package_state to present.

The os_rally role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rally_package_state to present.

The os_glance role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting glance_package_state to present.

The security role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting security_package_state to present.
A new global option to control all package install states has been implemented. The default action for all distribution package installations is to ensure that the latest package is installed. This may be changed to only verify if the package is present by setting package_state to present.
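A minimal sketch of the global setting; per-role variables such as nova_package_state are assumed to still take precedence if set explicitly:

```yaml
# /etc/openstack_deploy/user_variables.yml
package_state: present
```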
The os_keystone role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting keystone_package_state to present.

The os_cinder role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting cinder_package_state to present.

The os_gnocchi role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting gnocchi_package_state to present.

The os_magnum role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting magnum_package_state to present.

The rsyslog_client role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rsyslog_client_package_state to present.

The os_sahara role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting sahara_package_state to present.

The repo_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting repo_server_package_state to present.

The haproxy_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting haproxy_package_state to present.

The os_aodh role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting aodh_package_state to present.

The openstack_hosts role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting openstack_hosts_package_state to present.

The galera_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting galera_server_package_state to present.

The rabbitmq_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rabbitmq_package_state to present.

The lxc_hosts role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting lxc_hosts_package_state to present.
The os_ceilometer role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting
ceilometer_package_state
topresent
.
The os_swift role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting
swift_package_state
topresent
.
The os_neutron role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting
neutron_package_state
topresent
.
The os_horizon role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting
horizon_package_state
topresent
.
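For example, a deployer who wants runs of several of these roles to leave already-installed packages untouched could pin the relevant state variables in user_variables.yml. This is a sketch; the variable names come from the notes above, and the selection of roles shown is arbitrary:

```yaml
# /etc/openstack_deploy/user_variables.yml
# Only ensure packages are present rather than upgrading to the
# latest available version on every run of the affected roles.
galera_server_package_state: present
rabbitmq_package_state: present
neutron_package_state: present
swift_package_state: present
```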
The PATH environment variable that is configured on the remote system can now be set using the openstack_host_environment_path list variable.
The repo build process now has the ability to store the pip sources within the build archive. This ability is useful when deploying environments that are "multi-architecture", "multi-distro", or "multi-interpreter", where specific pre-built wheels may not be enough to support all of the deployment. To enable the ability to store the python source code within a given release, set the new option repo_build_store_pip_sources to true.
The repo server now has a Package Cache service for distribution packages. To leverage the cache, deployers will need to configure the package manager on all hosts to use the cache as a proxy. If a deployer would prefer to disable this service, the variable repo_pkg_cache_enabled should be set to false.
The rabbitmq_server role now supports deployer override of the RabbitMQ policies applied to the cluster. Deployers can override the rabbitmq_policies variable, providing a list of desired policies.
The RabbitMQ Management UI is now available through HAProxy on port 15672. The default userid is monitoring. This user can be modified by changing the parameter rabbitmq_monitoring_userid in the file user_variables.yml. Please note that ACLs have been added to this HAProxy service by default, such that it may only be accessed by common internal clients. Reference playbooks/vars/configs/haproxy_config.yml.
Added a playbook for deploying Rally in the utility containers.
Our general config options are now stored in an "/usr/local/bin/openstack-ansible.rc" file and will be sourced when the "openstack-ansible" wrapper is invoked. The RC file will read in BASH environment variables, and should any Ansible option be set that overlaps with our defaults, the provided value will be used.
The LBaaSv2 device driver is now set by the Ansible variable neutron_lbaasv2_device_driver. The default is set to use the HaproxyNSDriver, which allows for agent-based load balancers.
The GPG key checks for package verification in V-38476 are now working for Red Hat Enterprise Linux 7 in addition to CentOS 7. The checks only look for GPG keys from Red Hat; any other GPG keys, such as ones imported from the EPEL repository, are skipped.
CentOS 7 support has been added to the rsyslog_client role.
The options of application logrotate configuration files are now configurable. rsyslog_client_log_rotate_options can be used to provide a list of directives, and rsyslog_client_log_rotate_scripts can be used to provide a list of postrotate, prerotate, firstaction, or lastaction scripts.
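As a sketch of how these two lists could be overridden together, assuming plain logrotate directive strings for the options list (the directive values, the item structure of the scripts list, and the script body shown here are illustrative assumptions, not role defaults):

```yaml
# user_variables.yml -- illustrative values only
rsyslog_client_log_rotate_options:
  - weekly
  - rotate 4
  - compress
# Assumed shape: one entry per script type with its shell body.
rsyslog_client_log_rotate_scripts:
  - name: postrotate
    script: "systemctl kill -s HUP rsyslog.service"
```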
Experimental support has been added to allow the deployment of the Sahara data-processing service. To deploy Sahara, hosts should be present in the host group sahara-infra_hosts.
The Sahara dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:
horizon_enable_sahara_ui: True
Tasks were added to search for any device files without a proper SELinux label on CentOS systems. If any of these device labels are found, the playbook execution will stop with an error message.
The repo build process now selectively clones git repositories based on whether each OpenStack service group has any hosts in it. If there are no hosts in the group, the git repo for the service will not be cloned. This behaviour can be optionally changed to force all git repositories to be cloned by setting repo_build_git_selective to no.
The repo build process now selectively builds venvs based on whether each OpenStack service group has any hosts in it. If there are no hosts in the group, the venv will not be built. This behaviour can be optionally changed to force all venvs to be built by setting repo_build_venv_selective to yes.
The repo build process now selectively builds python packages based on whether each OpenStack service group has any hosts in it. If there are no hosts in the group, the list of python packages for the service will not be built. This behaviour can be optionally changed to force all python packages to be built by setting repo_build_wheel_selective to no.
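Taken together, the three selective-build toggles can be overridden as follows if every git repository, wheel, and venv should be built regardless of host group membership (a sketch using the variables and values named in the notes above):

```yaml
# user_variables.yml
repo_build_git_selective: no     # clone all service git repositories
repo_build_wheel_selective: no   # build all python packages
repo_build_venv_selective: yes   # build all venvs
```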
A new variable is supported in the neutron_services dictionary called service_conf_path. This variable enables services to deploy their config templates to paths outside of /etc/neutron by specifying a directory using the new variable.
The ansible-hardening role supports the application of the Red Hat Enterprise Linux 6 STIG configurations to systems running CentOS 7 and Ubuntu 16.04 LTS.
The fallocate_reserve option can now be set (in bytes or as a percentage) for swift by using the swift_fallocate_reserve variable in /etc/openstack_deploy/user_variables.yml. This value is the amount of space to reserve on a disk to prevent a situation where swift is unable to remove objects due to a lack of available disk space to work with. The default value is 1% of the total disk size.
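For instance, to reserve a fixed number of bytes rather than the default 1%, the variable could be overridden like this (the values are illustrative, and the percentage syntax is assumed to follow swift's own fallocate_reserve format):

```yaml
# /etc/openstack_deploy/user_variables.yml
# Reserve 10 GiB per disk for object deletion headroom...
swift_fallocate_reserve: 10737418240
# ...or, alternatively, as a percentage of the disk:
# swift_fallocate_reserve: 2%
```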
The openstack-ansible-os_swift role will now prevent deployers from changing the swift_hash_path_prefix and swift_hash_path_suffix variables on clusters that already have a value set in /etc/swift/swift.conf. You can set the new swift_force_change_hashes variable to True to force the swift_hash_path_ variables to be changed. We recommend setting this by running the os-swift.yml playbook with -e swift_force_change_hashes=True, to avoid changing the swift_hash_path_ variables unintentionally. Use with caution; changing the swift_hash_path_ values causes end-user impact.
The os_swift role has 3 new variables that allow a deployer to change the hard, soft and fs.file-max limits. The hard and soft limits are added to the limits.conf file for the swift system user. The fs.file-max settings are added to storage hosts via kernel tuning. The new options are swift_hard_open_file_limits with a default of 10240, swift_soft_open_file_limits with a default of 4096, and swift_max_file_limits with a default of 24 times the value of swift_hard_open_file_limits.
The pretend_min_part_hours_passed option can now be passed to swift-ring-builder prior to performing a rebalance. This is set by the swift_pretend_min_part_hours_passed boolean variable. The default for this variable is False. We recommend setting this by running the os-swift.yml playbook with -e swift_pretend_min_part_hours_passed=True, to avoid resetting min_part_hours unintentionally on every run. Setting swift_pretend_min_part_hours_passed to True will reset the clock on the last time a rebalance happened, thus circumventing the min_part_hours check. This should only be used with extreme caution. If you run this command and deploy rebalanced rings before a replication pass completes, you may introduce unavailability in your cluster. This has an end-user impact.
While the default python interpreter for swift is cpython, pypy is now an option. This change adds the ability to greatly improve swift performance without core code modifications. These changes have been implemented using the documentation provided by Intel and Swiftstack. Notes about the performance increase can be seen here.
Change the port for devices in the ring by adjusting the port value for services, hosts, or devices. This will not involve a rebalance of the ring.
Changing the port for a device, or group of devices, carries a brief period of downtime to the swift storage services for those devices. The devices will be unavailable during the period between when the storage service restarts after the port update and when the ring updates to match the new port.
Enable the rsync module per object server drive by setting swift_rsync_module_per_drive to True. Set this to configure rsync and swift to utilise individual configuration per drive. This is required when disabling rsync to individual disks, for example in a disk-full scenario.
The os_swift role will now include the swift "staticweb" middleware by default.
The os_swift role now allows the permissions for the log files created by the swift account, container and object servers to be set. The variable is swift_syslog_log_perms and is set to 0644 by default.
Support added to allow deploying on ppc64le architecture using the Ubuntu distributions.
Support has been added to allow the functional tests to pass when deploying on ppc64le architecture using the Ubuntu distributions.
Support for the deployment of Unbound caching DNS resolvers has been added as an optional replacement for /etc/hosts management across all hosts in the environment. To enable the Unbound DNS containers, add unbound_hosts entries to the environment.
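A minimal sketch of what such an entry might look like, assuming the usual openstack_user_config.yml host-group convention (the host name and address here are hypothetical):

```yaml
# /etc/openstack_deploy/openstack_user_config.yml
unbound_hosts:
  infra1:
    ip: 172.29.236.101
```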
The repo_build role now provides the ability to override the upper-constraints applied, which are sourced from OpenStack and from the global-requirements-pins.txt file. The variable repo_build_upper_constraints_overrides can be populated with a list of upper constraints. This list will take the highest precedence in the constraints process, with the exception of the pins set in the git source SHAs.
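The override list takes ordinary pip constraint strings, for example (the package pins shown are hypothetical and only illustrate the format):

```yaml
# user_variables.yml
repo_build_upper_constraints_overrides:
  - "pbr==3.1.1"
  - "requests==2.18.4"
```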
Known Issues¶
Deployments on ppc64le are limited to Ubuntu 16.04 for the Newton release of OpenStack-Ansible.
The variables haproxy_keepalived_(internal|external)_cidr now have a default set to 169.254.(2|1).1/24. This is to prevent Ansible undefined variable warnings. Deployers must set values for these variables for a working haproxy with keepalived environment when using more than one haproxy node.
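With more than one haproxy node, the two CIDR variables should therefore be set explicitly, for example (the addresses below are illustrative placeholders for a deployment's own VIPs):

```yaml
# user_variables.yml
haproxy_keepalived_internal_cidr: "172.29.236.10/22"
haproxy_keepalived_external_cidr: "192.168.100.10/22"
```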
In the latest stable version of keepalived there is a problem with the priority calculation when a deployer has more than five keepalived nodes. The problem causes the whole keepalived cluster to fail to work. To work around this issue it is recommended that deployers limit the number of keepalived nodes to no more than five or that the priority for each node is set as part of the configuration (cf. the haproxy_keepalived_vars_file variable).
Paramiko version 2.0 requires the Python cryptography library. New system packages must be installed for this library. For OpenStack-Ansible versions <12.0.12, <11.2.15, <13.0.2, the system packages must be installed on the deployment host manually by executing apt-get install -y build-essential libssl-dev libffi-dev.
Upgrade Notes¶
LXC containers will now have a proper RFC1034/5 hostname set during post-build tasks. A localhost entry for 127.0.1.1 will be created by converting all of the "_" in the inventory_hostname to "-". Containers will be created with a default domain of openstack.local. This domain name can be customized to meet your deployment needs by setting the option lxc_container_domain.
A new global variable has been created named openstack_domain. This variable has a default value of "openstack.local".
The ca-certificates package has been included in the LXC container build process in order to prevent issues related to trying to connect to public websites which make use of newer certificates than exist in the base CA certificate store.
In order to reduce the time taken for fact gathering, the default subset gathered has been reduced to a smaller set than the Ansible default. This may be changed by the deployer by setting the ANSIBLE_GATHER_SUBSET variable in the bash environment prior to executing any Ansible commands.
The environment variable FORKS is no longer used. The standard Ansible environment variable ANSIBLE_FORKS should be used instead.
The Galera client role now has a dependency on the apt package pinning role.
The variable security_audit_apparmor_changes is now renamed to security_audit_mac_changes and is enabled by default. Setting security_audit_mac_changes to no will disable syscall auditing for any changes to AppArmor policies (in Ubuntu) or SELinux policies (in CentOS).
When upgrading, deployers will need to ensure they have a backup of all logging from within the container prior to running the playbooks. If the logging node is present within the deployment, all logs should already be synced with the logging server and no action is required. As a pre-step, it is recommended that deployers clean up logging directories from within containers prior to running the playbooks. After the playbooks have run, the bind mount will be in effect at "/var/log", which will mount over all previous log files and directories.
Due to a new bind mount at "/var/log", all containers will be restarted. This is a required restart. It is recommended that deployers run the container restarts in serial to not impact production workloads.
The default value of service_credentials/os_endpoint_type within ceilometer's configuration file has been changed to internalURL. This may be overridden through the use of the ceilometer_ceilometer_conf_overrides variable.
The default database collation has changed from utf8_unicode_ci to utf8_general_ci. Existing databases and tables will need to be converted.
The LXC container cache preparation process now copies package repository configuration from the host instead of implementing its own configuration. The following variables are therefore unnecessary and have been removed:
lxc_container_template_main_apt_repo
lxc_container_template_security_apt_repo
lxc_container_template_apt_components
The LXC container cache preparation process now copies DNS resolution configuration from the host instead of implementing its own configuration. The lxc_cache_resolvers variable is therefore unnecessary and has been removed.
The MariaDB wait_timeout setting is decreased to 1h to match the SQL Alchemy pool recycle timeout, in order to prevent unnecessary database session buildups.
The variable repo_server_packages that defines the list of packages required to install a repo server has been replaced by repo_server_distro_packages.
If there are swift hosts in the environment, then the value for cinder_service_backup_program_enabled will automatically be set to True. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer's discretion.
If there are swift hosts in the environment, then the value for glance_default_store will automatically be set to swift. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer's discretion.
The variable security_sysctl_enable_tcp_syncookies has replaced security_sysctl_tcp_syncookies and it is now a boolean instead of an integer. It is still enabled by default, but deployers can disable TCP syncookies by setting the following Ansible variable:
security_sysctl_enable_tcp_syncookies: no
The glance_apt_packages variable has been renamed to glance_distro_packages so that it applies to multiple operating systems.
Within the haproxy role, hatop has been changed from a package installation to a source-based installation. This has been done to ensure that the same operator tooling is available across all supported distributions. The download URL for the source can be set using the variable haproxy_hatop_download_url.
Haproxy has a new backend to support using the repo server nodes as a git server. The new backend is called "repo_git" and uses port "9418". Default ACLs have been created to lock down the port's availability to only internal networks originating from an RFC1918 address.
Haproxy has a new backend to support using the repo server nodes as a package manager cache. The new backend is called "repo_cache" and uses port "3142" and a single active node. All other nodes within the pool are backups and will be promoted if the active node goes down. Default ACLs have been created to lock down the port's availability to only internal networks originating from an RFC1918 address.
SSL termination is assumed enabled for all public endpoints by default. If this is not needed, it can be disabled by setting the openstack_external_ssl option to false and the openstack_service_publicuri_proto to http.
If HAProxy is used as the loadbalancer for a deployment it will generate a self-signed certificate by default. If HAProxy is NOT used, an SSL certificate should be installed on the external loadbalancer. The installation of an SSL certificate on an external load balancer is not covered by the deployment tooling.
In previous releases, connections to Horizon terminated SSL at the Horizon container. While that is still an option, SSL is now assumed to be terminated at the load balancer. If you wish to terminate SSL at the Horizon node, change the horizon_external_ssl option to false.
Public endpoints will need to be updated using the Keystone admin API to support secure endpoints. The Keystone ansible module will not recreate the endpoints automatically. Documentation on the Keystone service catalog can be found here.
Upgrades will not replace entries in the /etc/openstack_deploy/env.d directory, though new versions of OpenStack-Ansible will now use the shipped env.d as a base, which may alter existing deployments.
The variable used to store the mysql password used by the ironic service account has been changed. The following variable:
ironic_galera_password: secrete
has been changed to:
ironic_container_mysql_password: secrete
There is a new default configuration for keepalived. When running the haproxy playbook, the configuration change will cause a keepalived restart unless the deployer has used a custom configuration file. The restart will cause the virtual IP addresses managed by keepalived to be briefly unconfigured, then reconfigured.
A new version of keepalived will be installed on the haproxy nodes if the variable keepalived_use_latest_stable is set to True and more than one haproxy node is configured. The update of the package will cause keepalived to restart and therefore will cause the virtual IP addresses managed by keepalived to be briefly unconfigured, then reconfigured.
A new nova.conf entry, live_migration_uri, has been added. This entry will default to a qemu-ssh:// URI, which uses the ssh keys that have already been distributed between all of the compute hosts.
The lxc_container_create role no longer uses the distro-specific lxc container create template.
The following variable changes have been made in the lxc_host role:
lxc_container_template: Removed because the template option is now contained within the operating system specific variable file loaded at runtime.
lxc_container_template_options: This option was renamed to lxc_container_download_template_options. The deprecation filter was not used because the values provided from this option have been fundamentally changed and old overrides will cause problems.
lxc_container_release: Removed because the image is now tied to the host operating system.
lxc_container_user_name: Removed because the default users are no longer created when the cached image is created.
lxc_container_user_password: Removed because the default users are no longer created when the cached image is created.
lxc_container_template_main_apt_repo: Removed because this option is now being set within the cache creation process and is no longer needed here.
lxc_container_template_security_apt_repo: Removed because this option is now being set within the cache creation process and is no longer needed here.
The lxc_host role no longer uses the distro-specific lxc container create template.
The following variable changes have been made in the lxc_host role:
lxc_container_user_password: Removed because the default lxc container user is no longer created by the lxc container template.
lxc_container_template_options: This option was renamed to lxc_cache_download_template_options. The deprecation filter was not used because the values provided from this option have been fundamentally changed and potentially old overrides will cause problems.
lxc_container_base_delete: Removed because the cache will be refreshed upon role execution.
lxc_cache_validate_certs: Removed because the Ansible get_url module is no longer used.
lxc_container_caches: Removed because the container create process will build a cached image based on the host OS.
LXC package installation and cache preparation will now occur by default only on hosts which will actually implement containers.
The dynamic_inventory script previously set the provider network attributes is_container_address and is_ssh_address to True for the management network regardless of whether a deployer had them configured this way or not. Now, these attributes must be configured by deployers, and the dynamic_inventory script will fail if they are missing or not True.
During upgrades, container and service restarts for the mariadb/galera cluster were being triggered multiple times and causing the cluster to become unstable and often unrecoverable. This situation has been improved immensely, and we now have tight control such that restarts of the galera containers only need to happen once, and are done so in a controlled, predictable and repeatable way.
The memcached log is removed from /var/log/memcached.log and is now stored in the /var/log/memcached folder.
The variable galera_client_apt_packages has been replaced by galera_client_distro_packages.
Whether the Neutron DHCP Agent, Metadata Agent or LinuxBridge Agent should be enabled is now dynamically determined based on the neutron_plugin_type and the neutron_ml2_mechanism_drivers that are set. This aims to simplify the configuration of Neutron services and eliminate the need for deployers to override the entire neutron_services dict variable to disable these services.
Database migration tasks have been added for the dynamic routing neutron plugin.
As described in the Mitaka release notes, Neutron now correctly calculates and advertises the MTU to instances. The default DHCP configuration to advertise an MTU to instances has therefore been removed from the variable neutron_dhcp_config.
As described in the Mitaka release notes, Neutron now correctly calculates and advertises the MTU to instances. As such, the neutron_network_device_mtu variable has been removed and the hard-coded values in the templates for advertise_mtu, path_mtu, and segment_mtu have been removed to allow upstream defaults to operate as intended.
The new host group neutron_openvswitch_agent has been added to the env.d/neutron.yml and env.d/nova.yml environment configuration files in order to support the implementation of Open vSwitch. Deployers must ensure that their environment configuration files are updated to include the above group name. Please see the example implementations in env.d/neutron.yml and env.d/nova.yml.
The variable neutron_agent_mode has been removed from the os_neutron role. The appropriate value for l3_agent.ini is now determined based on the neutron_plugin_type and host group membership.
The default horizon instance launch panels have been changed to the next generation panels. To enable legacy functionality set the following options accordingly:
horizon_launch_instance_legacy: True
horizon_launch_instance_ng: False
A new nova admin endpoint will be registered with the suffix /v2.1/%(tenant_id)s. The nova admin endpoint with the suffix /v2/%(tenant_id)s may be manually removed.
Cleanup tasks are added to remove the nova console git directories /usr/share/novnc and /usr/share/spice-html5, prior to cloning these inside the nova vnc and spice console playbooks. This is necessary to guarantee that local modifications do not break git clone operations, especially during upgrades.
The variable neutron_linuxbridge has been removed as it is no longer used.
The variable neutron_driver_interface has been removed. The appropriate value for neutron.conf is now determined based on the neutron_plugin_type.
The variable neutron_driver_firewall has been removed. The appropriate value for neutron.conf is now determined based on the neutron_plugin_type.
The variable neutron_ml2_mechanism_drivers has been removed. The appropriate value for ml2_conf.ini is now determined based on the neutron_plugin_type.
Installation of glance and its dependent pip packages will now only occur within a Python virtual environment. The glance_venv_bin, glance_venv_enabled, glance_venv_etc_dir, and glance_non_venv_etc_dir variables have been removed.
Installation of gnocchi and its dependent pip packages will now only occur within a Python virtual environment. The gnocchi_venv_bin, gnocchi_venv_enabled, gnocchi_venv_etc_dir, and gnocchi_non_venv_etc_dir variables have been removed.
Installation of heat and its dependent pip packages will now only occur within a Python virtual environment. The heat_venv_bin and heat_venv_enabled variables have been removed.
Installation of horizon and its dependent pip packages will now only occur within a Python virtual environment. The horizon_venv_bin, horizon_venv_enabled, horizon_venv_lib_dir, and horizon_non_venv_lib_dir variables have been removed.
Installation of ironic and its dependent pip packages will now only occur within a Python virtual environment. The ironic_venv_bin and ironic_venv_enabled variables have been removed.
Installation of keystone and its dependent pip packages will now only occur within a Python virtual environment. The keystone_venv_enabled variable has been removed.
The Neutron L3 Agent configuration for the handle_internal_only_routers variable is removed in order to use the Neutron upstream default setting. The current default for handle_internal_only_routers is True, which allows Neutron L3 routers without external networks attached (as discussed in https://bugs.launchpad.net/neutron/+bug/1572390).
Installation of aodh and its dependent pip packages will now only occur within a Python virtual environment. The aodh_venv_enabled and aodh_venv_bin variables have been removed.
Installation of ceilometer and its dependent pip packages will now only occur within a Python virtual environment. The ceilometer_venv_enabled and ceilometer_venv_bin variables have been removed.
Installation of cinder and its dependent pip packages will now only occur within a Python virtual environment. The cinder_venv_enabled and cinder_venv_bin variables have been removed.
Installation of magnum and its dependent pip packages will now only occur within a Python virtual environment. The magnum_venv_bin and magnum_venv_enabled variables have been removed.
Installation of neutron and its dependent pip packages will now only occur within a Python virtual environment. The neutron_venv_enabled, neutron_venv_bin, neutron_non_venv_lib_dir, and neutron_venv_lib_dir variables have been removed.
Installation of nova and its dependent pip packages will now only occur within a Python virtual environment. The nova_venv_enabled and nova_venv_bin variables have been removed.
Installation of rally and its dependent pip packages will now only occur within a Python virtual environment. The rally_venv_bin and rally_venv_enabled variables have been removed.
Installation of sahara and its dependent pip packages will now only occur within a Python virtual environment. The sahara_venv_bin, sahara_venv_enabled, sahara_venv_etc_dir, and sahara_non_venv_etc_dir variables have been removed.
Installation of swift and its dependent pip packages will now only occur within a Python virtual environment. The swift_venv_enabled and swift_venv_bin variables have been removed.
The variable keystone_apt_packages has been renamed to keystone_distro_packages.
The variable keystone_idp_apt_packages has been renamed to keystone_idp_distro_packages.
The variable keystone_sp_apt_packages has been renamed to keystone_sp_distro_packages.
The variable keystone_developer_apt_packages has been renamed to keystone_developer_mode_distro_packages.
The variable glance_apt_packages has been renamed to glance_distro_packages.
The variable horizon_apt_packages has been renamed to horizon_distro_packages.
The variable aodh_apt_packages has been renamed to aodh_distro_packages.
The variable cinder_apt_packages has been renamed to cinder_distro_packages.
The variable cinder_volume_apt_packages has been renamed to cinder_volume_distro_packages.
The variable cinder_lvm_volume_apt_packages has been renamed to cinder_lvm_volume_distro_packages.
The variable ironic_api_apt_packages has been renamed to ironic_api_distro_packages.
The variable ironic_conductor_apt_packages has been renamed to ironic_conductor_distro_packages.
The variable ironic_conductor_standalone_apt_packages has been renamed to ironic_conductor_standalone_distro_packages.
The variable galera_pre_packages has been renamed to galera_server_required_distro_packages.
The variable galera_packages has been renamed to galera_server_mariadb_distro_packages.
The variable haproxy_pre_packages has been renamed to haproxy_required_distro_packages.
The variable haproxy_packages has been renamed to haproxy_distro_packages.
The variable memcached_apt_packages has been renamed to memcached_distro_packages.
The variable neutron_apt_packages has been renamed to neutron_distro_packages.
The variable neutron_lbaas_apt_packages has been renamed to neutron_lbaas_distro_packages.
The variable neutron_vpnaas_apt_packages has been renamed to neutron_vpnaas_distro_packages.
The variable neutron_apt_remove_packages has been renamed to neutron_remove_distro_packages.
The variable heat_apt_packages has been renamed to heat_distro_packages.
The variable ceilometer_apt_packages has been renamed to ceilometer_distro_packages.
The variable ceilometer_developer_mode_apt_packages has been renamed to ceilometer_developer_mode_distro_packages.
The variable swift_apt_packages has been renamed to swift_distro_packages.
The variable lxc_apt_packages has been renamed to lxc_hosts_distro_packages.
The variable openstack_host_apt_packages has been renamed to openstack_host_distro_packages.
The galera_client role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option galera_client_package_state should be set to present.
The ceph_client role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option ceph_client_package_state should be set to present.
The os_ironic role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option ironic_package_state should be set to present.
The os_nova role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option nova_package_state should be set to present.
The memcached_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option memcached_package_state should be set to present.
The os_heat role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option heat_package_state should be set to present.
The rsyslog_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rsyslog_server_package_state should be set to present.
The pip_install role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option pip_install_package_state should be set to present.
The repo_build role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option repo_build_package_state should be set to present.
The os_rally role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rally_package_state should be set to present.
The os_glance role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option glance_package_state should be set to present.
The security role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option security_package_state should be set to present.
All roles always check whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option package_state should be set to present.
The os_keystone role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option keystone_package_state should be set to present.
The os_cinder role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option cinder_package_state should be set to present.
The os_gnocchi role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option gnocchi_package_state should be set to present.
The os_magnum role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option magnum_package_state should be set to present.
The rsyslog_client role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rsyslog_client_package_state should be set to present.
The os_sahara role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option sahara_package_state should be set to present.
The repo_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option repo_server_package_state should be set to present.
The haproxy_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option haproxy_package_state should be set to present.
The os_aodh role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option aodh_package_state should be set to present.
The openstack_hosts role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option openstack_hosts_package_state should be set to present.
The galera_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option galera_server_package_state should be set to present.
The rabbitmq_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rabbitmq_package_state should be set to present.
The lxc_hosts role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option lxc_hosts_package_state should be set to present.
The os_ceilometer role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option ceilometer_package_state should be set to present.
The os_swift role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option swift_package_state should be set to present.
The os_neutron role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option neutron_package_state should be set to present.
The os_horizon role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option horizon_package_state should be set to present.
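As a hedged sketch, these package-state toggles would typically be set in user_variables.yml; only the variables for roles actually deployed need to be overridden, and the exact selection below is illustrative:

```yaml
# user_variables.yml -- illustrative only; choose the *_package_state
# variables relevant to your deployment.
package_state: present          # global toggle honoured by all roles
neutron_package_state: present  # per-role override for os_neutron
nova_package_state: present     # per-role override for os_nova
```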
The variable rsyslog_client_packages has been replaced by rsyslog_client_distro_packages.
The variable rsyslog_server_packages has been replaced by rsyslog_server_distro_packages.
The variable rabbitmq_monitoring_password has been added to user_secrets.yml. If this variable does not exist, the RabbitMQ monitoring user will not be created.
All of the discretionary access control (DAC) auditing is now disabled by default. This reduces the amount of logs generated during deployments and minor upgrades. The following variables are now set to no:

    security_audit_DAC_chmod: no
    security_audit_DAC_chown: no
    security_audit_DAC_lchown: no
    security_audit_DAC_fchmod: no
    security_audit_DAC_fchmodat: no
    security_audit_DAC_fchown: no
    security_audit_DAC_fchownat: no
    security_audit_DAC_fremovexattr: no
    security_audit_DAC_lremovexattr: no
    security_audit_DAC_fsetxattr: no
    security_audit_DAC_lsetxattr: no
    security_audit_DAC_setxattr: no
The container property container_release has been removed as this is automatically set to the same version as the host in the container creation process.
The variable lxc_container_release has been removed from the lxc-container-create.yml playbook as it is no longer consumed by the container creation process.
LBaaSv1 has been removed from the neutron-lbaas project in the Newton release and it has been removed from OpenStack-Ansible as well.
The LVM configuration tasks and lvm.conf template have been removed from the openstack_hosts role since they are no longer needed. All of the LVM configuration is properly handled in the os_cinder role.
In the rsyslog_client role, the variable rsyslog_client_repos has been removed as it is no longer used.
Percona Xtrabackup has been removed from the Galera client role.
The infra_hosts and infra_containers inventory groups have been removed. No containers or services were assigned to these groups exclusively, and the usage of the groups has been supplanted by the shared-infra_* and os-infra_* groups for some time. Deployers who were using the groups should adjust any custom configuration in the env.d directory to assign containers and/or services to other groups.
The variable verbose has been removed. Deployers should rely on the debug var to enable higher levels of memcached logging.
The variable verbose has been removed. Deployers should rely on the debug var to enable higher levels of logging.
The aodh-api init service is removed since aodh-api is deployed as an apache mod_wsgi service.
The ceilometer-api init service is removed since ceilometer-api is deployed as an apache mod_wsgi service.
The database and user creation tasks have been removed from the os_heat role. These tasks have been relocated to the playbooks.
The database and user creation tasks have been removed from the os_nova role. These tasks have been relocated to the playbooks.
The database and user creation tasks have been removed from the os_glance role. These tasks have been relocated to the playbooks.
The database and user creation tasks have been removed from the os_horizon role. These tasks have been relocated to the playbooks.
The database and user creation tasks have been removed from the os_cinder role. These tasks have been relocated to the playbooks.
The database and user creation tasks have been removed from the os_neutron role. These tasks have been relocated to the playbooks.
The Neutron HA tool written by AT&T is no longer enabled by default. This tool provided HA capabilities for networks and routers that were not using the native Neutron L3HA. Because native Neutron L3HA is stable, compatible with the Linux Bridge Agent, and a better means of enabling HA within a deployment, this tool is no longer set up by default. If legacy L3HA is needed within a deployment, the deployer can set neutron_legacy_ha_tool_enabled to true to enable the legacy tooling.
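Re-enabling the legacy tooling is a single override; placing it in user_variables.yml is an assumption about where the deployer keeps overrides:

```yaml
# user_variables.yml
# Opt back in to the AT&T Neutron HA tool (legacy L3HA)
neutron_legacy_ha_tool_enabled: true
```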
The repo_build_apt_packages variable has been renamed. repo_build_distro_packages should be used instead to override packages required to build Python wheels and venvs.
The repo_build role now makes use of Ubuntu Cloud Archive by default. This can be disabled by setting repo_build_uca_enable to False.
New overrides are provided to allow for better customization around logfile retention and rate limiting for UDP/TCP sockets:

    rsyslog_server_logrotation_window: defaults to 14 days
    rsyslog_server_ratelimit_interval: defaults to 0 seconds
    rsyslog_server_ratelimit_burst: defaults to 10000
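These overrides could be tuned as follows; the values shown are illustrative, not recommendations:

```yaml
rsyslog_server_logrotation_window: 28   # keep four weeks of rotated logs
rsyslog_server_ratelimit_interval: 5    # seconds per rate-limit window
rsyslog_server_ratelimit_burst: 20000   # messages allowed per interval
```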
The rsyslog.conf file now uses v7+ style configuration settings.
The swift_fallocate_reserve default value has changed from 10737418240 (10GB) to 1% in order to match the OpenStack swift default setting.
A new option swift_pypy_enabled has been added to enable or disable the pypy interpreter for swift. The default is "false".
A new option swift_pypy_archive has been added to allow a pre-built pypy archive to be downloaded and moved into place to support swift running under pypy. This option is a dictionary and contains the URL and SHA256 as keys.
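Based on the description above, the two options might be combined like this; the URL and checksum are placeholders, not real artifacts:

```yaml
swift_pypy_enabled: true
swift_pypy_archive:
  url: "https://example.com/pypy-linux64.tar.bz2"  # placeholder URL
  sha256: "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder
```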
The swift_max_rsync_connections default value has changed from 2 to 4 in order to match the OpenStack swift documented value.
When upgrading a Swift deployment from Mitaka to Newton it should be noted that the enabled middleware list has changed. In Newton the "staticweb" middleware will be loaded by default. While the change adds a feature, it is non-disruptive during upgrades.
All variables in the security role are now prepended with security_ to avoid collisions with variables in other roles. All deployers who have used the security role in previous releases will need to prepend all security role variables with security_. For example, a deployer could have disabled direct root ssh logins with the following variable:

    ssh_permit_root_login: yes

That variable would become:

    security_ssh_permit_root_login: yes
Ceilometer no longer manages alarm storage when Aodh is enabled. It now redirects alarm-related requests to the Aodh API. This is now auto-enabled when Aodh is deployed.
Overrides for the ceilometer aodh_connection_string will no longer work. Specifying an Aodh connection string in Ceilometer was deprecated within Ceilometer in a prior release, so this option has been removed.
Hosts running LXC on Ubuntu 14.04 will now need to enable the "trusty-backports" repository. The backports repo on Ubuntu 14.04 is now required to ensure LXC is updated to the latest stable version.
The Aodh data migration script should be run to migrate alarm data from MongoDB storage to Galera due to the pending removal of MongoDB support.
Neutron now makes use of Ubuntu Cloud Archive by default. This can be disabled by setting neutron_uca_enable to False.
The utility-all.yml playbook will no longer distribute the deployment host's root user's private ssh key to all utility containers. Deployers who desire this behavior should set the utility_ssh_private_key variable.
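Deployers who want the previous behavior could set the variable explicitly; the key path and use of a file lookup here are assumptions, not the role's documented usage:

```yaml
# user_variables.yml -- hedged example; adjust the key path to your setup
utility_ssh_private_key: "{{ lookup('file', '~/.ssh/id_rsa') }}"
```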
The following variables have been renamed in order to make the variable names neutral for multiple operating systems.
nova_apt_packages -> nova_distro_packages
nova_spice_apt_packages -> nova_spice_distro_packages
nova_novnc_apt_packages -> nova_novnc_distro_packages
nova_compute_kvm_apt_packages -> nova_compute_kvm_distro_packages
Deprecation Notes
Removed cirros_tgz_url and in most places replaced with tempest_img_url.
Removed cirros_img_url and in most places replaced with tempest_img_url.
Removed deprecated variable tempest_compute_image_alt_ssh_user.
Removed deprecated variable tempest_compute_image_ssh_password.
Removed deprecated variable tempest_compute_image_alt_ssh_password.
Renamed cirros_img_disk_format to tempest_img_disk_format.
Downloading and unarchiving a .tar.gz has been removed. The related tempest options ami_img_file, aki_img_file, and ari_img_file have been removed from tempest.conf.j2.
The [boto] section of tempest.conf.j2 has been removed. These tests have been completely removed from tempest for some time.
The openstack_host_apt_packages variable has been deprecated. openstack_host_packages should be used instead to override packages required to install on all OpenStack hosts.
The rabbitmq_apt_packages variable has been deprecated. rabbitmq_dependencies should be used instead to override additional packages to install alongside rabbitmq-server.
Moved the haproxy_service_configs var to haproxy_default_service_configs so that haproxy_service_configs can be modified and added to without overriding the entire default service dict.
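With this split, extra services can be appended without cloning the full default dict. The service name, group, and port below are hypothetical, and the exact set of keys a service entry supports should be checked against the haproxy_server role defaults:

```yaml
haproxy_service_configs:
  - service:
      haproxy_service_name: my_custom_app   # hypothetical service
      haproxy_backend_nodes: "{{ groups['my_app_hosts'] | default([]) }}"
      haproxy_port: 8089
      haproxy_balance_type: http
```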
galera_package_url changed to percona_package_url for clarity
galera_package_sha256 changed to percona_package_sha256 for clarity
galera_package_path changed to percona_package_path for clarity
galera_package_download_validate_certs changed to percona_package_download_validate_certs for clarity
The main function in dynamic_inventory.py now takes named arguments instead of a dictionary. This is to support future code changes that will move construction logic into separate files.
Installation of Ansible on the root system, outside of a virtual environment, will no longer be supported.
The variables galera_client_package_* and galera_client_apt_percona_xtrabackup_* have been removed from the role as Xtrabackup is no longer deployed.
The Neutron HA tool written by AT&T has been deprecated and will be removed in the Ocata release.
Security Issues
A sudoers entry has been added to the repo_servers in order to allow the nginx user to stop and start nginx via the init script. This is implemented in order to ensure that the repo sync process can shut off nginx while synchronising data from the master to the slaves.
A self-signed certificate will now be generated by default when HAProxy is used as a load balancer. This certificate is used to terminate the public endpoint for Horizon and all OpenStack API services.
Horizon disables password autocompletion in the browser by default, but deployers can now enable autocompletion by setting horizon_enable_password_autocomplete to True.
The admin_token_auth middleware presents a potential security risk and will be removed in a future release of keystone. Its use can be removed by setting the keystone_keystone_paste_ini_overrides variable:

    keystone_keystone_paste_ini_overrides:
      pipeline:public_api:
        pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service
      pipeline:admin_api:
        pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension s3_extension admin_service
      pipeline:api_v3:
        pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3
Bug Fixes
This role assumes that there is a network named "public|private" and a subnet named "public|private-subnet". These names are made configurable by the addition of two sets of variables: tempest_public_net_name and tempest_public_subnet_name for public networks, and tempest_private_net_name and tempest_private_subnet_name for private networks. This addresses bug 1588818.
The /run directory is excluded from AIDE checks since the files and directories there are only temporary and often change when services start and stop.
AIDE initialization is now always run on subsequent playbook runs when security_initialize_aide is set to yes. The initialization will be skipped if AIDE isn't installed or if the AIDE database already exists. See bug 1616281 for more details.
Added architecture-specific locations for percona-xtrabackup and qpress, with alternate locations provided for ppc64el due to package unavailability from the current provider.
The role previously did not restart the audit daemon after generating a new rules file. The bug has been fixed and the audit daemon will be restarted after any audit rule changes.
Logging within the container has been bind mounted to the hosts. This resolves issue 1588051 (https://bugs.launchpad.net/openstack-ansible/+bug/1588051).
Removed various deprecated / no longer supported features from tempest.conf.j2. Some variables have been moved to their new sections in the config.
The standard collectstatic and compression process in the os_horizon role now happens after horizon customizations are installed, so that all static resources will be collected and compressed.
LXC containers will now have the ability to use a fixed MAC address on all network interfaces when the option lxc_container_fixed_mac is set to true. This change will assist in resolving a long-standing issue where network-intensive services, such as neutron and rabbitmq, can enter a confused state for long periods of time and require rolling restarts or internal system resets to recover.
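Enabling the fixed-MAC behavior is a single toggle; placing it in user_variables.yml is an assumption about the deployer's override location:

```yaml
# user_variables.yml
lxc_container_fixed_mac: true
```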
The dictionary-based variables in defaults/main.yml are now individual variables. The dictionary-based variables could not be changed as the documentation instructed. Instead it was required to override the entire dictionary. Deployers must use the new variable names to enable or disable the security configuration changes applied by the security role. For more information, see Launchpad Bug 1577944.
Failed access logging is now disabled by default and can be enabled by changing security_audit_failed_access to yes. The rsyslog daemon checks for the existence of log files regularly and this audit rule was triggered very frequently, which led to very large audit logs.
An Ansible task was added to disable the netconsole service on CentOS systems if the service is installed on the system. Deployers can opt out of this change by setting security_disable_netconsole to no.
In order to ensure that the appropriate data is delivered to requesters from the repo servers, the slave repo_server web servers are taken offline during the synchronisation process. This ensures that the right data is always delivered to the requesters through the load balancer.
The pip_install_options variable is now honored during repo building. This variable allows deployers to specify trusted CA certificates by setting the variable to "--cert /etc/ssl/certs/ca-certificates.crt".
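For example, to trust the system CA bundle during repo builds (the bundle path is the Ubuntu default quoted in the note above):

```yaml
# Point pip at the distribution CA bundle during repo builds
pip_install_options: "--cert /etc/ssl/certs/ca-certificates.crt"
```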
The security role previously set the permissions on all audit log files in /var/log/audit to 0400, but this prevents the audit daemon from writing to the active log file. This will prevent auditd from starting or restarting cleanly. The task now removes any permissions that are not allowed by the STIG. Any log files that meet or exceed the STIG requirements will not be modified.
When the security role was run in Ansible's check mode and a tag was provided, the check_mode variable was not being set. Any tasks which depend on that variable would fail. This bug is fixed and the check_mode variable is now set properly on every playbook run.
The security role now handles ssh_config files that contain Match stanzas. A marker is added to the configuration file and any new configuration items will be added below that marker. In addition, the configuration file is validated for each change to the ssh configuration file.
Horizon deployments were broken due to an incorrect hostname setting being placed in the apache ServerName configuration. This caused Horizon startup failure any time debug was disabled.
The way host container groups are named in dynamic_inventory.py has changed from hostname_containers to hostname-host_containers. This prevents failures in the case where container groups share a name with host container groups derived from hostnames. This change fixes https://bugs.launchpad.net/openstack-ansible/+bug/1512883 and https://bugs.launchpad.net/openstack-ansible/+bug/1528953.
The ability to support login user domain and login project domain has been added to the keystone module. This resolves https://bugs.launchpad.net/openstack-ansible/+bug/1574000.

    # Example usage
    - keystone:
        command: ensure_user
        endpoint: "{{ keystone_admin_endpoint }}"
        login_user: admin
        login_password: admin
        login_project_name: admin
        login_user_domain_name: custom
        login_project_domain_name: custom
        user_name: demo
        password: demo
        project_name: demo
        domain_name: custom
LXC package installation and cache preparation will now occur by default only on hosts which will actually implement containers.
When upgrading it is possible for an old neutron-ns-metadata-proxy process to remain running in memory. If this happens, the old version of the process can cause unexpected issues in a production environment. To fix this, a task has been added to the os_neutron role that will execute a process lookup and kill any neutron-ns-metadata-proxy processes that are not running the current release tag. Once the old processes are removed, the running metadata agent will respawn everything needed within 60 seconds.
Assigning multiple IP addresses to the same host name will now result in an inventory error before running any playbooks.
The nova admin endpoint is now correctly registered as /v2.1/%(tenant_id)s instead of /v2/%(tenant_id)s.
The auditd rules for auditing V-38568 (filesystem mounts) were incorrectly labeled in the auditd logs with the key of export-V-38568. They are now correctly logged with the key filesystem_mount-V-38568.
Deleting variable entries from the global_overrides dictionary in openstack_user_config.yml now properly removes those variables from the openstack_inventory.json file. See Bug
The pip_packages_tmp variable has been renamed pip_tmp_packages to avoid unintended processing by the py_pkgs lookup plugin.
The repo_build role now correctly applies OpenStack requirements upper-constraints when building Python wheels. This resolves https://bugs.launchpad.net/openstack-ansible/+bug/1605846.
The check to validate whether an appropriate ssh public key is available to copy into the container cache has been corrected to check the deployment host, not the LXC host.
Static route information for provider networks now must include the cidr and gateway information. If either key is missing, an error will be raised and the dynamic_inventory.py script will halt before any Ansible action is taken. Previously, if either key was missing, the inventory script would continue silently without adding the static route information to the networks. Note that this check does not validate the CIDR or gateway values, just that the values are present.
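A hedged sketch of the kind of presence check described above; the real dynamic_inventory.py implementation differs, and the function and variable names here are illustrative only:

```python
def validate_static_routes(static_routes):
    """Raise ValueError if any route entry lacks 'cidr' or 'gateway'.

    Only the presence of the keys is checked -- the values themselves
    are not validated, matching the behaviour described in the note.
    """
    for route in static_routes:
        missing = [key for key in ("cidr", "gateway") if key not in route]
        if missing:
            raise ValueError(
                "Static route entry %r is missing required key(s): %s"
                % (route, ", ".join(missing))
            )


# Complete entries pass silently; an incomplete entry halts the run.
routes = [{"cidr": "10.1.0.0/24", "gateway": "10.1.0.1"}]
validate_static_routes(routes)
```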
The repo_build play now correctly evaluates environment variables configured in /etc/environment. This enables deployments in an environment with http proxies.
Previously, the ansible_managed var was being used to insert a header into the swift.conf that contained date/time information. This meant that swift.conf across different nodes did not have the same MD5SUM, causing swift-recon --md5 to break. We now insert a piece of static text instead to resolve this issue.
The XFS filesystem is excluded from the daily mlocate crond job in order to conserve disk IO for large IOPS bursts due to updatedb/mlocate file indexing.
The /var/lib/libvirt/qemu/save directory is now a symlink to {{ nova_system_home_folder }}/save to resolve an issue where the default location used by the libvirt managed save command can result in the root partitions on compute nodes becoming full when nova image-create is run on large instances.
Aodh has deprecated support for NoSQL storage (MongoDB and Cassandra) in Mitaka, with removal scheduled for the O* release. This causes warnings in the logs. The default of using MongoDB storage for Aodh is replaced with the use of Galera. Continued use of MongoDB will require the use of vars to specify a correct aodh_connection_string and add pymongo to the aodh_pip_packages list.
The --compact flag has been removed from xtrabackup options. This had been shown to cause crashes in some SST situations.
Other Notes
nova_libvirt_live_migration_flag is now phased out. Please create a nova configuration override with live_migration_tunnelled: True if you want to force the flag VIR_MIGRATE_TUNNELLED to libvirt. Nova "chooses a sensible default" otherwise.
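A configuration override of the kind mentioned above might look like this; the nova_nova_conf_overrides variable follows the OpenStack-Ansible overrides convention, and placing the option under the [libvirt] section is an assumption about the nova.conf layout:

```yaml
# user_variables.yml -- hedged sketch of a nova.conf override
nova_nova_conf_overrides:
  libvirt:
    live_migration_tunnelled: True
```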
nova_compute_manager is now phased out.
The in-tree ansible.cfg file in the playbooks directory has been removed. This file was making compatibility difficult for deployers who need to change these values. Additionally, this file's very existence forced Ansible to ignore any other config file in either a user's home directory or in the default /etc/ansible directory.
Mariadb version upgrade gate checks removed.
The run-playbooks.sh script has been refactored to run all playbooks using our core tool set and run order. The refactor work updates the old special-case script to a tool that simply runs the integrated playbooks as they've been designed.