Pike Series Release Notes¶
16.0.25¶
Bug Fixes¶
With the release of CentOS 7.6, deployments were breaking and becoming very slow when dbus was restarted in order to pick up some PolicyKit changes. However, those changes were never actually used, so the restarts were happening for no reason. We no longer make any modifications to the systemd-machined configuration or PolicyKit, in order to maintain upstream compatibility.
The ceph-ansible individual role repositories were all removed from GitHub on or around 16 January 2019, causing the bootstrap-ansible script to fail for any new deployments, or during upgrades for environments which had these roles previously. To replace them, the ceph-ansible git repository is used instead. In order to prevent clashes between the old and new roles, the existing /etc/ansible/roles/ceph* folders should be removed prior to running scripts/bootstrap-ansible.sh during the minor upgrade process.
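The cleanup described above can be sketched as follows. The paths match the note, but verify them against your own environment before deleting anything:

```shell
# Remove the stale galaxy-era ceph role checkouts so the new
# ceph-ansible repository can be cloned cleanly on the next bootstrap.
rm -rf /etc/ansible/roles/ceph*

# Then re-run the bootstrap script from your OpenStack-Ansible checkout,
# for example:
# cd /opt/openstack-ansible && scripts/bootstrap-ansible.sh
```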
16.0.24¶
New Features¶
Horizon has, since OSA’s inception, been deployed with HTTPS access enabled, and has had no way to turn it off. Some use-cases may want to access via HTTP instead, so this patch enables the following.
Listen via HTTPS on a load balancer, but via HTTP on the horizon host and have the load balancer forward the correct headers. It will do this by default in the integrated build due to the presence of the load balancer, so the current behaviour is retained.
Enable HTTPS on the horizon host without a load balancer. This is the role’s default behaviour which matches what it always has been.
Disable HTTPS entirely by setting haproxy_ssl: no (which will also disable HTTPS on haproxy). This setting is inherited by the new horizon_enable_ssl variable by default. This is a new option.
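As a sketch, the all-HTTP case described above could be configured in user_variables.yml using the variable names given in this note:

```yaml
# Disable TLS termination on haproxy; horizon_enable_ssl inherits this
# value by default, so horizon will also listen via plain HTTP.
haproxy_ssl: no

# Or keep haproxy TLS but serve plain HTTP on the horizon host only:
# horizon_enable_ssl: no
```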
16.0.19¶
New Features¶
It is now possible to specify a list of tests for tempest to blacklist when executing, using the tempest_test_blacklist list variable.
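A minimal override might look like the following; the test names and regex shown are illustrative only:

```yaml
tempest_test_blacklist:
  # Skip an individual test module (example name):
  - tempest.api.compute.servers.test_server_personality
  # Or skip anything matching a regex (example pattern):
  - (?i).*volume.*backup.*
```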
16.0.16¶
Bug Fixes¶
The conditional that determines whether the sso_callback_template.html file is deployed for federated deployments has been fixed.
16.0.15¶
New Features¶
The option rabbitmq_erlang_version_spec has been added, allowing deployers to set the version of Erlang used on a given installation.
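For example, in user_variables.yml; the version string below is illustrative only, and the exact spec syntax depends on your distribution's package manager:

```yaml
# Pin Erlang to a specific version; use a version string that exists
# in your configured repositories (this value is an example).
rabbitmq_erlang_version_spec: "1:19.3-1"
```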
Known Issues¶
With the release of CentOS 7.5, all pike releases are broken due to a mismatch between the libvirt-python library version specified by the OpenStack community and the version provided in CentOS 7.5. As such, OSA is unable to build the appropriate python library for libvirt. The only recourse for this is to upgrade the environment to the latest queens release.
Deprecation Notes¶
The use of the apt_package_pinning role as a meta dependency has been removed from the rabbitmq_server role. While the package pinning role is still used, it will now only be executed when the apt task file is executed.
The variable nova_compute_pip_packages is no longer used and has been removed.
Bug Fixes¶
The neutron-fwaas-dashboard is now downloaded and installed within the os_horizon role when enabled by the horizon_enable_neutron_fwaas variable.
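As a sketch, enabling the dashboard in user_variables.yml:

```yaml
# Enable the FWaaS panel; the os_horizon role will then download and
# install the neutron-fwaas-dashboard as part of the run.
horizon_enable_neutron_fwaas: True
```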
In order to prevent further issues with a libvirt and python-libvirt version mismatch, KVM-based compute nodes will now use the distribution package python library for libvirt. This should resolve the issue seen with pike builds on CentOS 7.5.
16.0.14¶
Known Issues¶
All OSA releases earlier than 17.0.5, 16.0.4, and 15.1.22 will fail to build the rally venv due to the release of the new cmd2-0.9.0 python library. Deployers are encouraged to update to the latest OSA release which pins to an appropriate version which is compatible with python2.
Recently the spice-html5 git repository was entirely moved from https://github.com/SPICE/spice-html5 to https://gitlab.freedesktop.org/spice/spice-html5. This results in a failure in the git clone stage of the repo-build.yml playbook for OSA pike releases earlier than 16.0.14. To fix the issue, deployers may upgrade to the most recent release, or may implement the following override in user_variables.yml.

nova_spicehtml5_git_repo: https://gitlab.freedesktop.org/spice/spice-html5.git
Upgrade Notes¶
The distribution package lookup and data output has been removed from the py_pkgs lookup so that the repo-build use of py_pkgs has reduced output and the lookup is purpose specific for python packages only.
Bug Fixes¶
Newer releases of CentOS ship a version of libnss that depends on the existence of /dev/random and /dev/urandom in the operating system in order to run. This causes a problem during the cache preparation process, which runs inside a chroot that does not contain these devices, resulting in errors with the following message:

error: Failed to initialize NSS library

This has been resolved by introducing /dev/random and /dev/urandom inside the chroot-ed environment.
16.0.13¶
Known Issues¶
In the lxc_hosts role execution, we make use of the images produced on a daily basis by images.linuxcontainers.org. Recent changes in the way those images are produced have resulted in changes to the default /etc/resolv.conf in that default image. As such, the cache preparation fails when executed. For pike releases prior to 16.0.13 the workaround to get past the error is to add the following to the /etc/openstack_deploy/user_variables.yml file.

lxc_cache_prep_pre_commands: "rm -f /etc/resolv.conf || true"
lxc_cache_prep_post_commands: "ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf -f"
16.0.12¶
New Features¶
When venvwithindex=True and ignorerequirements=True are both specified in tempest_git_install_fragments (as was previously the default), this results in tempest being installed from PyPI without any constraints being applied. This could result in the version of tempest being installed in the integrated build being different than the version being installed in the independent role tests. Going forward, we remove the tempest_git_* overrides in playbooks/defaults/repo_packages/openstack_testing.yml so that the integrated build installs tempest from PyPI, but with appropriate constraints applied.
16.0.11¶
New Features¶
Added the ability to configure vendor data for Nova in order to be able to push things via the metadata service or config drive.
The default variable nova_default_schedule_zone was previously set to nova by default. This default has been removed to allow the default to be set by the nova code instead. Deployers wishing to maintain the default availability zone of nova must now set the variable as a user_variables.yml or group_vars override.
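To retain the previous behaviour, the override could be sketched in user_variables.yml as:

```yaml
# Restore the previous default of scheduling instances into the
# "nova" availability zone.
nova_default_schedule_zone: nova
```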
16.0.10¶
New Features¶
An option to disable the machinectl quota system has been added. The variable lxc_host_machine_quota_disabled is a Boolean with a default of true. When this option is set to true it will disable the machinectl quota system.
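As a sketch in user_variables.yml (note that, per the text above, the default already disables the quota system):

```yaml
# true (the default) disables the machinectl quota system;
# set to false to keep quotas enabled.
lxc_host_machine_quota_disabled: true
```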
Upgrade Notes¶
Users should purge the ‘ntp’ package from their hosts if ceph-ansible is enabled. ceph-ansible previously was configured to install ntp by default which conflicts with the OSA ansible-hardening role chrony service.
The variable lxc_host_machine_volume_size now accepts any valid size modifier acceptable by truncate -s and machinectl set-limit. Prior to this change the option assumed an integer was set for some value in gigabytes. All acceptable values can be seen within the documentation for machinectl.
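For example, using a size modifier rather than a bare integer (the value shown is illustrative):

```yaml
# Any suffix accepted by "truncate -s" / "machinectl set-limit" is
# valid now; previously only a bare integer (gigabytes) worked.
lxc_host_machine_volume_size: 64G
```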
Bug Fixes¶
ceph-ansible is no longer configured to install ntp by default, which creates a conflict with OSA’s ansible-hardening role that is used to implement ntp using ‘chrony’.
Other Notes¶
The variable lxc_host_machine_volume_size is used to set the size of the default sparse file as well as define a limit within the machinectl quota system. When the machinectl quota system is enabled, deployers should appropriately set this value to the size of the container volume, even when not using a sparse file.
The container image cache within machinectl has been set to “64G” by default.
16.0.9¶
New Features¶
When using Glance and NFS, the NFS mount point will now be managed using a systemd mount unit file. This change ensures the deployment of glance is not making potentially system-impacting changes to /etc/fstab and modernizes how we deploy glance when using shared storage.
New variables have been added to the glance role allowing a deployer to set the UID and GID of the glance user. The new options are glance_system_user_uid and glance_system_group_uid. These options are useful when deploying glance with shared storage as the back-end for images and will only set the UID and GID of the glance user when defined.
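A sketch using the variable names given above; the numeric IDs are illustrative, and should match the ownership on your shared-storage export:

```yaml
# Example UID/GID for the glance user so file ownership matches the
# shared-storage back-end (values here are illustrative).
glance_system_user_uid: 10042
glance_system_group_uid: 10042
```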
Deprecation Notes¶
The galera_client_opensuse_mirror_obs_url variable has been removed since the OBS repository is no longer used to install the MariaDB packages.
Other Notes¶
The max_fail_percentage playbook option has been used with the default playbooks since the first release of the playbooks back in Icehouse. While the intention was to allow large-scale deployments to succeed in cases where a single node fails due to transient issues, this option has produced more problems than it solves. If a failure occurs that is transient in nature but is under the set failure percentage, the playbook will report a success, which can cause silent failures depending on where the failure happened. If a deployer finds themselves in this situation, the problems are then compounded because the tools will report there are no known issues.

To ensure deployers have the best deployment experience and the most accurate information, a change has been made to remove the max_fail_percentage option from all of the default playbooks. The removal of this option has the side effect of requiring the deployer to skip specific hosts should one need to be omitted from a run, but has the benefit of eliminating silent, hard to track down failures. To skip a failing host for a given playbook run, use the --limit '!$HOSTNAME' CLI switch for the specific run. Once the issues have been resolved for the failing host, rerun the specific playbook without the --limit option to ensure everything is in sync.
16.0.8¶
Known Issues¶
All the pike versions 16.0.7 and before use mariadb-server 10.1 with no minor version frozen. The latest version, 10.1.31, has presented problems with the state transfer for multi-node environments when the variable galera_wsrep_sst_method is set to xtrabackup-v2 (the default value). This causes a new cluster to fail, or an existing cluster to be unable to transfer state when a node is rebooted.

To work around this issue, the recommendation is to set the following overrides in /etc/openstack_deploy/user_variables.yml to ensure that the last known good version of MariaDB is used. From 16.0.8 onwards, these values are set as defaults and will be updated from time to time after verifying that the new versions work. As such, setting these overrides is not required for 16.0.8 onwards.

# Use these values for Ubuntu
galera_repo_url: https://downloads.mariadb.com/MariaDB/mariadb-10.1.30/repo/ubuntu
galera_client_repo_url: "{{ galera_repo_url }}"

# Use these overrides for CentOS/RHEL:
galera_repo_url: https://downloads.mariadb.com/MariaDB/mariadb-10.1.30/yum/centos7-amd64/
galera_client_repo_url: "{{ galera_repo_url }}"

# Use these values for SuSE
galera_repo_url: https://downloads.mariadb.com/MariaDB/mariadb-10.1.30/yum/opensuse42-amd64
galera_client_repo_url: "{{ galera_repo_url }}"
The problem has been registered upstream and progress on the issue can be followed there: https://jira.mariadb.org/browse/MDEV-15254
For all pike releases up to 16.0.7, when executing the os-nova-install.yml playbook the nova-novncproxy and nova-spicehtml5proxy services will fail. The workaround to resolve this issue is to restart the services.

cd /opt/rpc-openstack/openstack-ansible/playbooks
# start the service again
# replace nova-novncproxy with nova-spicehtml5proxy when appropriate
ansible nova_console -m service -a 'name=nova-novncproxy state=restarted'
# set the appropriate facts to prevent the playbook trying
# to reload it again when the playbook is run again
ansible nova_console -m ini_file -a 'dest=/etc/ansible/facts.d/openstack_ansible.fact section=nova option=need_service_restart value=False'
This issue has been resolved in the 16.0.8 release.
16.0.7¶
New Features¶
The lxcbr0 bridge now allows NetworkManager to control it, which allows networks to start in the correct order when the system boots. In addition, the NetworkManager-wait-online.service is enabled to ensure that all services that require networking to function, such as keepalived, will only start when network configuration is complete. These changes are only applied if a deployer is actively using NetworkManager in their environment.
HAProxy services that use backend nodes that are not in the Ansible inventory can now be specified manually by setting haproxy_backend_nodes to a list of name and ip_addr settings.
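A sketch of such a list; the node names and addresses are hypothetical, and where the list is consumed depends on how the haproxy service definitions are laid out in your deployment:

```yaml
haproxy_backend_nodes:
  - name: external-node1     # not in the Ansible inventory
    ip_addr: 192.0.2.10
  - name: external-node2
    ip_addr: 192.0.2.11
```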
Other Notes¶
This version is the minimum version necessary to upgrade from Pike to Queens for deployers using ANSIBLE_ROLE_FETCH_MODE=”git-clone”.
16.0.6¶
New Features¶
A new variable has been added to allow a deployer to control the restart of containers from common-tasks/os-lxc-container-setup.yml. This new option is lxc_container_allow_restarts and has a default of true. If a deployer wishes to disable the auto-restart functionality they can set this value to false and automatic container restarts will be disabled. This is a complement to the same option already present in the lxc_container_create role. This option is useful to avoid uncoordinated restarts of galera or rabbitmq containers if the LXC container configuration changes in a way that requires a restart.
The galera cluster now supports cluster health checks over HTTP using port 9200. The new cluster check ensures a node is healthy by running a simple query against the wsrep sync status using the monitoring user. This change provides a more robust cluster check, ensuring we have the most fault-tolerant galera cluster possible.
The Galera healthcheck has been improved, and relies on an xinetd service. By default, the service is inaccessible (filtered with the no_access directive). You can override the directive by setting any valid xinetd value for galera_monitoring_allowed_source.
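As a sketch, permitting the check from specific sources; the addresses are illustrative:

```yaml
# Any value valid for the corresponding xinetd access-control
# directive may be used here (example addresses shown).
galera_monitoring_allowed_source: "192.0.2.0/24 127.0.0.1"
```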
Other Notes¶
CentOS deployments require a special COPR repository for modern LXC packages. The COPR repository is not mirrored at this time and this causes failed gate tests and production deployments.
The role now syncs the LXC packages down from COPR to each host and builds a local LXC package repository in /opt/thm-lxc2.0. This greatly reduces the amount of times that packages must be downloaded from the COPR server during deployments, which will reduce failures until the packages can be hosted with a more reliable source.
In addition, this should speed up playbook runs since yum can check a locally-hosted repository instead of a remote repository with availability and performance challenges.
16.0.5¶
New Features¶
FWaaS V2 has been added to neutron. To enable this service simply add “firewall_v2” to the “neutron_plugin_base” list.
The maximum amount of time to wait until forcibly failing the LXC cache preparation process is now configurable using the lxc_cache_prep_timeout variable. The value is specified in seconds, with the default being 20 minutes.
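For example, keeping the documented default explicitly:

```yaml
# Value is in seconds; 1200 matches the default of 20 minutes.
lxc_cache_prep_timeout: 1200
```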
A new LXC container template has been added which will allow us to better manage containers on the host machines we support. The new template uses the machinectl command to create a container rootfs using the existing cache. This in turn will provide easier management of container images, faster build times, and the ability to instantly clone a container (or a given variant) without impacting a container's state. This new LXC container create template, and the features it provides, will only impact newly created containers, allowing deployers to safely adopt this change in any existing environment.
The tag options when creating an LXC container have been simplified. The two tags now supported by the lxc_container_create role are lxc-{create,config}.
The security_sshd_permit_root_login setting can now be set to change the PermitRootLogin setting in /etc/ssh/sshd_config to any of the possible options. Set security_sshd_permit_root_login to one of without-password, prohibit-password, forced-commands-only, yes or no.
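As a sketch, picking one of the listed values:

```yaml
# Any of the values PermitRootLogin accepts may be used here.
security_sshd_permit_root_login: prohibit-password
```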
Searching for world-writable files is now disabled by default. The search causes delays in playbook runs and it can consume a significant amount of CPU and I/O resources. Deployers can re-enable the search by setting security_find_world_writable_dirs to yes.
Upgrade Notes¶
The glance registry service for the v2 API is now disabled by default as it is not required and is scheduled to be removed in the future. The service can be enabled by setting glance_enable_v2_registry to True. As the glance v1 API is still enabled by default, and it requires the registry service, the glance-registry service will still remain running and operational as before. If the variable glance_enable_v1_api is set to False then both the v1 API and the registry service will be disabled and removed.
The LXC container create option lxc_container_backing_store is now defined by default and has a value of "dir". Prior to this release the backend store option used several auto-detection methods to try to guess the store type based on facts fed into the role and derived from the physical host. While the auto-detection methods worked, they created a cumbersome set of conditionals and limited our ability to leverage additional container stores. Having this option be a default allows deployers to mix and match container stores to suit the needs of the deployment. Existing deployments should set this option within group or user variables to ensure there is no change in the backend store when new containers are provisioned.
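A sketch of pinning the backing store explicitly, as recommended above for existing deployments:

```yaml
# Pin the container backing store; "dir" is the new default.
lxc_container_backing_store: dir
# For LVM-backed hosts, use instead:
# lxc_container_backing_store: lvm
```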
Deprecation Notes¶
The glance_enable_v1_registry variable has been removed. When using the glance v1 API the registry service is required, so having a variable to disable it makes little sense. The service is now enabled/disabled for the v1 API using the glance_enable_v1_api variable.
Bug Fixes¶
When the glance_enable_v2_registry variable is set to True, the corresponding data_api setting is now correctly set. Previously it was not set, and therefore the API service was not correctly informed that the registry was operating.
Other Notes¶
The LXC container create role will now check for the LXC volume group if the option lxc_container_backing_store is set to “lvm”. If this volume group is not found, the role will halt and instruct the deployer to update their configuration options and inspect their host setup.
16.0.4¶
New Features¶
Adds a new flag to enable the Octavia V2 API (disabled by default) to facilitate running Octavia standalone (without Neutron).
Adds a new flag to toggle Octavia V1 API (the API needed to run in conjunction with Neutron) and enables it by default.
Deployers can set lxc_hosts_opensuse_mirror_url to use their preferred mirror for the openSUSE repositories. They can also set lxc_hosts_opensuse_mirror_obs_url if they want to set a different mirror for the OBS repositories. If they want to use the same mirror in both cases then they can leave the latter variable at its default value. The full list of mirrors and their capabilities can be obtained at http://mirrors.opensuse.org/
Deployers can set pip_install_opensuse_mirror_url to use their preferred mirror for the openSUSE repositories. They can also set pip_install_opensuse_mirror_obs_url if they want to set a different mirror for the OBS repositories. If they want to use the same mirror in both cases then they can leave the latter variable at its default value. The full list of mirrors and their capabilities can be obtained at http://mirrors.opensuse.org/
The tempest_images data structure for the os_tempest role now expects the values for each image to include name (optionally) and format (the disk format). Also, the optional variable checksum may be used to set the checksum expected for the file in the format <algorithm>:<checksum>.
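A sketch of the expected structure; only name, format and checksum are named in the note, so the url key and all values shown are assumptions for illustration:

```yaml
tempest_images:
  - url: "http://images.example.com/cirros-disk.img"  # assumed key, example URL
    name: example-image            # optional
    format: qcow2                  # the disk format
    checksum: "sha256:0123abcd"    # optional, <algorithm>:<checksum> (example value)
```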
The default location for the image downloads in the os_tempest role, set by the tempest_image_dir variable, has now been changed to /opt/cache/files in order to match the default location in nodepool. This improves the reliability of CI testing in OpenStack CI as it will find the file already cached there.
A new variable has been introduced into the os_tempest role named tempest_image_downloader. When set to deployment-host (which is the default) it uses the deployment host to handle the download of images to be used for tempest testing. The images are then uploaded to the target host for uploading into Glance.
Enable Kernel Shared Memory support by setting nova_compute_ksm_enabled to True.
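As a sketch in user_variables.yml:

```yaml
# Enable Kernel Shared Memory on compute hosts.
nova_compute_ksm_enabled: True
```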
Deprecation Notes¶
The following variables have been removed from the os_tempest role to simplify it. They have been replaced through the use of the tempest_images data structure, which now has equivalent variables per image.

- cirros_version
- tempest_img_url
- tempest_image_file
- tempest_img_disk_format
- tempest_img_name
- tempest_images.sha256 (replaced by checksum)
Security Issues¶
The following headers were added as additional default (and static) values: X-Content-Type-Options nosniff, X-XSS-Protection "1; mode=block", and Content-Security-Policy "default-src 'self' https: wss:;". Additionally, the X-Frame-Options header was added, defaulting to DENY. You may override that header via the keystone_x_frame_options variable.
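For example, relaxing the frame-options default; the value shown is illustrative:

```yaml
# Allow same-origin framing instead of the DENY default.
keystone_x_frame_options: SAMEORIGIN
```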
Bug Fixes¶
The os_tempest role was downloading images twice - once arbitrarily, and once to use for testing. This has been consolidated into a single download to a consistent location.
16.0.3¶
New Features¶
The installation of Erlang is now optimized for CentOS. Erlang 19.x is now installed via a single package that is maintained by RabbitMQ developers and provides the minimal features required for RabbitMQ to function. It also includes HiPE support for increased performance.
The version of Erlang is kept constant using yum’s versionlock plugin.
RabbitMQ is now installed via an RPM repository provided by RabbitMQ developers. The version is kept constant via yum’s versionlock plugin. This allows the tasks to lock the RabbitMQ version to a particular revision and prevent changes to that version.
Upgrade Notes¶
The ceph-ansible integration has been updated to support the ceph-ansible v3.0 series tags. The new v3.0 series brings a significant refactoring of the ceph-ansible roles and vars, so it is strongly recommended to consult the upstream ceph-ansible documentation to perform any required vars migrations before you upgrade.
The ceph-ansible common roles are no longer namespaced with a galaxy-style '.' (i.e. ceph.ceph-common is now cloned as ceph-common), due to a change in the way upstream meta dependencies are handled in the ceph roles. The roles will be cloned according to the new naming, and an upgrade playbook ceph-galaxy-removal.yml has been added to clean up the stale galaxy-named roles.
Critical Issues¶
The ceph-ansible integration has been updated to support the ceph-ansible v3.0 series tags. The new v3.0 series brings a significant refactoring of the ceph-ansible roles and vars, so it is strongly recommended to consult the upstream ceph-ansible documentation to perform any required vars migrations before you upgrade.
Bug Fixes¶
The sysstat package was installed on all distributions, but it was only configured to run on Ubuntu and openSUSE. It would not run on CentOS due to bad SELinux contexts and file permissions on /etc/cron.d/sysstat. This has been fixed and sysstat now runs properly on CentOS.
16.0.2¶
Security Issues¶
The net.bridge.bridge-nf-call-* kernel parameters were set to 0 in previous releases to improve performance, and it was left up to neutron to adjust these parameters when security groups are applied. This could cause situations where bridge traffic was not sent through iptables, rendering security groups ineffective and allowing unexpected ingress and egress traffic within the cloud.

These kernel parameters are now set to 1 on all hosts by the openstack_hosts role, which ensures that bridge traffic is always sent through iptables.
16.0.1¶
New Features¶
The Ceph stable release used by openstack-ansible and its ceph-ansible integration has been changed to the recent Ceph LTS Luminous release.
Upgrade Notes¶
The Ceph stable release used by openstack-ansible and its ceph-ansible integration has been changed to the recent Ceph LTS Luminous release.
16.0.0¶
Prelude¶
The first release of the Red Hat Enterprise Linux 7 STIG was entirely renumbered from the pre-release versions. Many of the STIG configurations simply changed numbers, but some were removed or changed. A few new configurations were added as well.
New Features¶
CentOS7/RHEL support has been added to the ceph_client role.
Only Ceph repos are supported for now.
There is now experimental support to deploy OpenStack-Ansible on CentOS 7 for both development and test environments.
Simplifies configuration of lbaas-mgmt network.
Adds iptables rules to block traffic from the octavia management network to the octavia container for both ipv4 and ipv6.
Experimental support has been added to allow the deployment of the OpenStack Octavia Load Balancing service when hosts are present in the host group octavia-infra_hosts.
OpenStack-Ansible now supports the openSUSE Leap 42.X distributions mainly targeting the latest 42.3 release.
The os_swift role now supports the swift3 middleware, allowing access to swift via the Amazon S3 API. This feature can be enabled by setting swift_swift3_enabled to true.
A variable named bootstrap_user_variables_template has been added to the bootstrap-host role so the user can define the user variable template filename for AIO deployments.
Adds a way for the system to automatically create the Octavia management network if octavia_service_net_setup is enabled (the default). Additional parameters can control the setup.
Adds support for glance-image-id and automatic uploading of the image if octavia_amp_image_upload_enabled is True (default is False). This is mostly to work around the limitations of Ansible's OpenStack support and should not be used in production settings. Instead, refer to the documentation to upload images yourself.
Deployers can now specify a custom package name or URL for an EPEL release package. CentOS systems use epel-release by default, but some deployers have a customized package that redirects servers to internal mirrors.
New variables have been added to allow a deployer to customize a aodh systemd unit file to their liking.
The task dropping the aodh systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_aodh role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the aodh_*_init_config_overrides variables which use the config_template task to change template defaults.
New variables have been added to allow a deployer to customize a barbican systemd unit file to their liking.
The task dropping the barbican systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_barbican role, the systemd unit TimeoutSec value, which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service, has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value, which controls the time between the service stop and start when restarting, has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the barbican_*_init_config_overrides variables which use the config_template task to change template defaults.
The number of worker threads for neutron will now be capped at 16 unless a specific value is specified. Previously, the calculated number of workers could get too high on systems with a large number of processors. This was particularly evident on POWER systems.
Capping the default value for the variable aodh_wsgi_processes to 16 when the user doesn't configure this variable. Default value is twice the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variable cinder_osapi_volume_workers to 16 when the user doesn't configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variables glance_api_workers and glance_registry_workers to 16 when the user doesn't configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variable gnocchi_wsgi_processes to 16 when the user doesn't configure this variable. Default value is twice the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variables heat_api_workers and heat_engine_workers to 16 when the user doesn't configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variables horizon_wsgi_processes and horizon_wsgi_threads to 16 when the user doesn't configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variable ironic_wsgi_processes to 16 when the user doesn't configure this variable. Default value is one fourth the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variable keystone_wsgi_processes to 16 when the user doesn't configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variables neutron_api_workers, neutron_num_sync_threads and neutron_metadata_workers to 16 when the user doesn't configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variables nova_wsgi_processes, nova_osapi_compute_workers, nova_metadata_workers and nova_conductor_workers to 16 when the user doesn't configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variable repo_nginx_workers to 16 when the user doesn't configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variable sahara_api_workers to 16 when the user doesn't configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.

Capping the default value for the variable swift_proxy_server_workers to 16 when the user doesn't configure this variable and if the swift proxy is in a container. Default value is half the number of vCPUs available on the machine if the swift proxy is not in a container. Default value is half the number of vCPUs available on the machine with a capping value of 16 if the proxy is in a container.
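The caps above only apply when a variable is left unset; setting it explicitly bypasses the capped default. A sketch, using one of the variables named above with an illustrative value:

```yaml
# An explicitly configured worker count is used as-is,
# even above the capped default of 16.
keystone_wsgi_processes: 24
```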
New variables have been added to allow a deployer to customize a ceilometer systemd unit file to their liking.
The task dropping the ceilometer systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_ceilometer role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ceilometer_*_init_config_overrides variables which use the config_template task to change template defaults.
Several configuration files that were not templated for the os_ceilometer role are now retrieved from git. The git repository used can be changed using the ceilometer_git_config_lookup_location variable. By default this points to git.openstack.org. These files can still be changed using the ceilometer_x_overrides variables.
Deployers can set
pip_install_centos_mirror_url
to use their preferred mirror for the RDO repositories.
A new variable called
ceph_extra_components
is available for the ceph_client role. Extra components, packages, and services that are not shipped by default by OpenStack-Ansible can be defined here.
Added
cinder_auth_strategy
variable to configure Cinder’s auth strategy since Cinder can work in noauth mode as well.
The os_cinder role now provides for doing online data migrations once the db sync has been completed. The data migrations will not be executed until the boolean variable cinder_all_software_updated is true. This variable will need to be set by the playbook consuming the role.
New variables have been added to allow a deployer to customize a cinder systemd unit file to their liking.
The task dropping the cinder systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
The
os-cinder-install.yml
playbook will now execute a rolling upgrade of cinder including database migrations (both schema and online) as per the procedure described in the cinder documentation. When haproxy is used as the load balancer, the backend being changed will be drained before changes are made, then added back to the pool once the changes are complete.
Add support for the cinder v3 api. This is enabled by default, but can be disabled by setting the cinder_enable_v3_api variable to false.
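A minimal override sketch for /etc/openstack_deploy/user_variables.yml, for deployers who want the v3 API turned off:

```yaml
# user_variables.yml
# The cinder v3 API is enabled by default; disable it explicitly.
cinder_enable_v3_api: false
```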
For the os_cinder role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the cinder_*_init_config_overrides variables which use the config_template task to change template defaults.
The cinder-api service has moved to run as a uWSGI application. You can set the max number of WSGI processes, the number of processes, threads, and buffer size utilizing the cinder_wsgi_processes_max, cinder_wsgi_processes, cinder_wsgi_threads, and cinder_wsgi_buffer_size variables. Additionally, you can override any settings in the uWSGI ini configuration file using the cinder_api_uwsgi_ini_overrides setting. The uWSGI application will listen on the address specified by cinder_uwsgi_bind_address, which defaults to 0.0.0.0.
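For example, the tuning variables named above could be set together in user_variables.yml; the numbers are illustrative, not recommendations:

```yaml
# user_variables.yml -- illustrative values only
cinder_wsgi_processes_max: 16
cinder_wsgi_processes: 4
cinder_wsgi_threads: 2
cinder_wsgi_buffer_size: 65535
# Arbitrary uWSGI ini settings can also be overridden:
cinder_api_uwsgi_ini_overrides:
  uwsgi:
    # harakiri is a standard uWSGI option (worker timeout in seconds)
    harakiri: 60
```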
Tags have been added to all of the common tasks with the prefix “common-”. This has been done to allow a deployer to rapidly run any of the common tasks on an as-needed basis without having to rerun an entire playbook.
The config_template template module now supports writing out valueless INI options without suffixing them with ‘=’ or ‘:’. This is done via the ‘ignore_none_type’ attribute. If ignore_none_type is set to true, these key/value entries will be ignored, if it’s set to false, then ConfigTemplateParser will write out only the option name without the ‘=’ or ‘:’ suffix. The default is true.
The COPR repository for installing LXC on CentOS 7 is now set to a higher priority than the default to ensure that LXC packages always come from the COPR repository.
Deployers can provide a customized login banner via a new Ansible variable:
security_login_banner_text
. This banner text is used for non-graphical logins, which includes console and ssh logins.
The os_ceilometer role now includes a facility where you can place your own templates in /etc/openstack_deploy/ceilometer (by default) and they will be deployed to the target host after being interpreted by the template engine. If no file is found there, the fallback of the git sourced template is used.
The Designate pools.yaml file can now be generated via the designate_pools_yaml attribute, if desired. This allows users to populate the Designate DNS server configuration using attributes from other plays and obviates the need to manage the file outside of the Designate role.
For the os_designate role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the designate_*_init_config_overrides variables which use the config_template task to change template defaults.
A new repository for installing modern erlang from ESL (erlang solutions) has been added giving us the ability to install and support modern stable erlang over numerous operating systems.
The ability to set the RabbitMQ repo URL for both erlang and RabbitMQ itself has been added. This has been done to allow deployers to define the location of a given repo without having to fully redefine the entire set of definitions for a specific repository. The default variables rabbitmq_gpg_keys, rabbitmq_repo_url, and rabbitmq_erlang_repo_url have been created to facilitate this capability.
Extra headers can be added to Keystone responses by adding items to keystone_extra_headers. Example:

keystone_extra_headers:
  - parameter: "Access-Control-Expose-Headers"
    value: "X-Subject-Token"
  - parameter: "Access-Control-Allow-Headers"
    value: "Content-Type, X-Auth-Token"
  - parameter: "Access-Control-Allow-Origin"
    value: "*"
Fedora 26 is now supported.
The galera_client role will default to using the galera_repo_url URL if a value for it is set. This simplifies using an alternative mirror for the MariaDB server and client, as only one variable needs to be set to cover them both.
The get_nested filter has been added, allowing for simplified value lookups inside of nested dictionaries. ansible_local|get_nested('openstack_ansible.swift'), for example, will look 2 levels down and return the result.
New variables have been added to allow a deployer to customize a glance systemd unit file to their liking.
The task dropping the glance systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_glance role, the systemd unit RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. This value can be adjusted by using the glance_*_init_config_overrides variables which use the config_template task to change template defaults.
The glance-api service has moved to run as a uWSGI application. You can set the max number of WSGI processes, the number of processes, threads, and buffer size utilizing the glance_wsgi_processes_max, glance_wsgi_processes, glance_wsgi_threads, and glance_wsgi_buffer_size variables. Additionally, you can override any settings in the uWSGI ini configuration file using the glance_api_uwsgi_ini_overrides setting.
The os_gnocchi role now includes a facility where you can place your own default api-paste.ini or policy.json file in /etc/openstack_deploy/gnocchi (by default) and it will be deployed to the target host after being interpreted by the template engine.
New variables have been added to allow a deployer to customize a gnocchi systemd unit file to their liking.
The task dropping the gnocchi systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_gnocchi role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the gnocchi_*_init_config_overrides variables which use the config_template task to change template defaults.
Several configuration files that were not templated for the os_gnocchi role are now retrieved from git. The git repository used can be changed using the gnocchi_git_config_lookup_location variable. By default this points to git.openstack.org. These files can still be changed using the gnocchi_x_overrides variables.
From now on, a deployer can override any group_var in userspace, by creating a folder
/etc/openstack_deploy/group_vars/
. This folder has precedence over OpenStack-Ansible default group_vars, and the merge behavior is similar to Ansible's merge behavior. The group_vars folder precedence can still be changed with GROUP_VARS_PATH. The same applies to host vars.
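For instance, a group variable can be overridden by dropping a YAML file into the new folder; the group name used below is one of OpenStack-Ansible's standard groups, but the file and value are hypothetical:

```yaml
# /etc/openstack_deploy/group_vars/glance_all.yml (hypothetical example)
# Applies to every host in the glance_all group and takes precedence
# over OpenStack-Ansible's bundled group_vars.
glance_wsgi_processes: 2
```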
The new option haproxy_backend_arguments can be utilized to add arbitrary options to an HAProxy backend, like tcp-check or http-check.
New variables have been added to allow a deployer to customize a heat systemd unit file to their liking.
The task dropping the heat systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_heat role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the heat_*_init_config_overrides variables which use the config_template task to change template defaults.
The heat-api, heat-api-cfn, and heat-api-cloudwatch services have moved to run as uWSGI applications. You can set the max number of WSGI processes, the number of processes, threads, and buffer size utilizing the heat_wsgi_processes_max, heat_wsgi_processes, heat_wsgi_threads, and heat_wsgi_buffer_size variables. Additionally, you can override any settings in the uWSGI ini configuration files using the heat_api_uwsgi_ini_overrides, heat_api_cfn_uwsgi_ini_overrides, and heat_api_cloudwatch_uwsgi_ini_overrides settings. The uWSGI applications will listen on the addresses specified by heat_api_uwsgi_bind_address, heat_api_cfn_uwsgi_bind_address, and heat_api_cloudwatch_uwsgi_bind_address respectively, all of which default to 0.0.0.0.
It’s now possible to disable the heat stack password field in horizon. The horizon_enable_heatstack_user_pass variable has been added and defaults to True.
The horizon_images_allow_location variable is added to support the IMAGES_ALLOW_LOCATION setting in the horizon local_settings.py file, allowing an external location to be specified during image creation.
Allows SSL connections to Galera. The galera_use_ssl option has to be set to true; in this case a self-signed CA cert or a user-provided CA cert will be delivered to the container/host.
Implements SSL connection ability to MySQL. The galera_use_ssl option has to be set to true (the default); in this case the playbooks create a self-signed SSL bundle and set up the MySQL configs to use it, or distribute a user-provided bundle throughout the Galera nodes.
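A sketch of the relevant toggle in user_variables.yml:

```yaml
# user_variables.yml
# Enable SSL for MySQL/Galera; the playbooks create a self-signed
# bundle unless a user-provided one is distributed instead.
galera_use_ssl: true
```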
The haproxy-server role now allows tunable parameters to be set. To do so, define a dictionary of options in the config files listing the ones that have to be changed (defaults for the remaining ones are set in the template). The “maxconn” global option has also been made tunable.
New variables have been added to allow a deployer to customize an ironic systemd unit file to their liking.
The task dropping the ironic systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_ironic role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ironic_*_init_config_overrides variables which use the config_template task to change template defaults.
New variables have been added to allow a deployer to customize a keystone systemd unit file to their liking.
The task dropping the keystone systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
The os_keystone role will now (by default) source the keystone-paste.ini, policy.json and sso_callback_template.html templates from the service git source instead of from the role. It also now includes a facility where you can place your own templates in /etc/openstack_deploy/keystone (by default) and they will be deployed to the target host after being interpreted by the template engine.
For the os_keystone role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the keystone_*_init_config_overrides variables which use the config_template task to change template defaults.
The default behaviour of
ensure_endpoint
in the keystone module has changed to update an existing endpoint, if one exists that matches the service name, type, region and interface. This ensures that no duplicate service entries can exist per region.
It is now possible to use the horizon_launch_instance_defaults variable that allows customizing the default values for properties found in the Launch Instance modal, using the LAUNCH_INSTANCE_DEFAULTS config option. See https://docs.openstack.org/developer/horizon/install/settings.html#launch-instance-defaults
Removed the dependency on cinder_backends_rbd_inuse in nova.conf when setting the rbd_user and rbd_secret_uuid variables. Cinder delivers all necessary values via RPC when attaching the volume, so those variables are only necessary for ephemeral disks stored in Ceph. These variables must be set on the cinder-volume side under the backend section.
LXC on CentOS is now installed via package from a COPR repository rather than installed from the upstream source.
In the lxc_container_create role, the keys preup, postup, predown, and postdown are now supported in the container_networks dict for Ubuntu systems. This allows operators to configure custom scripts to be run by Ubuntu’s ifupdown system when network interface states are changed.
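A hedged sketch of what such a hook might look like in a container_networks entry; the network name, addressing, and script path are invented for illustration:

```yaml
# Hypothetical container_networks entry (Ubuntu ifupdown systems only).
container_networks:
  management_address:
    address: 172.29.236.100
    netmask: 255.255.252.0
    bridge: br-mgmt
    # Custom script run by ifupdown after the interface comes up.
    postup: /usr/local/bin/example-postup.sh
```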
The variable
lxc_image_cache_server_mirrors
has been added to the “lxc_hosts” role. This is a list type variable and gives deployers the ability to specify multiple lxc-image mirrors at the same time.
The variable lxc_net_manage_iptables has been added. This variable can be overridden by deployers if system-wide iptables rules are already in place or managed by the deployer’s own choice of tooling.
New variables have been added to allow a deployer to customize a magnum systemd unit file to their liking.
The task dropping the magnum systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_magnum role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the magnum_*_init_config_overrides variables which use the config_template task to change template defaults.
The repo server file system structure has been updated to allow multiple operating systems running multiple architectures to be served at the same time from a single server without impacting pools, venvs, wheel archives, and manifests. The new structure follows the pattern $RELEASE/$OS_TYPE-$ARCH and has been applied to os-releases, venvs, and pools.
The dragonflow plugin for neutron is now available. You can set the neutron_plugin_type to ml2.dragonflow to utilize this code path. The dragonflow code path is currently experimental.
New variables have been added to allow a deployer to customize a neutron systemd unit file to their liking.
The task dropping the neutron systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
The
os-neutron-install.yml
playbook will now execute a rolling upgrade of neutron including database migrations (both expand and contract) as per the procedure described in the neutron documentation.
For the os_neutron role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the neutron_*_init_config_overrides variables which use the config_template task to change template defaults.
The os_nova role now provides for doing online data migrations once the db sync has been completed. The data migrations will not be executed until the boolean variable nova_all_software_updated is true. This variable will need to be set by the playbook consuming the role.
New variables have been added to allow a deployer to customize a nova systemd unit file to their liking.
The task dropping the nova systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
The nova-placement service is now configured by default. nova_placement_service_enabled can be set to False to disable the nova-placement service.
The nova-placement api service will run as its own ansible group, nova_api_placement.
Nova cell_v2 support has been added. The default cell is cell1, which can be overridden by the nova_cell1_name variable. Support for multiple cells is not yet available.
The
os-nova-install.yml
playbook will now execute a rolling upgrade of nova including database migrations as per the procedure described in the nova documentation.
Nova may now use an encrypted database connection. This is enabled by setting nova_galera_use_ssl to True.
For the os_nova role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the nova_*_init_config_overrides variables which use the config_template task to change template defaults.
The nova-api and nova-metadata services have moved to run as uWSGI applications. You can override their uwsgi configuration files using the nova_api_os_compute_uwsgi_ini_overrides and nova_api_metadata_uwsgi_ini_overrides settings.
New variables have been added to allow a deployer to customize an octavia systemd unit file to their liking.
The task dropping the octavia systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_octavia role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the octavia_*_init_overrides variables which use the config_template task to change template defaults.
The octavia-api service has moved to run as a uWSGI application. You can set the max number of WSGI processes, the number of processes, threads, and buffer size utilizing the octavia_wsgi_processes_max, octavia_wsgi_processes, octavia_wsgi_threads, and octavia_wsgi_buffer_size variables. Additionally, you can override any settings in the uWSGI ini configuration file using the octavia_api_uwsgi_ini_overrides setting. The uWSGI application will listen on the address specified by octavia_uwsgi_bind_address, which defaults to 0.0.0.0.
Deployers may provide a list of custom haproxy template files to copy from the deployment host through the octavia_user_haproxy_templates variable, and configure Octavia to make use of a custom haproxy template file with the octavia_haproxy_amphora_template variable.
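A sketch under stated assumptions: the paths are invented, and the list-of-dicts shape for octavia_user_haproxy_templates is an assumption rather than a documented schema:

```yaml
# user_variables.yml -- hypothetical paths and structure
octavia_user_haproxy_templates:
  - src: /etc/openstack_deploy/octavia/haproxy.cfg.j2
    dest: /etc/octavia/templates/haproxy.cfg.j2
octavia_haproxy_amphora_template: /etc/octavia/templates/haproxy.cfg.j2
```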
The password minimum and maximum lifetimes are now opt-in changes that can take action against user accounts instead of printing debug warnings. Refer to the documentation for STIG requirements V-71927 and V-71931 to review the opt-in process and warnings.
Added the lxc_container_recreate option, which will destroy then recreate LXC containers. The container names and IP addresses will remain the same, as will the MAC addresses of any containers using the lxc_container_fixed_mac setting.
Gnocchi is now used as the default publisher.
In the Ocata release, Trove added support for encrypting the rpc communication between the guest DBaaS instances and the control plane. The default values for trove_taskmanager_rpc_encr_key and trove_inst_rpc_key_encr_key should be overridden to specify installation-specific values.
Added a storage policy so that deployers can override how logs are stored. per_host stores logs in a sub-directory per host. per_program stores logs in a single file per application, which makes troubleshooting easier.
New variables have been added to allow a deployer to customize a sahara systemd unit file to their liking.
The task dropping the sahara systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_sahara role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the sahara_*_init_config_overrides variables which use the config_template task to change template defaults.
The sahara-api service has moved to run as a uWSGI application. You can set the max number of WSGI processes, the number of processes, threads, and buffer size utilizing the sahara_wsgi_processes_max, sahara_wsgi_processes, sahara_wsgi_threads, and sahara_wsgi_buffer_size variables. Additionally, you can override any settings in the uWSGI ini configuration file using the sahara_api_uwsgi_ini_overrides setting. The uWSGI application will listen on the address specified by sahara_uwsgi_bind_address, which defaults to 0.0.0.0.
MAC addresses for containers with a fixed MAC (lxc_container_fixed_mac variable) are now saved to the
/etc/ansible/facts.d/mac.fact
file. Should such a container be destroyed but not removed from inventory, the interfaces will be recreated with the same MAC address when the container is recreated.
You can set the endpoint_type used when creating the Trove service network by specifying the trove_service_net_endpoint_type variable. This will default to internal. Other possible options are public and admin.
The ability to disable the certificate validation when checking and interacting with the internal cinder endpoint has been implemented. In order to do so, set the following in /etc/openstack_deploy/user_variables.yml:

cinder_service_internaluri_insecure: yes
The role now supports SUSE based distributions. Required packages can now be installed using the zypper package manager.
New variables have been added to allow a deployer to customize a swift systemd unit file to their liking.
The task dropping the swift systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_swift role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the swift_*_init_config_overrides variables which use the config_template task to change template defaults.
Swift container-sync has been updated to use internal-client. This means a new configuration file, internal-client.conf, has been added. Configuration can be overridden using the variable swift_internal_client_conf_overrides.
New variables have been added to allow a deployer to customize a trove systemd unit file to their liking.
The task dropping the trove systemd unit files now uses the
config_template
action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
For the os_trove role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the trove_*_init_config_overrides variables which use the config_template task to change template defaults.
Add support for Ubuntu on IBM z Systems (s390x).
The default ulimit for RabbitMQ is now 65536. Deployers can still adjust this limit using the
rabbitmq_ulimit
Ansible variable.
Added a new variable tempest_volume_backend_names and updated templates/tempest.conf.j2 to point backend_names at this variable.
You can force an update of the translations directly from Zanata by setting horizon_translations_update to True. This will call the pull_catalog option built into horizon-manage.py. You should only use this when testing translations; otherwise this should remain set to the default of False.
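A minimal example of the toggle described above, for a translation-testing run only:

```yaml
# user_variables.yml -- pull the latest translation catalogs from Zanata
# (via horizon-manage.py pull_catalog) on the next horizon playbook run.
# Testing only; leave at the default of False otherwise.
horizon_translations_update: True
```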
The deployer can now define an environment variable GROUP_VARS_PATH with the folders of its choice (separated by the colon sign) to define a user space group_vars folder. These vars will apply but are (currently) overridden by the OpenStack-Ansible default group vars, by the set facts, and by the user_* variables. If the deployer defines multiple paths, the variables found are merged, and precedence increases from left to right (the last defined in GROUP_VARS_PATH wins).
The deployer can now define an environment variable HOST_VARS_PATH with the folders of its choice (separated by the colon sign) to define a user space host_vars folder. These vars will apply but are (currently) overridden by the OpenStack-Ansible default host vars, by the set facts, and by the user_* variables. If the deployer defines multiple paths, the variables found are merged, and precedence increases from left to right (the last defined in HOST_VARS_PATH wins).
Known Issues¶
Ceph storage backend is known not to work on openSUSE Leap 42.X yet. This is due to missing openSUSE support in the upstream Ceph Ansible playbooks.
There is currently an Ansible bug in regards to HOSTNAME. If the host’s .bashrc holds a var named HOSTNAME, the container where the lxc_container module attaches will inherit this var and potentially set the wrong $HOSTNAME. See the Ansible fix, which will be released in Ansible version 2.3.
OpenStack-Ansible sets a new variable, galera_disable_privatedevices, that controls whether the PrivateDevices configuration in MariaDB’s systemd unit file is enabled.
If the galera_server role is deployed on a bare metal host, the MariaDB default is maintained (PrivateDevices=true). If the galera_server role is deployed within a container, the PrivateDevices configuration is set to false to work around a systemd bug with a bind mounted /dev/ptmx.
See Launchpad Bug 1697531 for more details.
OpenStack-Ansible sets a new variable, memcached_disable_privatedevices, that controls whether the PrivateDevices configuration in MemcacheD’s systemd unit file is enabled.
If the memcached_server role is deployed on a bare metal host, the default is maintained (PrivateDevices=true). If the role is deployed within a container, the PrivateDevices configuration is set to false to work around a systemd bug with a bind mounted /dev/ptmx.
See Launchpad Bug 1697531 for more details.
MemcacheD sets PrivateDevices=true in its systemd unit file to add extra security around mount namespaces. While this is useful when running MemcacheD on a bare metal host with other services, it is less useful when MemcacheD is already in a container with its own namespaces. In addition, LXC 2.0.8 presents /dev/ptmx as a bind mount within the container and systemd 219 (on CentOS 7) cannot make an additional bind mount of /dev/ptmx when PrivateDevices is enabled.
Deployers can set memcached_disable_privatedevices to yes to set PrivateDevices=false in the systemd unit file for MemcacheD on CentOS 7. The default is no, which keeps the default systemd unit file settings from the MemcacheD package.
For additional information, refer to the following bugs:
MariaDB 10.1+ includes PrivateDevices=true in its systemd unit files to add extra security around mount namespaces for MariaDB. While this is useful when running MariaDB on a bare metal host with other services, it is less useful when MariaDB is already in a container with its own namespaces. In addition, LXC 2.0.8 presents /dev/ptmx as a bind mount within the container and systemd 219 (on CentOS 7) cannot make an additional bind mount of /dev/ptmx when PrivateDevices is enabled.
Deployers can set galera_disable_privatedevices to yes to set PrivateDevices=false in the systemd unit file for MariaDB on CentOS 7. The default is no, which keeps the default systemd unit file settings from the MariaDB package.
For additional information, refer to the following bugs:
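The two toggles described above can be set together when both services run in containers affected by the /dev/ptmx bind-mount issue:

```yaml
# user_variables.yml -- force PrivateDevices=false in the MariaDB and
# MemcacheD systemd unit files on CentOS 7 containers.
galera_disable_privatedevices: yes
memcached_disable_privatedevices: yes
```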
Upgrade Notes¶
For the os_aodh role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the aodh_*_init_config_overrides variables which use the config_template task to change template defaults.
For the os_barbican role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the barbican_*_init_config_overrides variables which use the config_template task to change template defaults.
For the os_ceilometer role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ceilometer_*_init_config_overrides variables which use the config_template task to change template defaults.
The variables cinder_sigkill_timeout and cinder_restart_wait have been removed. The previous default values have now been set in the template directly and can be adjusted by using the cinder_*_init_overrides variables which use the config_template task to change template defaults.
The EPEL repository is only installed and configured when the deployer sets security_enable_virus_scanner to yes. This allows the ClamAV packages to be installed. If security_enable_virus_scanner is set to no (the default), the EPEL repository will not be added. See Bug 1702167 for more details.
Deployers now have the option to prevent the EPEL repository from being installed by the role. Setting security_epel_install_repository to no prevents EPEL from being installed. This setting may prevent certain packages, such as ClamAV, from being installed.
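Combining the two EPEL-related toggles above, a deployment that wants no EPEL repository and no ClamAV might set:

```yaml
# user_variables.yml -- keep the security role from configuring EPEL;
# ClamAV packages may then be unavailable, so the scanner stays off too.
security_epel_install_repository: no
security_enable_virus_scanner: no
```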
The following variables have been removed from the os_ceilometer role as their respective upstream files are no longer present.
- ceilometer_event_definitions_yaml_overrides
- ceilometer_event_pipeline_yaml_overrides
The Designate pools.yaml file can now be generated via the designate_pools_yaml attribute, if desired. This ability is toggled by the designate_use_pools_yaml_attr attribute. In the future this behavior may become default and designate_pools_yaml may become a required variable.
For the os_designate role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the designate_*_init_config_overrides variables which use the config_template task to change template defaults.
The endpoint which designate uses to communicate with neutron has been set to the internalURL by default. This change has been done within the template designate.conf.j2 and can be changed using the designate_designate_conf_overrides variable.
Changing to the ESL repos has no upgrade impact. The version of erlang provided by ESL is newer than what is found in the distro repos. Furthermore, a pin has been added to ensure that APT always uses the ESL repos as its preferred source.
For the os_glance role, the systemd unit RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. This value can be adjusted by using the glance_*_init_config_overrides variables which use the config_template task to change template defaults.
For the os_gnocchi role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the gnocchi_*_init_config_overrides variables which use the config_template task to change template defaults.
For the os_heat role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the heat_*_init_config_overrides variables which use the config_template task to change template defaults.
The entire repo build process is now idempotent. From now on when the repo build is re-run, it will only fetch updated git repositories and rebuild the wheels/venvs if the requirements have changed, or a new release is being deployed.
The git clone part of the repo build process now only happens when the requirements change. A git reclone can be forced by using the boolean variable repo_build_git_reclone.
The python wheel build process now only happens when requirements change. A wheel rebuild may be forced by using the boolean variable repo_build_wheel_rebuild.
The python venv build process now only happens when requirements change. A venv rebuild may be forced by using the boolean variable repo_build_venv_rebuild.
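The three force-rebuild booleans described above can be set together to bypass the idempotency checks for one full run:

```yaml
# user_variables.yml -- force a complete repo rebuild on the next
# repo-build run even when the requirements have not changed.
repo_build_git_reclone: yes
repo_build_wheel_rebuild: yes
repo_build_venv_rebuild: yes
```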
The repo build process now only has the following tags, providing a clear path for each deliverable. The tag repo-build-install completes the installation of required packages. The tag repo-build-wheels completes the wheel build process. The tag repo-build-venvs completes the venv build process. Finally, the tag repo-build-index completes the manifest preparation and indexing of the os-releases and links folders.
The haproxy_bufsize variable has been removed and made a part of the haproxy_tuning_params dictionary.
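A sketch of setting the relocated value (the key name inside the dictionary is assumed here and should be verified against the haproxy_server role defaults):

```yaml
# user_variables.yml -- bufsize now lives inside the tuning dictionary
# rather than the removed standalone haproxy_bufsize variable.
haproxy_tuning_params:
  bufsize: 32768
```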
For the os_ironic role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ironic_*_init_config_overrides variables which use the config_template task to change template defaults.
If you had your own keepalived configuration file, please rename and move it to the openstack-ansible user space, for example by moving it to `/etc/openstack_deploy/keepalived/keepalived.yml`. Our haproxy playbook does not load external variable files anymore. The keepalived variable override system has been standardised to the same method used elsewhere.
The keystone endpoints now have versionless URLs. Any existing endpoints will be updated.
Keystone now uses uWSGI exclusively (instead of Apache with mod_wsgi) and has the web server acting as a reverse proxy. The default web server is now set to Nginx instead of Apache, but Apache will automatically be used if federation is configured.
For the os_keystone role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the keystone_*_init_config_overrides variables which use the config_template task to change template defaults.
The var lxc_container_ssh_delay along with SSH specific ping checks have been removed in favor of using Ansible’s wait_for_connection module, which will not rely on SSH to the container to verify connectivity. A new variable called lxc_container_wait_params has been added to allow configuration of the parameters passed to the wait_for_connection module.
The magnum client interaction will now make use of the public endpoints by default. Previously this was set to use internal endpoints.
The keystone endpoints for instances spawned by magnum will now be provided with the public endpoints by default. Previously this was set to use internal endpoints.
For the os_magnum role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the magnum_*_init_config_overrides variables which use the config_template task to change template defaults.
For the os_neutron role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the neutron_*_init_config_overrides variables which use the config_template task to change template defaults.
For the os_nova role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the nova_*_init_config_overrides variables which use the config_template task to change template defaults.
When upgrading nova, the cinder catalog_info will change to use the cinderv3 endpoint. Ensure that you have upgraded cinder so that the cinderv3 endpoint exists in the keystone catalog.
The nova-placement service now runs as a uWSGI application that is not fronted by an nginx web server by default. After upgrading, if the nova-placement service was running on a host or container without any other services requiring nginx, you should manually remove nginx.
For the os_octavia role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the octavia_*_init_overrides variables which use the config_template task to change template defaults.
The variable neutron_dhcp_domain has been renamed to neutron_dns_domain.
The neutron library has been removed from OpenStack-Ansible’s plugins. Upstream Ansible modules for managing OpenStack network resources should be used instead.
The ceilometer-api service/container can be removed as part of O->P upgrades. A ceilometer-central container will be created to contain the central ceilometer agents.
The following variables have been removed from the haproxy_server role as they are no longer necessary or used.
- haproxy_repo
- haproxy_gpg_keys
- haproxy_required_distro_packages
The EPEL repository is now removed in favor of the RDO repository.
This is a breaking change for existing CentOS deployments. The yum package manager will report errors when it finds that certain packages it installed from EPEL are no longer available. Deployers may need to rebuild containers or reinstall packages to complete this change.
For the os_sahara role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the sahara_*_init_config_overrides variables which use the config_template task to change template defaults.
For the os_swift role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the swift_*_init_config_overrides variables which use the config_template task to change template defaults.
The openstack_tempest_gate.sh script has been removed as it requires the use of the run_tempest.sh script which has been deprecated in Tempest. In order to facilitate the switch, the default for the variable tempest_run has been set to yes, forcing the role to execute tempest by default. This default can be changed by overriding the value to no. The test whitelist may be set through the list variable tempest_test_whitelist.
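An illustrative whitelist using the variables named above (the test names are example entries only):

```yaml
# user_variables.yml -- run tempest by default, limited to a whitelist.
tempest_run: yes
tempest_test_whitelist:
  - tempest.api.identity.v3
  - tempest.scenario.test_network_basic_ops
```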
For the os_trove role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the trove_*_init_config_overrides variables which use the config_template task to change template defaults.
Gnocchi service endpoint variables were not named correctly. Renamed variables to be consistent with other roles.
Deprecation Notes¶
The cinder_keystone_auth_plugin variable has been deprecated. cinder_keystone_auth_type should be used instead to configure authentication type.
The neutron_keystone_auth_plugin variable has been deprecated. neutron_keystone_auth_type should be used instead to configure authentication type.
The swift_keystone_auth_plugin variable has been deprecated. swift_keystone_auth_type should be used instead to configure authentication type.
The trove_keystone_auth_plugin variable has been deprecated. trove_keystone_auth_type should be used instead to configure authentication type.
The aodh_keystone_auth_plugin variable has been deprecated. aodh_keystone_auth_type should be used instead to configure authentication type.
The ceilometer_keystone_auth_plugin variable has been deprecated. ceilometer_keystone_auth_type should be used instead to configure authentication type.
The gnocchi_keystone_auth_plugin variable has been deprecated. gnocchi_keystone_auth_type should be used instead to configure authentication type.
The octavia_keystone_auth_plugin variable has been deprecated. octavia_keystone_auth_type should be used instead to configure authentication type.
Fedora 25 support is deprecated and no longer tested on each commit.
The variables galera_client_apt_repo_url and galera_client_yum_repo_url are deprecated in favour of the common variable galera_client_repo_url.
The variable keepalived_uca_enable is deprecated, and replaced by keepalived_ubuntu_src. The keepalived_uca_enable variable will be removed in future versions of the keepalived role. The value of keepalived_ubuntu_src should be either “uca”, “ppa”, or “native”, for respectively installing from the Ubuntu Cloud archive, from the keepalived stable ppa, or not installing from an external source.
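For example, to keep installing keepalived from the Ubuntu Cloud Archive under the new scheme:

```yaml
# user_variables.yml -- replaces the deprecated keepalived_uca_enable
# toggle; valid values are "uca", "ppa", or "native".
keepalived_ubuntu_src: "uca"
```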
The variable keepalived_use_latest_stable is deprecated, and replaced by keepalived_package_state. The keepalived_use_latest_stable variable will be removed in future versions of the keepalived role. The value of keepalived_package_state should be either “latest” or “present”.
The variables keystone_apache_enabled and keystone_mod_wsgi_enabled have been removed and replaced with a single variable keystone_web_server to optionally set the web server used for keystone.
The update state for the ensure_endpoint method of the keystone module is now deprecated, and will be removed in the Queens cycle. Setting state to present will achieve the same result.
The var lxc_container_ssh_delay along with SSH specific ping checks have been removed in favor of using Ansible’s wait_for_connection module, which will not rely on SSH to the container.
The variable lxc_image_cache_server has been deprecated in the “lxc_hosts” role. By default this value will pull the first item out of the lxc_image_cache_server_mirrors list, which is only done for compatibility (legacy) purposes. The default string type variable, lxc_image_cache_server, will be removed from the “lxc_hosts” role in the “R” release.
The gnocchi ceph component has been moved out as a default component required by the ceph_client role. It can now be optionally specified through the use of the ceph_extra_components variable.
Several nova.conf options that were deprecated have been removed from the os_nova role. The following OpenStack-Ansible variables are no longer used and should be removed from any variable override files.
- nova_dhcp_domain
- nova_quota_fixed_ips
- nova_quota_floating_ips
- nova_quota_security_group_rules
- nova_quota_security_groups
Settings related to nginx and the placement service will no longer serve any purpose, and should be removed. Those settings are as follows:
- nova_placement_nginx_access_log_format_extras
- nova_placement_nginx_access_log_format_combined
- nova_placement_nginx_extra_conf
- nova_placement_uwsgi_socket_port
- nova_placement_pip_packages
The variable repo_build_pip_extra_index has been removed. The replacement list variable repo_build_pip_extra_indexes should be used instead.
The ceilometer API service is now deprecated. OpenStack-Ansible no longer deploys this service. To make queries against metrics, alarms, and/or events, please use the gnocchi, aodh, and panko APIs, respectively.
Per https://review.openstack.org/#/c/413920/20, the ceilometer-collector service is now deprecated and its respective container is no longer deployed by default. Gnocchi is now used as the default publisher.
The upstream noVNC developers recommend that the keymap be automatically detected for virtual machine consoles. Three Ansible variables have been removed:
- nova_console_keymap
- nova_novncproxy_vnc_keymap
- nova_spice_console_keymap
Deployers can still set a specific keymap using a nova configuration override if necessary.
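A sketch of such a configuration override; the nova_nova_conf_overrides variable is the role’s generic config_template hook, and the section/option shown is an illustrative nova.conf setting:

```yaml
# user_variables.yml -- illustrative: pin a console keymap via a
# nova.conf override instead of the removed keymap variables.
nova_nova_conf_overrides:
  vnc:
    keymap: en-us
```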
The plumgrid network provider has been removed. This is being dropped without a full deprecation cycle because the company, plumgrid, no longer exists.
Remove cinder_glance_api_version option due to deprecation of glance_api_version option in Cinder.
The nova_cpu_mode Ansible variable has been removed by default, to allow Nova to detect the default value automatically. Hard-coded values can cause problems. You can still set nova_cpu_mode to enforce a cpu_mode for Nova. Additionally, the default value for the qemu libvirt_type is set to none to avoid issues caused with qemu 2.6.0.
Remove heat_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.
Remove octavia_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.
Remove keystone_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.
Remove cinder_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.
Remove trove_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.
Remove neutron_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.
Remove sahara_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.
Remove magnum_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.
Remove glance_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.
Remove designate_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.
Removed tempest_volume_backend1_name and tempest_volume_backend2_name since backend1_name and backend2_name were removed from tempest in commit 27905cc (merged 26/04/2016).
Critical Issues¶
A bug that caused the Keystone credential keys to be lost when the playbook is run during a rebuild of the first Keystone container has been fixed. Please see launchpad bug 1667960 for more details.
Security Issues¶
The security role will no longer fix file permissions and ownership based on the contents of the RPM database by default. Deployers can opt in for these changes by setting security_reset_perm_ownership to yes.
The magnum client interaction will now make use of the public endpoints by default. Previously this was set to use internal endpoints.
The keystone endpoints for instances spawned by magnum will now be provided with the public endpoints by default. Previously this was set to use internal endpoints.
Nova may now use an encrypted database connection. This is enabled by setting nova_galera_use_ssl to True.
The tasks that search for .shosts and shosts.equiv files (STIG ID: RHEL-07-040330) are now skipped by default. The search takes a long time to complete on systems with lots of files and it also causes a significant amount of disk I/O while it runs.
PermitRootLogin in the ssh configuration has changed from yes to without-password. This will only allow ssh to be used to authenticate root via a key.
The latest version of the RHEL 7 STIG requires that a standard login banner is presented to users when they log into the system (V-71863). The security role now deploys a login banner that is used for console and ssh sessions.
The cn_map permissions and ownership adjustments included as part of RHEL-07-040070 and RHEL-07-040080 have been removed. This STIG configuration was removed in the most recent release of the RHEL 7 STIG.
The PKI-based authentication checks for RHEL-07-040030, RHEL-07-040040, and RHEL-07-040050 are no longer included in the RHEL 7 STIG. The tasks and documentation for these outdated configurations are removed.
Bug Fixes¶
In Ubuntu the dnsmasq package actually includes init scripts and service configuration which conflict with LXC and are best not included. The actual dependent package is dnsmasq-base. The package list has been adjusted and a task added to remove the dnsmasq package and purge the related configuration files from all LXC hosts.
Based on documentation from RabbitMQ [ https://www.rabbitmq.com/which-erlang.html ] this change ensures the version of erlang we’re using across distros is consistent and supported by RabbitMQ.
MySQL cnf files can now be properly overridden. The config_template module has been extended to support valueless options, such as those found in the my.cnf file (e.g. quick under the mysqldump section). To use valueless options, use the ignore_none_type attribute of the config_template module.
Metal hosts were being inserted into the lxc_hosts group, even if they had no containers (Bug 1660996). This is now corrected for newly configured hosts. In addition, any hosts that did not belong in lxc_hosts will be removed on the next inventory run or playbook call.
The openstack service uri protocol variables were not being used to set the Trove specific uris. This resulted in ‘http’ always being used for the public, admin and internal uris even when ‘https’ was intended.
The sysctl configuration task was not skipping configurations where enabled was set to no. Instead, it was removing configurations when enabled: no was set. There is now a fix in place that ensures any sysctl configuration with enabled: no will be skipped and the configuration will be left unaltered on the system.
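To illustrate the shape of an entry the fix affects (the list variable name here is hypothetical; the actual role variable should be checked), an item like this is now left untouched rather than removed:

```yaml
# Hypothetical sysctl settings list entry: with enabled: no the task
# now skips this item instead of deleting the configuration.
sysctl_settings:
  - name: net.ipv4.ip_forward
    value: 0
    enabled: no
```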
Other Notes¶
openSUSE Leap 42.X support is still a work in progress and not fully tested beyond basic coverage in the OpenStack CI and individual manual testing. Even though backporting fixes to the Pike release will be done on a best-effort basis, it is advised to use the master branch when working on openSUSE hosts.
The inventory generation code has been switched to use standard Python packaging tools. For most, this should not be a visible change. However, running the dynamic inventory script on a local development environment should now be called via python dynamic_inventory.py.
From now on, external repo management (in use for RDO/UCA for example) will be done inside the pip-install role, not in the repo_build role.