Queens Series Release Notes¶
17.1.11¶
New Features¶
The list of enabled filters for the Cinder scheduler, scheduler_default_filters in cinder.conf, could previously be defined only via an entry in cinder_cinder_conf_overrides. You now have the option to instead define a list variable, cinder_scheduler_default_filters, that defines the enabled filters. This is helpful if you either want to disable one of the filters enabled by default (at the time of writing, these are AvailabilityZoneFilter, CapacityFilter, and CapabilitiesFilter), or if conversely you want to add a filter that is normally not enabled, such as DifferentBackendFilter or InstanceLocalityFilter.

For example, to enable the InstanceLocalityFilter in addition to the normally enabled scheduler filters, use the following variable.
  cinder_scheduler_default_filters:
    - AvailabilityZoneFilter
    - CapacityFilter
    - CapabilitiesFilter
    - InstanceLocalityFilter
17.1.9¶
Known Issues¶
The number of inotify watch instances available is limited system-wide via a sysctl setting. It is possible for certain processes, such as pypi-server or elasticsearch from the ops repo, to consume a large number of inotify watches. If the system-wide maximum is reached, no process on the host or in any container on the host will be able to create a new inotify watch. Systemd uses inotify watches, and if there are none available it is unable to restart services. The processes which synchronise the repo server contents between infra nodes also rely on inotify watches. If the repo servers fail to synchronise, or services fail to restart when expected, check the inotify watch limit, which is defined in the sysctl value fs.inotify.max_user_watches. Patches have merged to increase these limits, but existing environments, or those which have not upgraded to a recent enough point release, may have to apply an increased limit manually.
17.1.8¶
New Features¶
It is now possible to modify the NTP server options in chrony using security_ntp_server_options.

Chrony has a new configuration option, security_ntp_sync_rtc, to synchronize the system clock back to the RTC. It is disabled by default.
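For example, both options could be set together in user_variables.yml (the values shown are illustrative, not defaults):

  security_ntp_server_options: "iburst"
  security_ntp_sync_rtc: true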
Deprecation Notes¶
The following variable name changes have been implemented in order to better reflect their purpose.
lxc_host_machine_quota_disabled -> lxc_host_btrfs_quota_disabled
lxc_host_machine_qgroup_space_limit -> lxc_host_btrfs_qgroup_space_limit
lxc_host_machine_qgroup_compression_limit -> lxc_host_btrfs_qgroup_compression_limit
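A sketch of an override using the new variable names (the values shown are illustrative, not defaults):

  lxc_host_btrfs_quota_disabled: false
  lxc_host_btrfs_qgroup_space_limit: 64G
  lxc_host_btrfs_qgroup_compression_limit: 64G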
Bug Fixes¶
When using LXC containers with a copy-on-write back-end, the lxc_hosts role execution would fail due to undefined variables with the nspawn_host_ prefix. This issue has now been fixed.
17.1.7¶
Upgrade Notes¶
During an upgrade using the run-upgrade script, the neutron agents will now automatically be migrated from the neutron_agents containers onto the network_hosts. The neutron_agents containers will be deleted as they are no longer necessary. Any environments which previously upgraded to Queens can make use of the same playbooks to handle the migration, or inspect the playbooks to determine how to do it by hand if preferred.
Bug Fixes¶
With the release of CentOS 7.6, deployments were breaking and becoming very slow when dbus was restarted in order to pick up some PolicyKit changes. However, those changes were never actually used, so they were happening for no reason. We no longer make any modifications to the systemd-machined configuration and/or PolicyKit, in order to maintain upstream compatibility.
17.1.6¶
New Features¶
You can now set the Libvirt CPU model and feature flags from the appropriate entry under the nova_virt_types dictionary variable (normally kvm). nova_cpu_model is a string value that sets the CPU model; this value is ignored if you set any nova_cpu_mode other than custom. nova_cpu_model_extra_flags is a list that allows you to specify extra CPU feature flags not normally passed through with host-model, or the custom CPU model of your choice.
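As an illustration (the CPU model and flag shown are examples only, and the other keys normally present in the kvm entry are omitted for brevity), the relevant part of an override might look like:

  nova_virt_types:
    kvm:
      nova_cpu_mode: custom
      nova_cpu_model: Haswell-noTSX
      nova_cpu_model_extra_flags:
        - pcid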
Upgrade Notes¶
If your configuration previously set the libvirt/cpu_model and/or libvirt/cpu_model_extra_flags variables in a nova_nova_conf_overrides dictionary, you should consider moving those to nova_cpu_model and nova_cpu_model_extra_flags in the appropriate entry (normally kvm) in the nova_virt_types dictionary.
17.1.5¶
New Features¶
This role now optionally enables your compute nodes' KVM kernel module nested virtualization capabilities by setting nova_nested_virt_enabled to true. Depending on your distribution and libvirt version, you might need to set additional variables to fully enable nested virtualization. For details, please see https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html#nested-guest-support.
17.1.4¶
New Features¶
Horizon has, since OSA’s inception, been deployed with HTTPS access enabled, and has had no way to turn it off. Some use-cases may want to access via HTTP instead, so this patch enables the following.
Listen via HTTPS on a load balancer, but via HTTP on the horizon host and have the load balancer forward the correct headers. It will do this by default in the integrated build due to the presence of the load balancer, so the current behaviour is retained.
Enable HTTPS on the horizon host without a load balancer. This is the role’s default behaviour which matches what it always has been.
Disable HTTPS entirely by setting haproxy_ssl: no (which will also disable HTTPS on haproxy). This setting is inherited by the new horizon_enable_ssl variable by default.
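For example, to disable HTTPS end to end, a deployer might set the following in user_variables.yml (sketch only):

  haproxy_ssl: no

Because horizon_enable_ssl inherits this value by default, no separate override is needed unless the horizon hosts should be controlled independently.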
17.1.1¶
New Features¶
The os_horizon role now supports distribution of user custom themes. Deployers can use the new key theme_src_archive of the horizon_custom_themes dictionary to provide the absolute path to the archived theme. Only .tar.gz, .tgz, .zip, .tar.bz, .tar.bz2, .tbz, .tbz2 archives are supported. The structure inside the archive should be that of a standard theme, without any leading folders.
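A loose sketch of such an override (the theme entry name is hypothetical and any keys other than theme_src_archive are omitted; consult the os_horizon role defaults for the exact structure):

  horizon_custom_themes:
    my_theme:
      theme_src_archive: /opt/themes/my_theme.tar.gz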
Octavia creates VMs, security groups, and other resources in its project. In most cases the default quotas are not big enough. They are now adjusted to reasonable (and configurable) values.
Security Issues¶
Avoid setting the quotas too high for your cloud, since this can impact the performance of other services and lead to a potential Denial-of-Service attack if load balancer quotas are not set properly or RBAC is not properly set up.
Bug Fixes¶
Fixes bug https://bugs.launchpad.net/openstack-ansible/+bug/1778098 where the playbook failed if horizon_custom_themes was specified and the directory for the theme was not provided.
17.1.0¶
New Features¶
It is now possible to specify a list of tests for tempest to blacklist when executing, using the tempest_test_blacklist list variable.
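For example (the test names are illustrative):

  tempest_test_blacklist:
    - tempest.api.compute.servers.test_attach_interfaces
    - tempest.scenario.test_network_v6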
17.0.8¶
Deprecation Notes¶
The repo server's reverse proxy for pypi has now been removed, leaving only the pypiserver to serve packages already on the repo server. The attempt to reverse proxy upstream pypi turned out to be very unstable, with increased complexity for deployers using proxies or offline installs. With this, the variables repo_nginx_pypi_upstream and repo_nginx_proxy_cache_path have also been removed.
17.0.7¶
Bug Fixes¶
The conditional that determines whether the sso_callback_template.html file is deployed for federated deployments has been fixed.
17.0.6¶
New Features¶
The option rabbitmq_erlang_version_spec has been added, allowing deployers to set the version of erlang used on a given installation.
Known Issues¶
With the release of CentOS 7.5, all pike releases are broken due to a mismatch between the libvirt-python library version specified by the OpenStack community and the version provided in CentOS 7.5. As such, OSA is unable to build the appropriate python library for libvirt. The only recourse is to upgrade the environment to the latest queens release.
Deprecation Notes¶
The use of the apt_package_pinning role as a meta dependency has been removed from the rabbitmq_server role. While the package pinning role is still used, it will now only be executed when the apt task file is executed.
The variable nova_compute_pip_packages is no longer used and has been removed.
Bug Fixes¶
In order to prevent further issues with a libvirt and python-libvirt version mismatch, KVM-based compute nodes will now use the distribution package python library for libvirt. This should resolve the issue seen with pike builds on CentOS 7.5.
17.0.5¶
New Features¶
Octavia requires SSL certificates for communication with the amphora. This adds the automatic creation of self-signed certificates for this purpose. It uses different certificate authorities for the amphora and control plane, thus ensuring maximum security.
Known Issues¶
All OSA releases earlier than 17.0.5, 16.0.4, and 15.1.22 will fail to build the rally venv due to the release of the new cmd2-0.9.0 python library. Deployers are encouraged to update to the latest OSA release which pins to an appropriate version which is compatible with python2.
Recently the spice-html5 git repository was entirely moved from https://github.com/SPICE/spice-html5 to https://gitlab.freedesktop.org/spice/spice-html5. This results in a failure in the git clone stage of the repo-build.yml playbook for OSA queens releases earlier than 17.0.5. To fix the issue, deployers may upgrade to the most recent release, or may implement the following override in user_variables.yml.

  nova_spicehtml5_git_repo: https://gitlab.freedesktop.org/spice/spice-html5.git
Upgrade Notes¶
The distribution package lookup and data output has been removed from the py_pkgs lookup so that the repo-build use of py_pkgs has reduced output and the lookup is purpose specific for python packages only.
Security Issues¶
It is recommended that the certificate generation is always reviewed by security professionals since algorithms and key-lengths considered secure change all the time.
Bug Fixes¶
Newer releases of CentOS ship a version of libnss that depends on the existence of /dev/random and /dev/urandom in the operating system in order to run. This causes a problem during the cache preparation process, which runs inside a chroot that does not contain these devices, resulting in errors with the following message:

  error: Failed to initialise NSS library

This has been resolved by introducing a /dev/random and /dev/urandom inside the chroot-ed environment.
17.0.4¶
Known Issues¶
In the lxc_hosts role execution, we make use of the images produced on a daily basis by images.linuxcontainers.org. Recent changes in the way those images are produced have resulted in changes to the default /etc/resolv.conf in that default image. As such, the cache preparation fails when executed. For queens releases prior to 17.0.4 the workaround to get past the error is to add the following to the /etc/openstack_deploy/user_variables.yml file.

  lxc_cache_prep_pre_commands: "rm -f /etc/resolv.conf || true"
  lxc_cache_prep_post_commands: "ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf -f"
17.0.3¶
New Features¶
When venvwithindex=True and ignorerequirements=True are both specified in tempest_git_install_fragments (as was previously the default), this results in tempest being installed from PyPI without any constraints being applied. This could result in the version of tempest being installed in the integrated build being different than the version being installed in the independent role tests. Going forward, we remove the tempest_git_* overrides in playbooks/defaults/repo_packages/openstack_testing.yml so that the integrated build installs tempest from PyPI, but with appropriate constraints applied.
This consolidates the amphora image tasks in a common file and adds a way to download an amphora image from artefact storage over http(s). Since the Octavia team now provides test images, the tests were modified to download images rather than build them.
Security Issues¶
It is commonly considered bad practice to download random images from the Internet, especially the test images the Octavia team provides, which could potentially include unpatched operating system packages. For any production deployment, adjust the download URL to an artifact storage your organization controls. The system also does not authenticate the image (e.g. with an md5 checksum), so it should only be used on networks your organization controls.
Other Notes¶
The internal variable python_ceph_package has been renamed to python_ceph_packages and is now a list instead of a string. If you are using gnocchi with ceph and are using this internal variable in your ceph_extra_components overrides, please update it to python_ceph_packages.
17.0.2¶
New Features¶
Adds support for the horizon octavia-ui dashboard. The dashboard will be automatically enabled if any octavia hosts are defined. If both Neutron LBaaSv2 and Octavia are enabled, two Load Balancer panels will be visible in Horizon.
Added the ability to configure vendor data for Nova in order to be able to push things via the metadata service or config drive.
Enable the networking-bgpvpn ml2 neutron driver to make the OpenDaylight SDN Controller support BGPVPN for external network connectivity. You can set neutron_plugin_type to ml2.opendaylight and neutron_plugin_base to odl-router_v2 and bgpvpn to enable BGPVPN on OpenDaylight.
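A sketch of the corresponding overrides (the list form of neutron_plugin_base is assumed here):

  neutron_plugin_type: ml2.opendaylight
  neutron_plugin_base:
    - odl-router_v2
    - bgpvpn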
The default variable nova_default_schedule_zone was previously set to nova. This default has been removed to allow the default to be set by the nova code instead. Deployers wishing to maintain the default availability zone of nova must now set the variable as a user_variables.yml or group_vars override.
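For example, to retain the previous behaviour, set the following in user_variables.yml:

  nova_default_schedule_zone: nova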
Upgrade Notes¶
When upgrading from pike to queens there are the following changes to the container/service setup.

All cinder container services are consolidated into a single cinder_api_container. The previously implemented cinder_scheduler_container can be removed.

A new heat_api container is created with all heat services running in it. The previously implemented heat_apis_container and heat_engine_container can be removed.

The ironic conductor service has been consolidated into the ironic_api_container. The previously implemented ironic_conductor_container can be removed.

All nova services are consolidated into the nova_api_container and the rest of the nova containers can be removed.

All trove services have been consolidated into the trove_api_container. The previously implemented trove_conductor_container and trove_taskmanager_container can be removed.
Playbooks have been added to facilitate this process through automation. Please see the Major upgrades chapter in the Operations Guide.
17.0.1¶
Upgrade Notes¶
Users should purge the ‘ntp’ package from their hosts if ceph-ansible is enabled. ceph-ansible previously was configured to install ntp by default which conflicts with the OSA ansible-hardening role chrony service.
Bug Fixes¶
ceph-ansible is no longer configured to install NTP by default, which creates a conflict with OSA’s ansible-hardening role that is used to implement NTP using ‘chrony’.
17.0.0¶
New Features¶
A new variable has been added to allow a deployer to control the restart of containers from common-tasks/os-lxc-container-setup.yml. This new option is lxc_container_allow_restarts and has a default of true. If a deployer wishes to disable the auto-restart functionality they can set this value to false and automatic container restarts will be disabled. This is a complement to the same option already present in the lxc_container_create role. This option is useful to avoid uncoordinated restarts of galera or rabbitmq containers if the LXC container configuration changes in a way that requires a restart.
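For example, to disable automatic container restarts:

  lxc_container_allow_restarts: false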
OpenStack-Ansible now supports the openSUSE Leap 42.X distributions mainly targeting the latest 42.3 release.
The Ceph stable release used by openstack-ansible and its ceph-ansible integration has been changed to the recent Ceph LTS Luminous release.
The galera cluster now supports cluster health checks over HTTP using port 9200. The new cluster check ensures a node is healthy by running a simple query against the wsrep sync status using a monitoring user. This change provides a more robust cluster check, ensuring we have the most fault tolerant galera cluster possible.

A typical OSA install will put the neutron and octavia queues on different vhosts, thus preventing the event streamer from working. While octavia is streaming to its own queue, the consumer on the neutron side listens to the neutron queue. With a recent octavia enhancement a separate queue for the event streamer can be configured. This patch sets up the event streamer to post into the neutron queue using neutron's credentials, thus reaching the consumer on the neutron-lbaas side and allowing for streaming.
Generating and validating checksums for all files installed by packages is now disabled by default. The check causes delays in playbook runs and it can consume a significant amount of CPU and I/O resources. Deployers can re-enable the check by setting security_check_package_checksums to yes.
Deployers of CentOS 7 environments can use the openstack_hosts_enable_yum_fastestmirror variable to enable or disable yum's fastestmirror plugin. The default setting of yes ensures that fastestmirror is enabled.
New hypervisor groups have been added, allowing deployers to better define their compute workloads. While the generic "compute_hosts" group will still work, explicit definitions for compute hosts can now be made using the ironic-compute_hosts, kvm-compute_hosts, lxd-compute_hosts, qemu-compute_hosts, and powervm-compute_hosts groups accordingly.
An option has been added allowing the user to define the user_group LBaaSv2 uses. The new option is neutron_lbaasv2_user_group and is set to an OS-specific value by default.
The maximum amount of time to wait until forcibly failing the LXC cache preparation process is now configurable using the lxc_cache_prep_timeout variable. The value is specified in seconds, with the default being 20 minutes.
A new variable has been added which allows deployers to set the container technology OSA will use when running a deployment in containers. This new variable is container_tech, which has a default value of "lxc".
The lxcbr0 bridge now allows NetworkManager to control it, which allows for networks to start in the correct order when the system boots. In addition, the NetworkManager-wait-online.service is enabled to ensure that all services that require networking to function, such as keepalived, will only start when network configuration is complete. These changes are only applied if a deployer is actively using NetworkManager in their environment.
Neutron connectivity agents will now be deployed on baremetal within the "network_hosts" defined within the openstack_user_config.yml.
The Galera healthcheck has been improved and now relies on an xinetd service. By default, the service is inaccessible (filtered with the no_access directive). You can override the directive by setting any valid xinetd value in galera_monitoring_allowed_source.
HAProxy services that use backend nodes that are not in the Ansible inventory can now be specified manually by setting haproxy_backend_nodes to a list of name and ip_addr settings.
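A sketch of such a setting within the relevant haproxy service definition (the names and addresses are illustrative):

  haproxy_backend_nodes:
    - name: external-node01
      ip_addr: 192.0.2.10
    - name: external-node02
      ip_addr: 192.0.2.11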
Open vSwitch dataplane with NSH support has been implemented. This feature may be activated by setting ovs_nsh_support: True in /etc/openstack_deploy/user_variables.yml.
A new variable, tempest_roles, has been added to the os_tempest role, allowing users to define keystone roles to be used during tempest testing.
The security_sshd_permit_root_login setting can now be set to change the PermitRootLogin setting in /etc/ssh/sshd_config to any of the possible options. Set security_sshd_permit_root_login to one of without-password, prohibit-password, forced-commands-only, yes or no.
Persistent systemd journals are now enabled. This allows deployers to keep older systemd journals on disk for review. The disk space requirements are extremely low since the journals are stored in binary format. The default location for persistent journals is /var/log/journal. Deployers can opt out of this change by setting openstack_host_keep_journals to no.
The extra percona packages used by ppc64le hosts are now downloaded to the Ansible deployment host by default, as opposed to the target hosts. Once downloaded, the packages are pushed up to the target hosts. This behaviour may be adjusted by setting galera_server_extra_package_downloader to target-host. The packages are downloaded to the path set in galera_server_extra_package_path.
The repo server now implements nginx as a reverse proxy for python packages sourced from pypi. The initial query will be to a local deployment of pypiserver in order to serve any locally built packages, but if the package is not available locally it will retry the query against the upstream pypi mirror set in the variable repo_nginx_pypi_upstream (defaults to pypi) and cache the response.
Deployers can set a refresh interval for haproxy's stats page by setting the haproxy_stats_refresh_interval variable. The default value is 60, which causes haproxy to refresh the stats page every 60 seconds.
The tempest_images data structure for the os_tempest role now expects the values for each image to include name (optionally) and format (the disk format). Also, the optional variable checksum may be used to set the checksum expected for the file in the format <algorithm>:<checksum>.
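A sketch of the expected structure (the url key and all values shown are illustrative; name, format, and checksum are the keys described above):

  tempest_images:
    - url: "http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img"
      name: cirros
      format: qcow2
      checksum: "sha256:<expected checksum>"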
The default location for the image downloads in the os_tempest role, set by the tempest_image_dir variable, has now been changed to /opt/cache/files in order to match the default location in nodepool. This improves the reliability of CI testing in OpenStack CI as it will find the file already cached there.
A new variable has been introduced into the os_tempest role named tempest_image_downloader. When set to deployment-host (which is the default) it uses the deployment host to handle the download of images to be used for tempest testing. The images are then uploaded to the target host for uploading into Glance.
The tasks within the ansible-hardening role are now based on Version 1, Release 3 of the Red Hat Enterprise Linux Security Technical Implementation Guide.
The sysctl parameter kernel.randomize_va_space is now set to 2 by default. This matches the default of most modern Linux distributions and ensures that Address Space Layout Randomization (ASLR) is enabled.
The Datagram Congestion Control Protocol (DCCP) kernel module is now disabled by default, but a reboot is required to make the change effective.
An option to disable the machinectl quota system has been added. The variable lxc_host_machine_quota_disabled is a Boolean with a default of false. When this option is set to true it will disable the machinectl quota system.
The options lxc_host_machine_qgroup_space_limit and lxc_host_machine_qgroup_compression_limit have been added, allowing a deployer to set qgroup limits as they see fit. The default value for these options is "none", which is effectively unlimited. These options accept any nominal size value followed by the single letter type, for example 64G. These options are only effective when the option lxc_host_machine_quota_disabled is set to false.
Enable Kernel Shared Memory support by setting nova_compute_ksm_enabled to True.
When using Glance and NFS, the NFS mount point will now be managed using a systemd mount unit file. This change ensures the deployment of glance is not making potentially system impacting changes to /etc/fstab and modernizes how we deploy glance when using shared storage.
New variables have been added to the glance role allowing a deployer to set the UID and GID of the glance user. The new options are glance_system_user_uid and glance_system_group_uid. These options are useful when deploying glance with shared storage as the back-end for images and will only set the UID and GID of the glance user when defined.
Searching for world-writable files is now disabled by default. The search causes delays in playbook runs and it can consume a significant amount of CPU and I/O resources. Deployers can re-enable the search by setting security_find_world_writable_dirs to yes.
Known Issues¶
Ceph storage backend is known not to work on openSUSE Leap 42.X yet. This is due to missing openSUSE support in the upstream Ceph Ansible playbooks.
Upgrade Notes¶
The ceph-ansible integration has been updated to support the ceph-ansible v3.0 series tags. The new v3.0 series brings a significant refactoring of the ceph-ansible roles and vars, so it is strongly recommended to consult the upstream ceph-ansible documentation to perform any required vars migrations before you upgrade.
The ceph-ansible common roles are no longer namespaced with a galaxy-style '.' (i.e. ceph.ceph-common is now cloned as ceph-common), due to a change in the way upstream meta dependencies are handled in the ceph roles. The roles will be cloned according to the new naming, and an upgrade playbook ceph-galaxy-removal.yml has been added to clean up the stale galaxy-named roles.
The Ceph stable release used by openstack-ansible and its ceph-ansible integration has been changed to the recent Ceph LTS Luminous release.
KSM configuration is changed to disabled by default on Ubuntu. If you overcommit the RAM on your hypervisor, it is a good idea to set nova_compute_ksm_enabled to True.
The glance v1 API is now disabled by default as the API is scheduled to be removed in Queens.
The glance registry service is now disabled by default as it is not required for the v2 API and is scheduled to be removed in the future. The service can be enabled by setting glance_enable_v2_registry to True.
When upgrading, there is nothing a deployer must immediately do to run neutron agent services on hosts within the network_hosts group. Simply executing the playbooks will deploy the neutron servers on the baremetal machines and will leave all existing agent containers alone.
It is recommended for deployers to clean up the neutron_agents container(s) after an upgrade is complete and the cluster has been verified as stable. This can be done by simply disabling the neutron agents running in the neutron_agent container(s), re-balancing the agent services targeting the new baremetal agents, deleting the container(s), and finally removing the container(s) from inventory.
Default quotas were bumped for the following resources: networks (from 10 to 100), subnets (from 10 to 100), ports (from 50 to 500) to match upstream defaults.
Any tooling using the Designate v1 API needs to be reworked to use the v2 API.
If you have overridden your openstack_host_specific_kernel_modules, please remove its group matching, and move that override directly to the appropriate group. For example, for an override like:

  - name: "ebtables"
    pattern: "CONFIG_BRIDGE_NF_EBTABLES"
    group: "network_hosts"

you can create a file for the network_hosts group, inside its group vars folder /etc/openstack_deploy/group_vars/network_hosts, with the content:

  - name: "ebtables"
    pattern: "CONFIG_BRIDGE_NF_EBTABLES"
Any user that is coming from Pike or below on Ubuntu should modify their user_external_repos_list, switching the ubuntu cloud archive repository from state: present to state: absent. From now on, UCA will be defined with the filename uca. Deployers who want to use their own mirror can still override the variable uca_repo to point to that mirror. Alternatively, deployers can completely define which repos to add and remove, ignoring our defaults, by overriding openstack_hosts_package_repos.
Deprecation Notes¶
The galera_percona_xtrabackup_repo_url variable, which was used on Ubuntu distributions to select the upstream Percona repository, has been dropped and the default upstream repository is always used from now on.
The variables keystone_memcached_servers and keystone_cache_backend_argument have been deprecated in favor of keystone_cache_servers, a list of servers for caching purposes.
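A sketch of the replacement variable (the addresses are illustrative):

  keystone_cache_servers:
    - 172.29.236.100:11211
    - 172.29.236.101:11211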
In OSA deployments prior to Queens, if repo_git_cache_dir was set to a folder which existed on a repo container host, then that folder would be symlinked to the repo container bind mount instead of synchronising its contents to the repo container. This functionality is deprecated in Queens and will be removed in Rocky. The ability to make use of the git cache still exists, but the folder contents will be synchronised from the deploy host to the repo container. If you have made use of the symlink functionality previously, please move the contents to a standard folder and remove the symlink.
The Ceilometer API is no longer available in the Queens release of OpenStack; this patch removes all references to API-related configurations as they are no longer needed.
The galera_client_opensuse_mirror_obs_url variable has been removed since the OBS repository is no longer used to install the MariaDB packages.
The glance_enable_v1_registry variable has been removed. When using the glance v1 API the registry service is required, so having a variable to disable it makes little sense. The service is now enabled/disabled for the v1 API using the glance_enable_v1_api variable.
The nova_placement database which was implemented in the ocata release of OpenStack-Ansible was never actually used for anything due to reverts in the upstream code. The database should be empty and can be deleted. With this the following variables also no longer have any function and have been removed.
nova_placement_galera_user
nova_placement_galera_database
nova_placement_db_max_overflow
nova_placement_db_max_pool_size
nova_placement_db_pool_timeout
The following variables have been removed as they no longer serve any purpose.
galera_package_arch
percona_package_download_validate_certs
percona_package_url
percona_package_fallback_url
percona_package_sha256
percona_package_path
qpress_package_download_validate_certs
qpress_package_url
qpress_package_fallback_url
qpress_package_sha256
qpress_package_path
The functionality previously using these variables has been transitioned to using a simpler data structure.
The following variables have been removed from the os_tempest role to simplify it. They have been replaced through the use of the data structure tempest_images, which now has equivalent variables per image.
cirros_version
tempest_img_url
tempest_image_file
tempest_img_disk_format
tempest_img_name
tempest_images.sha256 (replaced by checksum)
Critical Issues¶
The ceph-ansible integration has been updated to support the ceph-ansible v3.0 series tags. The new v3.0 series brings a significant refactoring of the ceph-ansible roles and vars, so it is strongly recommended to consult the upstream ceph-ansible documentation to perform any required vars migrations before you upgrade.
The Designate V1 API has been removed, and cannot be enabled.
Security Issues¶
The PermitRootLogin setting in sshd_config changed from 'yes' to 'prohibit-password' in the containers. By default there is no password set in the containers, but the ssh public key from the deployment host is injected into the target nodes' authorized_keys.
The following headers were added as additional default (and static) values: X-Content-Type-Options nosniff, X-XSS-Protection "1; mode=block", and Content-Security-Policy "default-src 'self' https: wss:;". Additionally, the X-Frame-Options header was added, defaulting to DENY. You may override the header via the keystone_x_frame_options variable.
Since we use neutron’s credentials to access the queue, security conscious people might want to set up an extra user for octavia on the neutron queue restricted to the topics octavia posts to.
Bug Fixes¶
When the glance_enable_v2_registry variable is set to True, the corresponding data_api setting is now correctly set. Previously it was not set, and therefore the API service was not correctly informed that the registry was operating.
The os_tempest role was downloading images twice - once arbitrarily, and once to use for testing. This has been consolidated into a single download to a consistent location.
SELinux policy for neutron on CentOS 7 is now provided to fix SELinux AVCs that occur when neutron’s agents attempt to start daemons such as haproxy and dnsmasq.
Other Notes¶
openSUSE Leap 42.X support is still a work in progress and not fully tested beyond basic coverage in the OpenStack CI and individual manual testing. Even though backporting fixes to the Pike release will be done on a best-effort basis, it is advised to use the master branch when working on openSUSE hosts.
CentOS deployments require a special COPR repository for modern LXC packages. The COPR repository is not mirrored at this time and this causes failed gate tests and production deployments.
The role now syncs the LXC packages down from COPR to each host and builds a local LXC package repository in /opt/thm-lxc2.0. This greatly reduces the number of times that packages must be downloaded from the COPR server during deployments, which will reduce failures until the packages can be hosted with a more reliable source.
In addition, this should speed up playbook runs since yum can check a locally-hosted repository instead of a remote repository with availability and performance challenges.
Added support for specifying the GID and UID of the cinder system user by defining cinder_system_user_uid and cinder_system_group_gid. This setting is optional.
The variables nova_scheduler_use_baremetal_filters and nova_metadata_host have been removed, matching upstream nova changes. The nova_virt_types dict no longer needs the nova_scheduler_use_baremetal_filters and nova_firewall_driver keys as well.
The max_fail_percentage playbook option has been used with the default playbooks since the first release of the playbooks back in Icehouse. While the intention was to allow large-scale deployments to succeed in cases where a single node fails due to transient issues, this option has produced more problems than it solves. If a failure occurs that is transient in nature but is under the set failure percentage, the playbook will report a success, which can cause silent failures depending on where the failure happened. If a deployer finds themselves in this situation, the problems are then compounded because the tools will report there are no known issues. To ensure deployers have the best deployment experience and the most accurate information, a change has been made to remove the max_fail_percentage option from all of the default playbooks. The removal of this option has the side effect of requiring the deployer to skip specific hosts should one need to be omitted from a run, but has the benefit of eliminating silent, hard to track down failures. To skip a failing host for a given playbook run, use the --limit '!$HOSTNAME' CLI switch for the specific run. Once the issues have been resolved for the failing host, rerun the specific playbook without the --limit option to ensure everything is in sync.
The use_neutron option was marked to be removed in sahara.
The vars plugin override_folder.py has been removed. With the move to Ansible 2.4 [https://review.openstack.org/#/c/522778] this plugin is no longer required. The functionality this plugin provided has been replaced with the native Ansible inventory plugin.