Train Series Release Notes
20.6.1-41
Security Issues
A vulnerability in the console proxies (novnc, serial, spice) that allowed open redirection has been patched. The novnc, serial, and spice console proxies are implemented as websockify servers and the request handler inherits from the Python standard library SimpleHTTPRequestHandler. There is a known issue in the SimpleHTTPRequestHandler which allows open redirects by way of URLs in the following format:
http://vncproxy.my.domain.com//example.com/%2F..
which, if visited, will redirect a user to example.com.
The novnc, serial, and spice console proxies will now reject requests that pass a redirection URL beginning with “//” with a 400 Bad Request.
Bug Fixes
Improved detection of anti-affinity policy violations when performing live and cold migrations. Most of the violations caused by race conditions between concurrent live or cold migrations should now be addressed by extra checks in the compute service. Upon detection, cold migration operations are automatically rescheduled. Live migrations have two checks: if the violation is caught by the first check, the migration is rescheduled; if it is only caught by the second, the live migration fails cleanly and the instance state is reverted to its previous value.
Fixes slow compute restart when using the nova.virt.ironic compute driver, where the driver was previously attempting to attach VIFs on start-up via the plug_vifs driver method. This method has grown otherwise unused since the introduction of the attach_interface method of attaching VIFs, and Ironic itself manages the attachment of VIFs to baremetal nodes in order to align with the security requirements of a physical baremetal node’s lifecycle. The ironic driver now ignores calls to the plug_vifs method.
If the compute service is down on the source node and the user tries to stop an instance, the instance gets stuck at powering-off, and the subsequent evacuation fails with the message: Cannot ‘evacuate’ instance <instance-id> while it is in task_state powering-off. It is now possible for evacuation to ignore the VM task state. For more details see bug 1978983.
Minimizes a race condition window when using the ironic virt driver where the data generated for the Resource Tracker may attempt to compare potentially stale instance information with the latest known baremetal node information. While this doesn’t completely prevent nor resolve the underlying race condition identified in bug 1841481, this change allows Nova to have the latest state information, as opposed to state information which may be out of date due to the time it may take to retrieve the status from Ironic. This issue was most observable on baremetal clusters with several thousand physical nodes.
In the Rocky (18.0.0) release, support was added to nova to use neutron’s multiple port binding feature when the binding-extended API extension is available. In the Train (20.0.0) release, the SR-IOV live migration feature broke the semantics of the vifs field in the migration_data object that signals whether the new multiple port binding workflow should be used, by always populating it even when the binding-extended API extension is not present. This broke live migration for any deployment that did not support the optional binding-extended API extension. The Rocky behavior has now been restored, enabling live migration using the single port binding workflow when multiple port bindings are not available.
Other Notes
Nova now has a config option called [workarounds]/never_download_image_if_on_rbd which helps to avoid pathological storage behavior with multiple ceph clusters. Currently, Nova does not support multiple ceph clusters properly, but Glance can be configured with them. If an instance is booted from an image residing in a ceph cluster other than the one Nova knows about, Nova will silently download it from Glance and re-upload the image to the local ceph cluster privately for that instance. Unlike the behavior you expect when configuring Nova and Glance for ceph, Nova will continue to do this over and over for the same image when subsequent instances are booted, consuming a large amount of storage unexpectedly. The new workaround option will cause Nova to refuse to do this download/upload behavior and instead fail the instance boot. It is simply a stop-gap effort to keep unsupported deployments with multiple ceph clusters from silently consuming large amounts of disk space.
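A minimal nova.conf sketch for opting in to the new workaround (the option is assumed to default to False, so nothing changes unless it is set):
[workarounds]
never_download_image_if_on_rbd = True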
20.6.1
Bug Fixes
Add support for the hw:hide_hypervisor_id extra spec. This is an alias for the hide_hypervisor_id extra spec, which was not compatible with the AggregateInstanceExtraSpecsFilter scheduler filter. See bug 1841932 for more details.
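For example, the aliased extra spec can be applied to a flavor in the usual way (the flavor ID is a placeholder):
openstack flavor set --property hw:hide_hypervisor_id=true <flavor-id>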
20.5.0
Upgrade Notes
The default for [glance] num_retries has changed from 0 to 3. The option controls how many times to retry a Glance API call in response to a HTTP connection failure. When deploying Glance behind HAProxy it is possible for a response to arrive just after the HAProxy idle time. As a result, an exception will be raised when the connection is closed, resulting in a failed request. By increasing the default value, Nova can be more resilient to this scenario where HAProxy is misconfigured, by retrying the request.
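No action is needed to pick up the new default; a deployment that wants a different retry count can still set it explicitly in nova.conf, for example:
[glance]
num_retries = 3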
Bug Fixes
Fixes bug 1892361, in which the PCI stat pools are not updated when an existing device is enabled with SR-IOV capability. A restart of the nova-compute service updates the PCI device type from type-PCI to type-PF, but the pools still keep the device type as type-PCI, and so the PF is considered for allocation to instances that request vnic_type=direct. With this fix, PCI device type updates are detected and the PCI stat pools are updated properly.
Other Notes
The nova libvirt virt driver supports creating instances with multi-queue virtio network interfaces. In previous releases nova based the maximum number of virtio queue pairs that can be allocated on the reported kernel major version. It has been reported in bug #1847367 that some distros have backported changes from later major versions that make the major version number no longer suitable to determine the maximum virtio queue pair count. A new config option has been added to the libvirt section of nova.conf. When defined, nova will now use the [libvirt]/max_queues option to define the maximum number of queues that can be configured; if undefined it will fall back to the previous kernel version approach.
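A nova.conf sketch pinning the queue limit explicitly (the value shown is illustrative; when the option is unset the kernel version heuristic is used):
[libvirt]
max_queues = 4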
20.4.1
Bug Fixes
Since Libvirt v.1.12.0 and the introduction of the libvirt issue, setting a cache mode whose write semantic is not O_DIRECT (i.e. “unsafe”, “writeback” or “writethrough”) causes a problem with the volume drivers (i.e. LibvirtISCSIVolumeDriver, LibvirtNFSVolumeDriver and so on) which designate native io explicitly.
When the driver_cache (default is none) has been configured as neither “none” nor “directsync”, the libvirt driver will now ensure that driver_io is set to “threads” to avoid an instance spawning failure.
An issue that could result in instances with the isolate thread policy (hw:cpu_thread_policy=isolate) being scheduled to hosts with SMT (HyperThreading) and consuming VCPU instead of PCPU has been resolved. See bug #1889633 for more information.
Addressed an issue that prevented instances using the multiqueue feature from being created successfully when their vif_type is TAP.
Resolved an issue whereby providing an empty list for the policies field in the request body of the POST /os-server-groups API would result in a server error. This only affects the 2.1 to 2.63 microversions, as the 2.64 microversion replaces the policies list field with a policy string field. See bug #1894966 for more information.
20.4.0
Bug Fixes
Previously, attempting to configure an instance with the e1000e or legacy VirtualE1000e VIF types on a host using the QEMU/KVM driver would result in an incorrect UnsupportedHardware exception. These interfaces are now correctly marked as supported.
20.3.0
Bug Fixes
A new [workarounds]/reserve_disk_resource_for_image_cache config option was added to fix bug 1878024, where the images in the compute image cache overallocate the local disk. If this new config option is set then the libvirt driver will reserve DISK_GB resources in placement based on the actual disk usage of the image cache.
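Operators affected by the image cache overallocation can opt in via nova.conf, for example:
[workarounds]
reserve_disk_resource_for_image_cache = True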
20.2.0
Bug Fixes
This release contains a fix for a regression introduced in 15.0.0 (Ocata) where a server create failing during scheduling would not result in an instance action record being created in the cell0 database. Now when creating a server fails during scheduling and is “buried” in cell0, a create action will be created with an event named conductor_schedule_and_build_instances.
This release contains a fix for bug 1856925 such that resize and migrate server actions will be rejected with a 409 HTTPConflict response if the source compute service is down.
The Compute service has never supported direct booting of an instance from an image that was created by the Block Storage service from an encrypted volume. Previously, this operation would result in an ACTIVE instance that was unusable. Beginning with this release, an attempt to boot from such an image will result in the Compute API returning a 400 (Bad Request) response.
A new config option [neutron]http_retries is added which defaults to 3. It controls how many times to retry a Neutron API call in response to a HTTP connection failure. An example scenario where it will help is when a deployment is using HAProxy and connections get closed after idle time. If an incoming request tries to re-use a connection that is simultaneously being torn down, a HTTP connection failure will occur and previously Nova would fail the entire request. With retries, Nova can be more resilient in this scenario and continue the request if a retry succeeds. Refer to https://launchpad.net/bugs/1866937 for more details.
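The value can be tuned in nova.conf if the default of 3 does not fit the deployment, for example (the value shown is illustrative):
[neutron]
http_retries = 5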
20.1.1
Upgrade Notes
Upgrading to Train on a deployment with a large database may hit bug 1862205, which results in instance records left in a bad state, and manifests as instances not being shown in list operations. Users upgrading to Train for the first time will definitely want to apply a version which includes this fix. Users already on Train should upgrade to a version including this fix to ensure the problem is addressed.
Bug Fixes
A fix for serious bug 1862205 is provided which addresses both the performance aspect of schema migration 399, as well as the potential fallout for cases where this migration silently fails and leaves large numbers of instances hidden from view from the API.
20.1.0
Bug Fixes
Bug 1845986 has been fixed by adding the iommu driver when the following metadata options are used with AMD SEV:
hw_scsi_model=virtio-scsi and either hw_disk_bus=scsi or hw_cdrom_bus=scsi
hw_video_model=virtio
Also a virtio-serial controller is created when the hw_qemu_guest_agent=yes option is used, together with an iommu driver for it.
The DELETE /os-services/{service_id} compute API will now return a 409 HTTPConflict response when trying to delete a nova-compute service which is involved in in-progress migrations. This is because doing so would not only orphan the compute node resource provider in the placement service on which those instances have resource allocations but can also break the ability to confirm/revert a pending resize properly. See https://bugs.launchpad.net/nova/+bug/1852610 for more details.
An instance can be rebuilt in-place with the original image or a new image. Instance resource usage cannot be altered during a rebuild. Previously Nova would have ignored the NUMA topology of the new image continuing to use the NUMA topology of the existing instance until a move operation was performed. As Nova did not explicitly guard against inadvertent changes to resource requests contained in a new image, it was possible to rebuild with an image that would violate this requirement; see bug #1763766 for details. This resulted in an inconsistent state as the instance that was running did not match the instance that was requested. Nova now explicitly checks if a rebuild would alter the requested NUMA topology of an instance and rejects the rebuild if so.
With the changes introduced to address bug #1763766, Nova now guards against NUMA constraint changes on rebuild. As a result the NUMATopologyFilter is no longer required to run on rebuild since we already know the topology will not change and therefore the existing resource claim is still valid. As such it is now possible to do an in-place rebuild of an instance with a NUMA topology even if the image changes, provided the new image does not alter the topology, which addresses bug #1804502.
20.0.0
Prelude
The 20.0.0 release includes many new features and bug fixes. Please be sure to read the upgrade section which describes the required actions to upgrade your cloud from 19.0.0 (Stein) to 20.0.0 (Train).
There are a few major changes worth mentioning. This is not an exhaustive list:
The latest Compute API microversion supported for Train is v2.79. Details on REST API microversions added since the 19.0.0 Stein release can be found in the REST API Version History page.
Live migration support for servers with a NUMA topology, pinned CPUs and/or huge pages, when using the libvirt compute driver.
Live migration support for servers with SR-IOV ports attached when using the libvirt compute driver.
Support for cold migrating and resizing servers with bandwidth-aware Quality of Service ports attached.
Improvements to the scheduler for more intelligently filtering results from the Placement service.
Improved multi-cell resilience with the ability to count quota usage using the Placement service and API database.
A new framework supporting hardware-based encryption of guest memory to protect users against attackers or rogue administrators snooping on their workloads when using the libvirt compute driver. Currently this only has basic support for AMD SEV (Secure Encrypted Virtualization).
Improved operational tooling for things like archiving the database and healing instance resource allocations in Placement.
Improved coordination with the baremetal service during external node power cycles.
Support for VPMEM (Virtual Persistent Memory) when using the libvirt compute driver. This provides data persistence across power cycles at a lower cost and with much larger capacities than DRAM, especially benefitting HPC and memory databases such as redis, rocksdb, oracle, SAP HANA, and Aerospike.
It is now possible to place CPU pinned and unpinned servers on the same compute host when using the libvirt compute driver. See the admin guide for details.
Nova no longer includes Placement code. You must use the extracted Placement service. See the Placement extraction upgrade instructions for details.
The XenAPI virt driver is now deprecated and may be removed in a future release as its quality can not be ensured due to lack of maintainers.
The nova-consoleauth service has been removed, as it had been deprecated since the 18.0.0 (Rocky) release.
The deprecated Cells V1 feature (not to be confused with Cells V2) has been removed.
New Features
API microversion 2.74 adds support for specifying optional host and/or hypervisor_hostname parameters in the request body of POST /servers. These request a specific destination host/node to boot the requested server. These parameters are mutually exclusive with the special availability_zone format of zone:host:node. Unlike zone:host:node, the host and/or hypervisor_hostname parameters still allow scheduler filters to be run; if the requested host/node is unavailable or otherwise unsuitable, an earlier failure will be raised. There is also a new policy named compute:servers:create:requested_destination. By default, a requested destination can be specified by administrators only.
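A request sketch exercising the new parameter via curl; the endpoint, token and IDs are placeholders, and hypervisor_hostname can be supplied in the same way:
curl -X POST "$COMPUTE_ENDPOINT/servers" \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -H "X-OpenStack-Nova-API-Version: 2.74" \
  -d '{"server": {"name": "demo", "imageRef": "<image-uuid>", "flavorRef": "<flavor-id>", "networks": "auto", "host": "compute-01"}}'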
Microversion 2.78 adds a new topology sub-resource to the servers API: GET /servers/{server_id}/topology
This API provides information about the NUMA topology of a server, including instance-to-host CPU pin mappings (if CPU pinning is used) and pagesize information.
The information exposed by this API is admin-or-owner only by default, controlled by the rule compute:server:topology:index.
The following fine-grained policy is used to restrict host-only information to admins: compute:server:topology:host:index
The libvirt driver now supports booting instances with virtual persistent memory (vPMEM), also called persistent memory (PMEM) namespaces. To enable vPMEM support, the user should specify the PMEM namespaces in nova.conf by using the configuration option [libvirt]/pmem_namespaces. For example:
[libvirt]
# pmem_namespaces=$LABEL:$NSNAME[|$NSNAME][,$LABEL:$NSNAME[|$NSNAME]]
pmem_namespaces = 128G:ns0|ns1|ns2|ns3,262144MB:ns4|ns5,MEDIUM:ns6|ns7
Only PMEM namespaces listed in the configuration file can be used by instances. To identify the available PMEM namespaces on the host or create new namespaces, the ndctl utility can be used:
ndctl create-namespace -m devdax -s $SIZE -M mem -n $NSNAME
Nova will invoke this utility to identify available PMEM namespaces. Then users can specify vPMEM resources in a flavor by adding flavor’s extra specs:
openstack flavor set --property hw:pmem=6GB,64GB <flavor-id>
When enable_dhcp is set on a subnet but there is no DHCP port in neutron, the dhcp_server value in the meta hash will contain the subnet gateway IP instead of being absent.
Multiple API cleanups are included in API microversion 2.75:
Return a 400 error for unknown parameters in the query string or the request body.
Make the server representation always consistent among the GET, PUT and rebuild server API responses. The PUT /servers/{server_id} and POST /servers/{server_id}/action {rebuild} API responses are modified to add all the missing fields which are returned by GET /servers/{server_id}.
Change the default return value of the swap field from the empty string to 0 (integer) in the flavor APIs.
Always return the servers field in the response of the GET hypervisors API, even when there are no servers on the hypervisor.
Support for archiving deleted rows from the database across all cells has been added to the nova-manage db archive_deleted_rows command. Specify the --all-cells option to run the process across all existing cells. It is only possible to archive all DBs from a node where the [api_database]/connection option is configured.
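For example, to archive across every cell in one pass (the extra flags shown are the command's pre-existing options):
nova-manage db archive_deleted_rows --all-cells --until-complete --verbose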
Added a new locked_reason option in microversion 2.73 to the POST /servers/{server_id}/action request where the action is lock. It enables the user to specify a reason when locking a server. This information will be exposed through the responses of the following APIs:
GET /servers/{server_id}
GET /servers/detail
POST /servers/{server_id}/action where the action is rebuild
PUT /servers/{server_id}
In addition, locked will be supported as a valid filter/sort parameter for GET /servers/detail and GET /servers so that users can filter servers based on their locked value. Also the instance action versioned notifications for the lock/unlock actions now contain the locked_reason field.
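A sketch of locking a server with a reason at microversion 2.73 (endpoint, token and server ID are placeholders):
curl -X POST "$COMPUTE_ENDPOINT/servers/<server-id>/action" \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -H "X-OpenStack-Nova-API-Version: 2.73" \
  -d '{"lock": {"locked_reason": "maintenance window"}}'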
The libvirt driver can now support requests for guest RAM to be encrypted at the hardware level, if there are compute hosts which support it. Currently only AMD SEV (Secure Encrypted Virtualization) is supported, and it has certain minimum version requirements regarding the kernel, QEMU, and libvirt.
Memory encryption can be required either via a flavor which has the hw:mem_encryption extra spec set to True, or via an image which has the hw_mem_encryption property set to True. These do not inherently cause a preference for SEV-capable hardware, but for now SEV is the only way of fulfilling the requirement. However, in the future, support for other hardware-level guest memory encryption technologies such as Intel MKTME may be added. If a guest specifically needs to be booted using SEV rather than any other memory encryption technology, it is possible to ensure this by adding trait:HW_CPU_X86_AMD_SEV=required to the flavor extra specs or image properties.
In all cases, SEV instances can only be booted from images which have the hw_firmware_type property set to uefi, and only when the machine type is set to q35. The latter can be set per image by setting the image property hw_machine_type=q35, or per compute node by the operator via the hw_machine_type configuration option in the [libvirt] section of nova.conf.
For information on how to set up support for AMD SEV, please see the KVM section of the Configuration Guide.
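As a rough end-to-end sketch, a flavor requesting memory encryption and an image satisfying the firmware and machine type requirements could be prepared as follows (flavor and image IDs are placeholders):
openstack flavor set --property hw:mem_encryption=True <flavor-id>
openstack image set --property hw_firmware_type=uefi --property hw_machine_type=q35 <image-id>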
It is now possible to signal and perform an update of an instance’s power state as of the 2.76 microversion using the power-update external event. Currently it is only supported in the ironic driver; through this event Ironic will send all “power-on to power-off” and “power-off to power-on” type power state changes on a physical instance to nova, which will update its database accordingly. This way nova will not be able to enforce an incorrect power state on the physical instance during the periodic _sync_power_states task. The changes to the power state of an instance caused by this event can be viewed through GET /servers/{server_id}/os-instance-actions and GET /servers/{server_id}/os-instance-actions/{request_id}.
Blueprint placement-req-filter-forbidden-aggregates adds the ability for operators to set traits on aggregates; hosts belonging to those aggregates are then disallowed from booting instances whose flavor extra specs or image properties do not request those traits. This feature is enabled via a new config option [scheduler]/enable_isolated_aggregate_filtering. See Filtering hosts by isolated aggregates for more details.
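A sketch of enabling the pre-filter and isolating one aggregate behind a trait; the aggregate name and custom trait are placeholders, and the metadata key follows the trait:<name>=required convention described in the admin guide:
[scheduler]
enable_isolated_aggregate_filtering = True

openstack aggregate set --property trait:CUSTOM_LICENSED=required <aggregate-name>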
Microversion 2.77 adds the optional parameter availability_zone to the unshelve server action API.
Specifying an availability zone is only allowed when the server status is SHELVED_OFFLOADED, otherwise a 409 HTTPConflict response is returned.
If the [cinder]/cross_az_attach configuration option is False then the specified availability zone has to be the same as the availability zone of any volumes attached to the shelved offloaded server, otherwise a 409 HTTPConflict error response is returned.
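An unshelve request targeting a zone at microversion 2.77 might look like this (endpoint, token, server ID and zone name are placeholders):
curl -X POST "$COMPUTE_ENDPOINT/servers/<server-id>/action" \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -H "X-OpenStack-Nova-API-Version: 2.77" \
  -d '{"unshelve": {"availability_zone": "az-east"}}'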
Microversion 2.79 adds support for specifying the delete_on_termination field in the request body when attaching a volume to a server, to support configuring whether to delete the data volume when the server is destroyed. Also, delete_on_termination is added to the GET responses when showing attached volumes.
The affected APIs are as follows:
POST /servers/{server_id}/os-volume_attachments
GET /servers/{server_id}/os-volume_attachments
GET /servers/{server_id}/os-volume_attachments/{volume_id}
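A volume attach sketch at microversion 2.79 requesting deletion on termination (endpoint, token and IDs are placeholders):
curl -X POST "$COMPUTE_ENDPOINT/servers/<server-id>/os-volume_attachments" \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -H "X-OpenStack-Nova-API-Version: 2.79" \
  -d '{"volumeAttachment": {"volumeId": "<volume-uuid>", "delete_on_termination": true}}'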
Compute nodes using the libvirt driver can now report PCPU inventory. This is consumed by instances with dedicated (pinned) CPUs. This can be configured using the [compute] cpu_dedicated_set config option. The scheduler will automatically translate the legacy hw:cpu_policy flavor extra spec or hw_cpu_policy image metadata property to PCPU requests, falling back to VCPU requests only if no PCPU candidates are found. Refer to the help text of the [compute] cpu_dedicated_set, [compute] cpu_shared_set and vcpu_pin_set config options for more information.
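A nova.conf sketch dedicating host CPUs 4-15 to pinned guests while keeping CPUs 0-3 for shared VCPU use (the ranges are illustrative):
[compute]
cpu_dedicated_set = 4-15
cpu_shared_set = 0-3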
Compute nodes using the libvirt driver will now report the HW_CPU_HYPERTHREADING trait if the host has hyperthreading. The scheduler will automatically translate the legacy hw:cpu_thread_policy flavor extra spec or hw_cpu_thread_policy image metadata property to either require or forbid this trait.
A new configuration option, [compute] cpu_dedicated_set, has been added. This can be used to configure the host CPUs that should be used for PCPU inventory.
A new configuration option, [workarounds] disable_fallback_pcpu_query, has been added. When creating or moving pinned instances, the scheduler will attempt to provide a PCPU-based allocation, but can also fall back to a legacy VCPU-based allocation. This fallback behavior is enabled by default to ensure it is possible to upgrade without having to modify compute node configuration, but it results in an additional request for allocation candidates from placement. This can have a slight performance impact and is unnecessary on new or upgraded deployments where the compute nodes have been correctly configured to report PCPU inventory. The [workarounds] disable_fallback_pcpu_query config option can be used to disable this fallback allocation candidate request, meaning only PCPU-based allocation candidates will be retrieved.
In this release support was added for two additional libvirt video models: gop, the UEFI graphic output protocol device model; and the none device model. Existing support for virtio has been extended to all architectures and may now be requested via the hw_video_model image metadata property. Prior to this release the virtio video model was unconditionally enabled for AARCH64. This is unchanged but it can now be explicitly enabled on all supported architectures. The none video model can be used to disable emulated video devices when using pGPU or vGPU passthrough.
The scheduler can now use placement to more efficiently query for hosts that support the disk_format of the image used in a given request. The [scheduler]/query_placement_for_image_type_support config option enables this behavior, but must not be turned on until all computes have been upgraded to this version and thus are exposing image type support traits.
It is now possible to specify an ordered list of CPU models in the [libvirt] cpu_models config option. If [libvirt] cpu_mode is set to custom, the libvirt driver will select the first CPU model in this list that can provide the required feature traits.
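For example (the CPU model names are illustrative; the driver picks the first one in the list that can supply the required traits):
[libvirt]
cpu_mode = custom
cpu_models = Haswell-noTSX-IBRS,Skylake-Client-IBRS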
The libvirt driver has been extended to support user-configurable performance monitoring unit (vPMU) virtualization. This is particularly useful for real-time workloads. A pair of boolean properties, the hw:pmu flavor extra spec and the hw_pmu image metadata property, have been added to control the emulation of the vPMU. By default the behavior of vPMU emulation has not been changed. To take advantage of this new feature, the operator or tenant will need to update their flavors or images to define the new property.
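For example, a real-time flavor that should avoid the overhead of an emulated PMU could set (the flavor ID is a placeholder):
openstack flavor set --property hw:pmu=false <flavor-id>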
An option --before has been added to the nova-manage db archive_deleted_rows command. This option limits archiving of records to those deleted before the specified date.
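For example, to archive only rows deleted before a given date (the date value is illustrative, and --verbose is the command's pre-existing option):
nova-manage db archive_deleted_rows --before "2019-10-16" --verbose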
With the libvirt driver, live migration now works correctly for instances that have a NUMA topology. Previously, the instance was naively moved to the destination host, without updating any of the underlying NUMA guest to host mappings or the resource usage. With the new NUMA-aware live migration feature, if the instance cannot fit on the destination the live migration will be attempted on an alternate destination if the request is setup to have alternates. If the instance can fit on the destination, the NUMA guest to host mappings will be re-calculated to reflect its new host, and its resource usage updated.
A mandatory scheduling pre-filter has been added which will exclude disabled compute nodes where the related nova-compute service status is mirrored with a COMPUTE_STATUS_DISABLED trait on the compute node resource provider(s) for that service in Placement. See the admin scheduler configuration docs for details.
The Quobyte Nova volume driver now supports identifying Quobyte mounts via the mounts fstype field, which is used by Quobyte 2.x clients. The previous behaviour is deprecated and may be removed from the Quobyte clients in the future.
In this release SR-IOV live migration support is added to the libvirt virt driver for Neutron interfaces. Neutron SR-IOV interfaces can be grouped into two categories: direct mode and indirect mode interfaces. Direct mode SR-IOV interfaces are directly attached to the guest and exposed to the guest OS. Indirect mode SR-IOV interfaces have a software interface, such as a macvtap, between the guest and the SR-IOV device. This feature enables transparent live migration for instances with indirect mode SR-IOV devices. As there is no generic way to copy hardware state during a live migration, direct mode migration is not transparent to the guest. For direct mode interfaces, we mimic the workflow already in place for suspend and resume: for instances with SR-IOV devices, we detach the direct mode interfaces before migration and re-attach them after the migration. As a result, instances with a direct mode SR-IOV port will lose network connectivity during a migration unless a bond with a live-migratable interface is created within the guest.
Cold migration and resize are now supported for servers with neutron ports having resource requests, e.g. ports that have QoS minimum bandwidth rules attached. Note that the migration is only supported if both the source and the destination compute services are upgraded to Train and the [upgrade_levels]/compute configuration does not prevent the computes from using the latest RPC version.
Known Issues
The support for guest RAM encryption using AMD SEV (Secure Encrypted Virtualization) added in Train is incompatible with a number of image metadata options:
hw_scsi_model=virtio-scsi and either hw_disk_bus=scsi or hw_cdrom_bus=scsi
hw_video_model=virtio
hw_qemu_guest_agent=yes
When used together, the guest kernel can malfunction with repeated warnings like:
NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [system-udevd:272]
This will be resolved in a future patch release. For more information, refer to bug 1845986
Upgrade Notes
The [DEFAULT]/block_device_allocate_retries configuration option now has a minimum required value of 0. Any previous configuration with a value less than zero will now result in an error.
Until all the nova-compute services that run the ironic driver are upgraded to the Train code that handles the power-update callbacks from ironic, the [nova]/send_power_notifications config option can be kept disabled in ironic.
The libvirt driver’s RBD image backend no longer supports setting force_raw_images to False. Setting force_raw_images = False and images_type = rbd in nova.conf will cause the nova-compute service to refuse to start. To fix this, set force_raw_images = True. This change was required to fix bug 1816686.
Note that non-raw cached image files will now be removed if you set force_raw_images = True and images_type = rbd.
The nova-manage set of commands would previously exit with return code 1 due to any unexpected error. However, some commands, such as nova-manage db archive_deleted_rows, nova-manage cell_v2 map_instances and nova-manage placement heal_allocations, use return code 1 for flow control with automation. As a result, the unexpected error return code has been changed from 1 to 255 for all nova-manage commands.
Previously, if vcpu_pin_set was not defined, the libvirt driver would count all available host CPUs when calculating VCPU inventory, regardless of whether those CPUs were online or not. The driver will now only report the total number of online CPUs. This should result in fewer build failures on hosts with offlined CPUs.
Live migration of an instance with PCI devices is now blocked in the following scenarios:
Instance with non-network related PCI device.
Instance with network related PCI device and either:
Neutron does not support extended port binding API.
Source or destination compute node does not support libvirt-sriov-live-migration.
Live migration will fail with a user friendly error.
Note
Previously, the operation would have failed with an obscure error resulting in the instance still running on the source node or ending up in an inoperable state.
The max_concurrent_live_migrations configuration option now has a minimum value restriction and raises a ValueError if the value is less than 0.
For the libvirt driver, the NUMA-aware live migration feature requires the conductor, source compute, and destination compute to be upgraded to Train. It also requires the conductor and source compute to be able to send RPC 5.3 - that is, their [upgrade_levels]/compute configuration option must not be set to less than 5.3 or a release older than “train”.
In other words, NUMA-aware live migration with the libvirt driver is not supported until:
All compute and conductor services are upgraded to Train code.
The [upgrade_levels]/compute RPC API pin is removed (or set to “auto”) and services are restarted.
If any of these requirements are not met, live migration of instances with a NUMA topology with the libvirt driver will revert to the legacy naive behavior, in which the instance is simply moved over without updating its NUMA guest to host mappings or its resource usage.
Note
The legacy naive behavior is dependent on the value of the [workarounds]/enable_numa_live_migration option. Refer to the Deprecation Notes section for more details.
If you upgraded your OpenStack deployment to Stein without switching to use the now independent placement service, you must do so before upgrading to Train. Instructions for one way to do this are available.
It is now possible to count quota usage for cores and ram from the placement service and instances from instance mappings in the API database instead of counting resources from cell databases. This makes quota usage counting resilient in the presence of down or poor-performing cells.
Quota usage counting from placement is opt-in via the [quota]count_usage_from_placement configuration option (a configuration sketch follows these notes). There are some things to note when opting in to counting quota usage from placement:
Counted usage will not be accurate in an environment where multiple Nova deployments are sharing a placement deployment because currently placement has no way of partitioning resource providers between different Nova deployments. Operators who are running multiple Nova deployments that share a placement deployment should not set the [quota]count_usage_from_placement configuration option to True.
Behavior will be different for resizes. During a resize, resource allocations are held on both the source and destination (even on the same host, see https://bugs.launchpad.net/nova/+bug/1790204) until the resize is confirmed or reverted. Quota usage will be inflated for servers in the VERIFY_RESIZE state, and operators should weigh the advantages and disadvantages before enabling [quota]count_usage_from_placement.
The populate_queued_for_delete and populate_user_id online data migrations must be completed before usage can be counted from placement. Until the data migration is complete, the system will fall back to legacy quota usage counting from cell databases depending on the result of an EXISTS database query during each quota check, if [quota]count_usage_from_placement is set to True. Operators who want to avoid the performance hit from the EXISTS queries should wait to set the [quota]count_usage_from_placement configuration option to True until after they have completed their online data migrations via nova-manage db online_data_migrations.
Behavior will be different for unscheduled servers in ERROR state. A server in ERROR state that has never been scheduled to a compute host will not have placement allocations, so it will not consume quota usage for cores and ram.
Behavior will be different for servers in SHELVED_OFFLOADED state. A server in SHELVED_OFFLOADED state will not have placement allocations, so it will not consume quota usage for cores and ram. Note that because of this, it will be possible for a request to unshelve a server to be rejected if the user does not have enough quota available to support the cores and ram needed by the server to be unshelved.
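A deployment that has completed the prerequisite online data migrations and accepts the caveats above could opt in via nova.conf:
[quota]
count_usage_from_placement = True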
The --version argument has been removed from the following commands. Use the VERSION positional argument instead.
nova-manage db sync
nova-manage api_db sync
The cells v1 feature has been deprecated since the 16.0.0 Pike release and has now been removed. The nova-cells service and nova-manage cells commands have been removed, while the nova-manage cell_v2 simple_cell_setup command will no longer check if cells v1 is enabled and therefore can no longer exit with 2.
The cells v1 specific REST APIs have been removed along with their related policy rules. Calling these APIs will now result in a 410 (Gone) error response.
GET /os-cells
POST /os-cells
GET /os-cells/capacities
GET /os-cells/detail
GET /os-cells/info
POST /os-cells/sync_instances
GET /os-cells/{cell_id}
PUT /os-cells/{cell_id}
DELETE /os-cells/{cell_id}
GET /os-cells/{cell_id}/capacities
The cells v1 specific policies have been removed.
cells_scheduler_filter:DifferentCellFilter
cells_scheduler_filter:TargetCellFilter
The cells v1 specific configuration options, previously found in cells, have been removed.
enabled
name
capabilities
call_timeout
reserve_percent
cell_type
mute_child_interval
bandwidth_update_interval
instance_update_sync_database_limit
mute_weight_multiplier
ram_weight_multiplier
offset_weight_multiplier
instance_updated_at_threshold
instance_update_num_instances
max_hop_count
scheduler
rpc_driver_queue_base
scheduler_filter_classes
scheduler_weight_classes
scheduler_retries
scheduler_retry_delay
db_check_interval
cells_config
In addition, the following cells v1 related RPC configuration options, previously found in upgrade_levels, have been removed.
cells
intercell
The CoreFilter, DiskFilter and RamFilter, which were deprecated in Stein (19.0.0), are now removed. VCPU, DISK_GB and MEMORY_MB filtering is performed natively using the Placement service. These filters have been warning operators at startup that they conflict with proper operation of placement and should have been disabled since approximately Pike. If you did still have these filters enabled and were relying on them to account for virt driver overhead (at the expense of scheduler races and retries), see the scheduler documentation about the topic.
The [DEFAULT]/default_flavor option deprecated in 14.0.0 (Newton) has been removed.
The image_info_filename_pattern, checksum_base_images, and checksum_interval_seconds options have been removed from the [libvirt] config section.
Config option [ironic]api_endpoint was deprecated in the 17.0.0 Queens release and is now removed. To achieve the same effect, set the [ironic]endpoint_override option. (However, it is preferred to omit this setting and let the endpoint be discovered via the service catalog.)
The nova-consoleauth service has been deprecated since the 18.0.0 Rocky release and has now been removed. The following configuration options have been removed:
[upgrade_levels] consoleauth
[workarounds] enable_consoleauth
A check for the use of the nova-consoleauth service, added to the nova-status upgrade check CLI in Rocky, is now removed.
The [neutron]/url configuration option, which was deprecated in the 17.0.0 Queens release, has now been removed. The same functionality is available via the [neutron]/endpoint_override option.
The libvirt SR-IOV live migration feature introduced in this release requires both the source and destination nodes to support the feature. As a result, it will be automatically disabled until the conductor and compute nodes have been upgraded.
The block-storage (cinder) version 3.44 API is now required when working with volume attachments. A check has been added to the nova-status upgrade check command for this requirement.
To resolve bug 1805659, the default value of [notifications]/notification_format is changed from both to unversioned. For more information see the documentation of the config option. If you are using versioned notifications, you will need to adjust your config to versioned.
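For example, a deployment that consumes versioned notifications would set:
[notifications]
notification_format = versioned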
Deprecation Notes
The vcpu_pin_set configuration option has been deprecated. You should migrate host CPU configuration to the [compute] cpu_dedicated_set or [compute] cpu_shared_set config options, or both. Refer to the help text of these config options for more information.
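A migration sketch for a host that previously pinned guest CPUs via vcpu_pin_set (the CPU ranges are illustrative):
[DEFAULT]
# vcpu_pin_set = 4-15 is deprecated; remove it once the new options below are in place

[compute]
cpu_dedicated_set = 4-15
cpu_shared_set = 0-3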
The AggregateCoreFilter, AggregateRamFilter and AggregateDiskFilter are now deprecated. They will be removed in a future release and should no longer be used. Their functionality has been replaced with a placement native approach by combining host aggregate mirroring added in Rocky and initial allocation ratios added in Stein. See the scheduler documentation for details.
The RetryFilter is deprecated and will be removed in an upcoming release. Since the 17.0.0 (Queens) release, the scheduler has provided alternate hosts for rescheduling so the scheduler does not need to be called during a reschedule, which makes the RetryFilter useless. See the Return Alternate Hosts spec for details.
The xenapi driver is deprecated and may be removed in a future release. The driver is not tested by the OpenStack project nor does it have clear maintainer(s) and thus its quality can not be ensured. If you are using the driver in production please let us know in freenode IRC and/or the openstack-discuss mailing list.
With the introduction of the NUMA-aware live migration feature for the libvirt driver, [workarounds]/enable_numa_live_migration is deprecated. Once a cell has been fully upgraded to Train, its value is ignored.
Note
Even in a cell fully upgraded to Train, RPC pinning via [upgrade_levels]/compute can cause live migration of instances with a NUMA topology to revert to the legacy naive behavior. For more details refer to the Upgrade Notes section.
Compatibility code for compute drivers that do not implement the update_provider_tree interface is deprecated and will be removed in a future release.
Security Issues
OSSA-2019-003: Nova Server Resource Faults Leak External Exception Details (CVE-2019-14433)
This release contains a security fix for bug 1837877 where users without the admin role can be exposed to sensitive error details in the server resource fault message.
There is a behavior change where non-nova exceptions will only record the exception class name in the fault message field which is exposed to all users, regardless of the admin role.
The fault details, which are only exposed to users with the admin role, will continue to include the traceback and also include the exception value which for non-nova exceptions is what used to be exposed in the fault message field. Meaning, the information that admins could see for server faults is still available, but the exception value may be in details rather than message now.
The transition from rootwrap (or sudo) to privsep has been completed for nova. The only case where rootwrap is still used is to start privsep helpers. All other rootwrap configurations for nova may now be removed.
Bug Fixes
By incorporating oslo fixes for bug 1715374 and bug 1794708, the nova-compute service now handles SIGHUP properly.
Fixes a bug causing mount failures on systemd based systems that are using the systemd-run based mount with the Nova Quobyte driver.
The os-volume_attachments update API, commonly referred to as the swap volume API, will now return a 400 (BadRequest) error when attempting to swap from a multi-attached volume with more than one active read/write attachment, resolving bug #1775418.
Blueprints hide-hypervisor-id-flavor-extra-spec and add-kvm-hidden-feature enabled NVIDIA drivers in Linux guests using KVM and QEMU, but support was not included for Windows guests. This is now fixed. See bug 1779845 for details.
Bug 1811726 is fixed by deleting the resource provider (in placement) associated with each compute node record managed by a nova-compute service when that service is deleted via the DELETE /os-services/{service_id} API. This is particularly important for compute services managing ironic baremetal nodes.
Unsetting ‘[DEFAULT] dhcp_domain’ will now correctly result in the metadata service/config drive providing an instance hostname of ‘${hostname}’ instead of ‘${hostname}None’, as was previously seen.
Fixes a bug that caused Nova to fail on mounting Quobyte volumes whose volume URL contained multiple registries.
Add support for noVNC >= v1.1.0 for VNC consoles. Prior to this fix, VNC console token validation always failed regardless of actual token validity with noVNC >= v1.1.0. See https://bugs.launchpad.net/nova/+bug/1822676 for more details.
Bug 1777591 describes how placement can filter out the specified target host when deploying an instance because of the random result limit. In previous releases the bug has been worked around by not limiting the results from the Placement service if a target host is specified. From this release, the Nova scheduler uses a more optimized path, retrieving only the target host information from placement. Note that it still uses the unlimited workaround if a target host is specified without a specific node and multiple nodes are found for the target host. This can happen with some of the virt drivers, such as the Ironic driver.
Update the way the QEMU cache mode is configured for Nova guests: if the file system hosting the directory with Nova instances is capable of Linux’s O_DIRECT, use none; otherwise fall back to the writeback cache mode. This improves performance without compromising data integrity. Bug 1818847.
Context: What makes writethrough so safe against host crashes is that it never keeps data in a “write cache”, but it calls fsync() after every write. This is also what makes it horribly slow. But cache mode none doesn’t do this and therefore doesn’t provide this kind of safety. The guest OS must explicitly flush the cache in the right places to make sure data is safe on the disk; and all modern OSes flush data as needed. So if cache mode none is safe enough for you, then writeback should be safe enough too.
Other Notes
A new [libvirt]/rbd_connect_timeout configuration option has been introduced to limit the time spent waiting when connecting to a RBD cluster via the RADOS API. This timeout currently defaults to 5 seconds.
This aims to address issues reported in bug 1834048 where failures to initially connect to a RBD cluster left the nova-compute service inoperable due to constant RPC timeouts being hit.
Numbered request groups can be defined in the flavor extra_spec but they can come from other sources as well (e.g. neutron ports). If there is more than one numbered request group in the allocation candidate query and the flavor does not specify any group policy then the query will fail in placement as group_policy is mandatory in this case. Nova previously printed a warning to the scheduler logs but let the request fail. However the creator of the flavor cannot know if the flavor later on will be used in a boot request that has other numbered request groups. So nova will start defaulting the group_policy to ‘none’ which means that the resource providers fulfilling the numbered request groups can overlap. Nova will only default the group_policy if it is not provided in the flavor extra_spec, and there is more than one numbered request group present in the final request, and the flavor only provided one or zero of such groups.
A --dry-run option has been added to the nova-manage placement heal_allocations CLI which allows running the command to get output without committing any changes to placement.
An --instance option has been added to the nova-manage placement heal_allocations CLI which allows running the command on a specific instance given its UUID.
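For example, previewing what would be healed for a single instance without writing to placement (the UUID is a placeholder; --verbose is the command's pre-existing option):
nova-manage placement heal_allocations --dry-run --instance <instance-uuid> --verbose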
The nova-manage placement heal_allocations CLI has been extended to heal missing port allocations which are possible due to bug 1819923.
The code for the placement service was moved to its own repository in Stein. The placement code in nova has been deleted.
The reporting of bytes available for RBD has been enhanced to accommodate unrecommended Ceph deployments where multiple OSDs are running on a single disk. The new reporting method takes the number of configured replicas into consideration when reporting bytes available.
The dhcp_domain option has been undeprecated and moved to the [api] group. It is used by the metadata service to configure fully-qualified domain names for instances, in addition to its role configuring DHCP services for nova-network. This use case was missed when deprecating the option initially.