Pike Series Release Notes
16.1.8-57
Security Issues
OSSA-2019-003: Nova Server Resource Faults Leak External Exception Details (CVE-2019-14433)
This release contains a security fix for bug 1837877 where users without the admin role can be exposed to sensitive error details in the server resource fault message. There is a behavior change where non-nova exceptions will only record the exception class name in the fault message field, which is exposed to all users regardless of the admin role. The fault details, which are only exposed to users with the admin role, will continue to include the traceback and also include the exception value, which for non-nova exceptions is what used to be exposed in the fault message field. This means the information that admins could see for server faults is still available, but the exception value may be in details rather than in message now.
Bug Fixes
Bug 1811726 is fixed by deleting the resource provider (in placement) associated with each compute node record managed by a nova-compute service when that service is deleted via the DELETE /os-services/{service_id} API. This is particularly important for compute services managing ironic baremetal nodes.
The os-simple-tenant-usage pagination has been fixed. In some cases, nova usage-list would have returned incorrect results because of this bug. See bug https://launchpad.net/bugs/1796689 for details.
16.1.8
Bug Fixes
It is now possible to configure the [cinder] section of nova.conf to allow setting admin-role credentials for scenarios where a user token is not available to perform actions on a volume. For example, when reclaim_instance_interval is a positive integer, instances are soft deleted until the nova-compute service periodic task removes them. If a soft deleted instance has volumes attached, the compute service needs to be able to detach and possibly delete the associated volumes, otherwise they will be orphaned in the block storage service. Similarly, if running_deleted_instance_poll_interval is set and running_deleted_instance_action = reap, then the compute service will need to be able to detach and possibly delete volumes attached to instances that are reaped. See bug 1733736 and bug 1734025 for more details.
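A minimal sketch of such a [cinder] configuration, assuming Keystone password authentication (the endpoint and credential values below are placeholders, not defaults):
[cinder]
auth_type = password
auth_url = http://keystone.example.com/identity
username = cinder-admin
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default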
Fixes an issue with cold migrating (resizing) an instance from an Ocata compute to a Pike compute by correcting the parameter order in the resize_instance RPC API call to the destination compute.
16.1.7
Bug Fixes
When testing whether direct IO is possible on the backing storage for an instance, Nova now uses a block size of 4096 bytes instead of 512 bytes, avoiding issues when the underlying block device has sectors larger than 512 bytes. See bug https://launchpad.net/bugs/1801702 for details.
16.1.5
Upgrade Notes
The nova-api service now requires the [placement] section to be configured in nova.conf if you are using a separate config file just for that service. This is because the nova-api service now needs to talk to the placement service in order to delete resource provider allocations when deleting an instance while the nova-compute service on which that instance is running is down. This change is idempotent if [placement] is not configured in nova-api, but it will result in new warnings in the logs until it is configured. See bug https://bugs.launchpad.net/nova/+bug/1679750 for more details.
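An illustrative [placement] section for the nova-api configuration, again assuming Keystone password authentication (all values are placeholders):
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://keystone.example.com/identity
username = placement
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default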
The default list of non-inherited image properties to pop when creating a snapshot has been extended to include image signature properties. The properties img_signature_hash_method, img_signature, img_signature_key_type and img_signature_certificate_uuid are no longer inherited by the snapshot image, as they would otherwise result in Glance attempting to verify the snapshot image with the signature of the original.
A new online data migration has been added to populate missing instance.availability_zone values for instances older than Pike whose availability_zone was not specified during boot time. This can be run during the normal nova-manage db online_data_migrations routine. This fixes bug 1768876.
Security Issues
A new policy rule, os_compute_api:servers:create:zero_disk_flavor, has been introduced which defaults to rule:admin_or_owner for backward compatibility, but can be configured to make the compute API enforce that server create requests using a flavor with zero root disk must be volume-backed or fail with a 403 HTTPForbidden error. Allowing image-backed servers with a zero root disk flavor can be potentially hazardous if users are allowed to upload their own images, since an instance created with a zero root disk flavor gets its size from the image, which can be unexpectedly large and exhaust local disk on the compute host. See https://bugs.launchpad.net/nova/+bug/1739646 for more details.
While this is introduced in a backward-compatible way, the default will be changed to rule:admin_api in a subsequent release. It is advised that you communicate this change to your users before turning on enforcement, since it will result in a compute API behavior change.
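For example, an operator choosing to enforce this ahead of the default change could set the rule in their policy file (a sketch; rule:admin_api is Nova's standard admin-only rule):
"os_compute_api:servers:create:zero_disk_flavor": "rule:admin_api"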
The ‘ssbd’ and ‘virt-ssbd’ CPU flags have been added to the list of available choices for the [libvirt]/cpu_model_extra_flags config option. These are important for proper mitigation of the Spectre 3a and 4 CVEs. Note that the use of either of these flags requires updated packages below nova, including libvirt, qemu (specifically >= 2.9.0 for virt-ssbd), linux, and system firmware. For more information see https://www.us-cert.gov/ncas/alerts/TA18-141A
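A sketch of enabling one of these flags, assuming a custom CPU model is already in use (the model name here is only an example; verify the flag is exposed by your host CPU and microcode first):
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_model_extra_flags = ssbd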
Bug Fixes
The DELETE /os-services/{service_id} compute API will now return a 409 HTTPConflict response when trying to delete a nova-compute service which is still hosting instances. This is because doing so would orphan the compute node resource provider in the placement service on which those instances have resource allocations, which affects scheduling. See https://bugs.launchpad.net/nova/+bug/1763183 for more details.
16.1.2
Prelude
This release includes fixes for security vulnerabilities.
Security Issues
[CVE-2017-18191] Swapping encrypted volumes can lead to data loss and a possible compute host DOS attack.
Bug Fixes
The libvirt driver now allows specifying individual CPU feature flags for guests via a new configuration attribute, [libvirt]/cpu_model_extra_flags, which applies only with custom as the [libvirt]/cpu_model. Refer to its documentation in nova.conf for usage details. One of the motivations for this is to alleviate the performance degradation (caused as a result of applying the “Meltdown” CVE fixes) for guests running with certain Intel-based virtual CPU models. This guest performance impact is reduced by exposing the CPU feature flag ‘PCID’ (“Process-Context ID”) to the guest CPU, assuming that it is available in the physical hardware itself.
Note that besides custom, Nova’s libvirt driver has two other CPU modes: host-model (which is the default) and host-passthrough. Refer to the [libvirt]/cpu_model_extra_flags documentation for what to do when you are using either of those CPU modes in the context of ‘PCID’.
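A sketch of exposing ‘PCID’ with a custom model (the model name is only an example; confirm the host CPU actually provides the flag):
[libvirt]
cpu_mode = custom
cpu_model = Haswell-noTSX
cpu_model_extra_flags = pcid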
16.1.1
Bug Fixes
The nova-manage discover_hosts command now has a --by-service option which allows discovering hosts in a cell purely by the presence of a nova-compute binary. At this point, there is no need to use this unless you’re using ironic, as it is less efficient. However, if you are using ironic, this allows discovery and mapping of hosts even when no ironic nodes are present.
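For example:
$ nova-manage cell_v2 discover_hosts --by-service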
The swap_volume action is now prevented if the instance is in the SUSPENDED, STOPPED or SOFT_DELETED state. A conflict (409) error is now raised, whereas previously the action would fail silently.
16.1.0
Upgrade Notes
On the AArch64 architecture, cpu_mode for libvirt is set to host-passthrough by default. AArch64 currently lacks host-model support because neither libvirt nor QEMU are able to tell what the host CPU model exactly is, and there is no CPU description code for ARM(64) at this point.
Warning
host-passthrough mode will completely break live migration, unless all the Compute nodes (running libvirtd) have identical CPUs.
Starting in Ocata, there is a behavior change where aggregate-based overcommit ratios will no longer be honored during scheduling for the FilterScheduler. Instead, overcommit values must be set on a per-compute-node basis in the Nova configuration files.
If you have been relying on per-aggregate overcommit, during your upgrade, you must change to using per-compute-node overcommit ratios in order for your scheduling behavior to stay consistent. Otherwise, you may notice increased NoValidHost scheduling failures as the aggregate-based overcommit is no longer being considered.
You can safely remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter from your [filter_scheduler] enabled_filters and you do not need to replace them with any other core/ram/disk filters. The placement query in the FilterScheduler takes care of the core/ram/disk filtering, so CoreFilter, RamFilter, and DiskFilter are redundant. Please see the mailing list thread for more information: http://lists.openstack.org/pipermail/openstack-operators/2018-January/014748.html
This release contains a schema migration for the nova_api database in order to address bug 1738094: https://bugs.launchpad.net/nova/+bug/1738094
The migration is optional and can be postponed if you have not been affected by the bug. The bug manifests itself through “Data too long for column ‘spec’” database errors.
Bug Fixes
The delete_host command has been added in nova-manage cell_v2 to delete a host from a cell (host mappings). The force option has been added in nova-manage cell_v2 delete_cell. If the force option is specified, a cell can be deleted even if the cell has hosts.
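For example (the cell UUID and hostname are placeholders):
$ nova-manage cell_v2 delete_host --cell_uuid <cell-uuid> --host compute1
$ nova-manage cell_v2 delete_cell --cell_uuid <cell-uuid> --force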
If scheduling fails during a rebuild, the server instance will go to ERROR state and a fault will be recorded. See bug 1744325.
16.0.4
Known Issues
In the 16.0.0 Pike release, quota limits are checked in a new fashion after change 5c90b25e49d47deb7dc6695333d9d5e46efe8665, and a new config option [quota]/recheck_quota has been added in change eab1d4b5cc6dd424c5c7dfd9989383a8e716cae5 to recheck quota after resource creation to prevent allowing quota to be exceeded as a result of racing requests. These changes could lead to requests blocked by over quota resulting in instances in the ERROR state, rather than no instance records as before. Refer to https://bugs.launchpad.net/nova/+bug/1716706 for the detailed bug report.
Security Issues
OSSA-2017-006: Nova FilterScheduler doubles resource allocations during rebuild with new image (CVE-2017-17051)
By repeatedly rebuilding an instance with new images, an authenticated user may consume untracked resources on a hypervisor host leading to a denial of service. This regression was introduced with the fix for OSSA-2017-005 (CVE-2017-16239), however, only Nova stable/pike or later deployments with that fix applied and relying on the default FilterScheduler are affected.
The fix is in the nova-api and nova-scheduler services.
Note
The fix for errata in OSSA-2017-005 (CVE-2017-16239) will need to be applied in addition to this fix.
Bug Fixes
The fix for OSSA-2017-005 (CVE-2017-16239) was too far-reaching in that rebuilds can now fail based on scheduling filters that should not apply to rebuild. For example, a rebuild of an instance on a disabled compute host could fail whereas it would not before the fix for CVE-2017-16239. Similarly, rebuilding an instance on a host that is at capacity for vcpu, memory or disk could fail since the scheduler filters would treat it as a new build request even though the rebuild is not claiming new resources.
Therefore this release contains a fix for those regressions in scheduling behavior on rebuild while maintaining the original fix for CVE-2017-16239.
Note
The fix relies on a RUN_ON_REBUILD variable which is checked for all scheduler filters during a rebuild. The reasoning behind the value for that variable depends on each filter. If you have out-of-tree scheduler filters, you will likely need to assess whether or not they need to override the default value (False) for the new variable.
This release includes a fix for bug 1733886 which was a regression introduced in the 2.36 API microversion where the force parameter was missing from the PUT /os-quota-sets/{tenant_id} API request schema, so users could not force quota updates with microversion 2.36 or later. The bug is now fixed so that the force parameter can once again be specified during quota updates. There is no new microversion for this change since it is an admin-only API.
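An illustrative request forcing a quota value above the current usage (the resource and value are placeholders):
PUT /os-quota-sets/{tenant_id}
{"quota_set": {"cores": 100, "force": true}}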
16.0.3
Security Issues
OSSA-2017-005: Nova Filter Scheduler bypass through rebuild action
By rebuilding an instance, an authenticated user may be able to circumvent the FilterScheduler bypassing imposed filters (for example, the ImagePropertiesFilter or the IsolatedHostsFilter). All setups using the FilterScheduler (or CachingScheduler) are affected.
The fix is in the nova-api and nova-conductor services.
Bug Fixes
Fixes bug 1695861 in which the aggregate API accepted requests that have availability zone names including ‘:’. With this fix, creation of an availability zone whose name includes ‘:’ results in a 400 BadRequest error response.
Fixes a bug preventing ironic nodes without VCPUs, memory or disk in their properties from being picked by nova.
16.0.2
Upgrade Notes
A new keystone config section is added so that you can set session link attributes for communicating with keystone. This allows the use of custom certificates to secure the link between Nova and Keystone.
Other Notes
The ironic driver will automatically migrate instance flavors for resource classes at runtime. If you are not able to run the compute and ironic services at Pike because you are automating an upgrade past this release, you can use the nova-manage db ironic_flavor_migration command to push the migration manually. This is only for advanced users taking on the risk of automating the process of upgrading through Pike and is not recommended for normal users.
16.0.1
Upgrade Notes
The nova-conductor service now needs access to the Placement service in the case of forcing a destination host during a live migration. Ensure the [placement] section of nova.conf for the nova-conductor service is filled in.
Bug Fixes
When forcing a specified destination host during live migration, the scheduler is bypassed, but resource allocations will still be made in the Placement service against the forced destination host. If the resource allocation against the destination host fails, the live migration operation will fail, regardless of the force flag being specified in the API. The guest will be unchanged on the source host. For more details, see bug 1712008.
When forcing a specified destination host during evacuate, the scheduler is bypassed, but resource allocations will still be made in the Placement service against the forced destination host. If the resource allocation against the destination host fails, the evacuate operation will fail, regardless of the force flag being specified in the API. The guest will be unchanged on the source host. For more details, see bug 1713786.
It is now possible to unset the [vnc]keymap and [spice]keymap configuration options. These were known to cause issues for some users with non-US keyboards and may be deprecated in the future.
16.0.0
Prelude
This release includes fixes for security vulnerabilities.
The 16.0.0 release includes many new features and bug fixes. It is difficult to cover all the changes that have been introduced. Please at least read the upgrade section which describes the required actions to upgrade your cloud from 15.0.0 (Ocata) to 16.0.0 (Pike).
That said, a few major changes are worth mentioning. This is not an exhaustive list:
The latest Compute API microversion supported for Pike is v2.53. Details on REST API microversions added since the 15.0.0 Ocata release can be found in the REST API Version History page.
The FilterScheduler driver now provides allocations to the Placement API, which helps concurrent schedulers to verify resource consumption directly without waiting for compute services to ask for a reschedule in case of a race condition. That is an important performance improvement, and it also makes it reasonable to run more than one scheduler worker if there are capacity concerns. For more details, see the Pike Upgrade Notes for Placement.
Nova now supports a Cells v2 multi-cell deployment. The default deployment is a single cell. There are known limitations with multiple cells. Refer to the Cells v2 Layout page for more information about deploying multiple cells.
Cells v1 is now deprecated in favor of Cells v2.
The quota system has been reworked to count resources at the point of creation rather than using a reserve/commit/rollback approach. No operator impacts are expected.
Compute-specific documentation is being migrated from http://docs.openstack.org to https://docs.openstack.org/nova/ and the layout for the Nova developer documentation is being re-organized. If you think anything is missing or you now have broken bookmarks, please report a bug.
New Features
A new versioned_notifications_topic configuration option has been added. This enables one to configure the topics used for versioned notifications.
Adds interface attach/detach support to baremetal nodes using the ironic virt driver. Note that the instance info cache update relies on getting a network-changed event from neutron, or on the periodic task healing the instance info cache, both of which are asynchronous. This means that nova’s cached network information (which is what is sent, e.g., in the GET /servers responses) may not be up to date immediately after the attachment or detachment.
Add support for LAN9118 as a valid nic for hw_vif_model property in qemu.
The fields locked and display_description have been added to InstancePayload. Versioned notifications for instance actions will include these fields. A few examples of versioned notifications that use InstancePayload:
instance.create
instance.delete
instance.resize
instance.pause
Add a PCIWeigher weigher. This can be used to ensure non-PCI instances don’t occupy resources on hosts with PCI devices. This can be configured using the [filter_scheduler] pci_weight_multiplier configuration option.
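An illustrative setting (the multiplier value is arbitrary):
[filter_scheduler]
pci_weight_multiplier = 2.0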
The network.json metadata format has been amended for IPv6 networks under Neutron control. The type that is shown has been changed from being always set to ipv6_dhcp to correctly reflecting the ipv6_address_mode option in Neutron, so the type now will be ipv6_slaac, ipv6_dhcpv6-stateless or ipv6_dhcpv6-stateful.
Enables launching an instance from an iSCSI volume with the ironic virt driver. This feature requires an ironic service supporting API version 1.32 or later, which is present in ironic releases > 8.0. It also requires python-ironicclient >= 1.14.0.
The model name vhostuser_vrouter_plug is set by the neutron contrail plugin during a VM (network port) creation. The libvirt compute driver now supports plugging virtual interfaces of type “contrail_vrouter” which are provided by the contrail-nova-vif-driver plugin [1]. [1] https://github.com/Juniper/contrail-nova-vif-driver
Added microversion v2.48 which standardizes the VM diagnostics response. It has a set of fields which each hypervisor will try to fill. If a hypervisor driver is unable to provide a specific field, then this field will be reported as ‘None’.
Microversion 2.53 changes service and hypervisor IDs to UUIDs to ensure uniqueness across cells. Prior to this, ID collisions were possible in multi-cell deployments. See the REST API Version History and Compute API reference for details.
The nova-compute worker can automatically disable itself in the service database if consecutive build failures exceed a set threshold. The [compute]/consecutive_build_service_disable_threshold configuration option allows setting the threshold for this behavior, or disabling it entirely if desired. The intent is that an admin will examine the issue before manually re-enabling the service, which will avoid that compute node becoming a black hole build magnet.
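An illustrative setting (the threshold value shown is arbitrary; a value of 0 disables the auto-disable behavior):
[compute]
consecutive_build_service_disable_threshold = 10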
Supports a new method for deleting all inventory for a resource provider:
DELETE /resource_providers/{uuid}/inventories
Return codes:
204 NoContent on success
404 NotFound for missing resource provider
405 MethodNotAllowed if a microversion is specified that is before this change (1.5)
409 Conflict if inventory is in use or if some other request concurrently updates this resource provider
Requires OpenStack-API-Version placement 1.5
The discover_hosts_in_cells_interval periodic task in the scheduler is now more efficient in that it can specifically query unmapped compute nodes from the cell databases instead of having to query them all and compare against existing host mappings.
A new 2.47 microversion was added to the Compute API. Users specifying this microversion or later will see the “flavor” information displayed as a dict when displaying server details via the servers REST API endpoint. If the user is prevented by policy from indexing extra-specs, then the “extra_specs” field will not be included in the flavor information.
The libvirt compute driver now supports attaching volumes of type “drbd”. See the DRBD documentation for more information.
Add granularity to the os_compute_api:os-flavor-manage policy with the addition of distinct actions for create and delete:
os_compute_api:os-flavor-manage:create
os_compute_api:os-flavor-manage:delete
To address backwards compatibility, the new rules added to the flavor_manage.py policy file default to the existing rule, os_compute_api:os-flavor-manage, if it is set to a non-default value.
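For example, to restrict flavor creation and deletion to admins via the new granular rules (a sketch of policy file entries):
"os_compute_api:os-flavor-manage:create": "rule:admin_api"
"os_compute_api:os-flavor-manage:delete": "rule:admin_api"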
Some hypervisors add a signature to their guests, e.g. KVM adds KVMKVMKVM\0\0\0 and Xen adds XenVMMXenVMM. The existence of a hypervisor signature enables some paravirtualization features on the guest, as well as preventing certain drivers which test for the hypervisor from loading, e.g. the Nvidia driver [1]: “The latest Nvidia driver (337.88) specifically checks for KVM as the hypervisor and reports Code 43 for the driver in a Windows guest when found. Removing or changing the KVM signature is sufficient for the driver to load and work.” The new img_hide_hypervisor_id image metadata property hides the hypervisor signature for the guest. Currently only the libvirt compute driver can hide the hypervisor signature for the guest.
To verify whether hiding the hypervisor id is working on a Linux-based system:
$ cpuid | grep -i hypervisor_id
The result should not be (for the KVM hypervisor):
hypervisor_id = KVMKVMKVM\0\0\0
You can enable this feature by setting the img_hide_hypervisor_id=true property in a Glance image.
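For example, using the OpenStack client (the image name is a placeholder):
$ openstack image set --property img_hide_hypervisor_id=true my-image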
The 1.7 version of the placement API changes handling of PUT /resource_classes/{name} to be a create or verification of the resource class with {name}. If the resource class is a custom resource class and does not already exist, it will be created and a 201 response code returned. If the class already exists, the response code will be 204. This makes it possible to check or create a resource class in one request.
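An illustrative request (the class name is a placeholder; custom resource classes must be prefixed with CUSTOM_):
PUT /resource_classes/CUSTOM_GOLD_SSD
OpenStack-API-Version: placement 1.7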
This release adds support for Netronome’s Agilio OVS VIF type. In order to use the accelerated plugging modes, external Neutron and OS-VIF plugins are required. Consult https://github.com/Netronome/agilio-ovs-openstack-plugin for installation and operation instructions. Consult the Agilio documentation available at https://support.netronome.com/ for more information about the plugin compatibility and support matrix.
The virtio-forwarder VNIC type has been added to the list of VNICs. This VNIC type is intended to request a low-latency virtio port inside the instance, likely backed by hardware acceleration. Currently the Agilio OVS external Neutron and OS-VIF plugins provide support for this VNIC mode.
It is now possible to signal and perform an online volume size change as of the 2.51 microversion using the volume-extended external event. Nova will perform the volume extension so the host can detect its new size. It will also resize the device in QEMU so the instance can detect the new disk size without rebooting. Currently only the libvirt compute driver with iSCSI and FC volumes supports the online volume size change.
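A sketch of the external event request that signals the resize (the UUIDs are placeholders; the tag carries the volume ID):
POST /os-server-external-events
{"events": [{"name": "volume-extended", "server_uuid": "<server-uuid>", "tag": "<volume-uuid>"}]}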
The 2.51 microversion exposes the events field in the response body for the GET /servers/{server_id}/os-instance-actions/{request_id} API. This is useful for API users to monitor when a volume extend operation completes for the given server instance. By default only users with the administrator role will be able to see event traceback details.
Nova now uses oslo.middleware for request_id processing. This means that there is now a new X-OpenStack-Request-ID header returned on every request which mirrors the content of the existing X-Compute-Request-ID. The expected existence of this header is signaled by microversion 2.46. If the server version >= 2.46, you can expect to see this header in your results (regardless of the microversion requested).
A new 1.10 API microversion is added to the Placement REST API. This microversion adds support for the GET /allocation_candidates resource endpoint. This endpoint returns information about possible allocation requests that callers can make which meet a set of resource constraints supplied as query string parameters. Also returned is some inventory and capacity information for the resource providers involved in the allocation candidates.
The placement API service can now be configured to support CORS. If a cors configuration group is present in the service’s configuration file (currently nova.conf), with allowed_origin configured, the values within will be used to configure the middleware. If cors.allowed_origin is not set, the middleware will not be used.
Traits are added to the placement API with microversion 1.6.
GET /traits: Returns all traits.
PUT /traits/{name}: Insert a single custom trait.
GET /traits/{name}: Check if a trait name exists.
DELETE /traits/{name}: Delete the specified trait.
GET /resource_providers/{uuid}/traits: Return a list of traits associated with a specific resource provider.
PUT /resource_providers/{uuid}/traits: Set all the traits for a specific resource provider.
DELETE /resource_providers/{uuid}/traits: Remove any existing trait associations for a specific resource provider.
A new configuration option [quota]/recheck_quota has been added to recheck quota after resource creation to prevent allowing quota to be exceeded as a result of racing requests. It defaults to True, which makes it impossible for a user to exceed their quota. However, it will be possible for a REST API user to be rejected with an over quota 403 error response in the event of a collision close to reaching their quota limit, even if the user has enough quota available when they made the request. Operators may want to set the option to False to avoid additional load on the system, if allowing quota to be exceeded because of racing requests is considered acceptable.
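For example, an operator who prefers lower load over strict enforcement could set:
[quota]
recheck_quota = False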
A new configuration option, reserved_host_cpus, has been added for compute services. It lets operators specify how many physical CPUs they would like to reserve for the hypervisor, separately from what the instances use.
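An illustrative setting, assuming the option lives in the DEFAULT group alongside the other reserved_host_* options (the value is arbitrary):
[DEFAULT]
reserved_host_cpus = 2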
A versioned instance.update notification will be sent when the server’s tags field is updated.
Adds support for the OVS vif type with direct port (SR-IOV). In order to use this OVS acceleration mode, openvswitch 2.8.0 and Linux kernel 4.8 are required. This feature allows control of an SR-IOV virtual function (VF) via the OpenFlow control plane and gains the improved performance of Open vSwitch. Please note that in the Pike release we can’t differentiate between SR-IOV hardware and OVS offloaded on the same host. This limitation should be resolved when the enable-sriov-nic-features work is completed. Until then operators can use host aggregates to ensure that they can schedule instances on specific hosts based on hardware.
Adds support for applying tags when creating a server. The tag schema is the same as in the 2.26 microversion.
Added support for the Keystone middleware feature for interaction of Nova with the Glance API. With this support, if a service token is sent along with the user token, then the expiration of the user token will be ignored. In order to use this functionality a service user needs to be created first. Add the service user configuration in nova.conf under the service_user group and set the send_service_user_token flag to True.
Note
This feature is already implemented for Nova interaction with the Cinder and Neutron APIs in Ocata.
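A sketch of the relevant configuration, assuming Keystone password authentication (endpoint and credentials are placeholders):
[service_user]
send_service_user_token = True
auth_type = password
auth_url = http://keystone.example.com/identity
username = nova
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default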
The libvirt compute driver now supports connecting to Veritas HyperScale volume backends.
Microversion 2.49 brings device role tagging to the attach operation of volumes and network interfaces. Both network interfaces and volumes can now be attached with an optional tag parameter. The tag is then exposed to the guest operating system through the metadata API. Unlike the original device role tagging feature, tagged attach does not support the config drive. Because the config drive was never designed to be dynamic, it only contains device tags that were set at boot time with API 2.32. Any changes made to tagged devices with API 2.49 while the server is running will only be reflected in the metadata obtained from the metadata API. Because of metadata caching, changes may take up to metadata_cache_expiration to appear in the metadata API. The default value for metadata_cache_expiration is 15 seconds. Tagged volume attachment is not supported for shelved-offloaded instances. Tagged device attachment (both volumes and network interfaces) is not supported for Cells V1 deployments.
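An illustrative tagged volume attach request at microversion 2.49 (the UUID and tag value are placeholders):
POST /servers/{server_id}/os-volume_attachments
{"volumeAttachment": {"volumeId": "<volume-uuid>", "tag": "database"}}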
The following volume attach and volume detach versioned notifications have been added to the nova-compute service:
instance.volume_attach.start
instance.volume_attach.end
instance.volume_attach.error
instance.volume_detach.start
instance.volume_detach.end
The XenAPI compute driver now supports creating servers with virtual interface and block device tags, which was introduced in the 2.32 microversion. Note that multiple paths will exist for a tagged disk for the following reasons:
HVM guests may not have the paravirtualization (PV) drivers installed, in which case the disk will be accessible on the ide bus. When the PV drivers are installed, the disk will be accessible on the xen bus.
Windows guests with PV drivers installed expose devices in a different way to Linux guests with PV drivers. Linux systems will see disk paths under /sys/devices/, but Windows guests will see them in the registry, for example HKLM\System\ControlSet001\Enum\SCSIDisk. These two disks are both on the xen bus.
See the following XenAPI documentation for details: http://xenbits.xen.org/docs/4.2-testing/misc/vbd-interface.txt
Known Issues
Due to bug 1707256, shared storage modeling in Placement is not supported by the scheduler. This means that in the Pike release series, an operator will be unable to model a shared storage pool between two or more compute hosts using the Placement service for scheduling and resource tracking.
This is not a regression, just a note about functionality that is not yet available. Support for modeling shared storage providers will be worked on in the Queens release.
The live-migration progress timeout controlled by the configuration option [libvirt]/live_migration_progress_timeout has been discovered to frequently cause live-migrations to fail with a progress timeout error, even though the live-migration is still making good progress. To minimize problems caused by these checks we have changed the default to 0, which means do not trigger a timeout. To modify when a live-migration will fail with a timeout error, please now look at [libvirt]/live_migration_completion_timeout and [libvirt]/live_migration_downtime.
Due to the changes in scheduling of bare metal nodes, additional resources may be reported as free to Placement. This happens in two cases:
An instance is deployed with a flavor smaller than a node (only possible when exact filters are not used)
Node properties were modified in ironic for a deployed node
When such instances were deployed without using a custom resource class, it is possible for the scheduler to try deploying another instance on the same node. It will cause a failure in the compute and a scheduling retry.
The recommended workaround is to assign a resource class to all ironic nodes, and to use it for scheduling of bare metal instances.
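One illustrative approach, once a node's resource_class is set in ironic, is to have the matching flavor request the custom class and zero out the standard resources (the class and flavor names are placeholders; ironic's class name maps to a CUSTOM_* resource class):
$ openstack flavor set \
    --property resources:CUSTOM_BAREMETAL_GOLD=1 \
    --property resources:VCPU=0 \
    --property resources:MEMORY_MB=0 \
    --property resources:DISK_GB=0 \
    my-baremetal-flavor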
In deployments with multiple (v2) cells, upcalls from the computes to the scheduler (or other control services) cannot occur. This prevents certain things from happening, such as the track_instance_changes updates, as well as the late affinity checks for server groups. See the related documentation on the scheduler.track_instance_changes and workarounds.disable_group_policy_check_upcall configuration options for more details. Single-cell deployments without any MQ isolation will continue to operate as they have for the time being.
Upgrade Notes
Interface attachment/detachment for ironic virt driver was implemented in in-tree network interfaces in ironic version 8.0, and this release is required for nova’s interface attachment feature to work. Prior to that release, calling VIF attach on an active ironic node using in-tree network interfaces would be basically a noop. It should not be an issue during the upgrade though, as it is required to upgrade ironic before nova.
A default_floating_pool configuration option has been added in the [neutron] group. The existing default_floating_pool option in the [DEFAULT] group is retained and should be used by nova-network users. Neutron users meanwhile should migrate to the new option.
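For example (the pool name is a placeholder):
[neutron]
default_floating_pool = public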
The information in the network.json metadata has been amended: for IPv6 networks under Neutron control, the type field has been changed from being always set to ipv6_dhcp to correctly reflecting the ipv6_address_mode option in Neutron.
The required ironic API version is updated to 1.32. The ironic service must be upgraded to an ironic release > 8.0 before nova is upgraded, otherwise all ironic integration will fail.
The type of the following config options has been changed from string to URI. Their values are checked to ensure they follow the URI format, including the scheme:
api_endpoint in the ironic group
mksproxy_base_url in the mks group
html5_proxy_base_url in the rdp group
serial_port_proxy_uri in the vmware group
The os-volume_attachments APIs no longer check the os_compute_api:os-volumes policy. They do still check os_compute_api:os-volumes-attachments policy rules. Deployers who have customized policy should confirm that their settings for os-volume_attachments policy checks are sufficient.
The new configuration option [compute]/consecutive_build_service_disable_threshold defaults to a nonzero value, which means multiple failed builds will result in a compute node auto-disabling itself.
The nova-manage project quota_usage_refresh and its alias nova-manage account quota_usage_refresh commands have been renamed to nova-manage quota refresh. Aliases are provided but these are marked as deprecated and will be removed in the next release of nova.
The default value for the [xenserver]/vif_driver configuration option has been changed to nova.virt.xenapi.vif.XenAPIOpenVswitchDriver to match the default configuration of [DEFAULT]/use_neutron=True.
The libvirt driver port filtering feature will now ignore the use_ipv6 config option. The libvirt driver provides a port filtering capability. This capability is enabled when all of the following are true:
The nova.virt.libvirt.firewall.IptablesFirewallDriver firewall driver is enabled
Security groups are disabled
Neutron port filtering is disabled/unsupported
An IPTables-compatible interface is used, e.g. an OVS VIF in hybrid mode, where the VIF is a tap device connected to OVS with a bridge
When enabled, libvirt applies IPTables rules to all interface ports that provide MAC, IP, and ARP spoofing protection.
Previously, setting the use_ipv6 config option to False prevented the generation of IPv6 rules even when there were IPv6 subnets available. This was fine when using nova-network, where the same config option was used to control generation of these subnets. However, a mismatch between this nova option and equivalent IPv6 options in neutron would have resulted in IPv6 packets being dropped. Seeing as there was no apparent reason for not allowing IPv6 traffic when the network is IPv6-capable, we now ignore this option. Instead, we use the availability of IPv6-capable subnets as an indicator that IPv6 rules should be added.
The libvirt driver port filtering feature will now ignore the allow_same_net_traffic config option. The libvirt driver provides a port filtering capability. This capability is enabled when all of the following are true:
The nova.virt.libvirt.firewall.IptablesFirewallDriver firewall driver is enabled
Security groups are disabled
Neutron port filtering is disabled/unsupported
An IPTables-compatible interface is used, e.g. an OVS VIF in hybrid mode, where the VIF is a tap device connected to OVS with a bridge
When enabled, libvirt applies IPTables rules to all interface ports that provide MAC, IP, and ARP spoofing protection.
Previously, setting the allow_same_net_traffic config option to True allowed for same network traffic when using these port filters. This was the default case and was the only case tested. Setting this to False disabled same network traffic when using the libvirt driver port filtering functionality only, however, this was neither tested nor documented.
Given that there are other better documented and better tested ways to approach this, such as through use of neutron’s native port filtering or security groups, this functionality has been removed. Users should instead rely on one of these alternatives.
Three live-migration related configuration options have been restricted by minimum values since 16.0.0 and will now raise a ValueError if a configured value is less than the minimum, instead of logging a warning as before. These configuration options are:
live_migration_downtime with minimum value 100
live_migration_downtime_steps with minimum value 3
live_migration_downtime_delay with minimum value 10
The ssl options were only used by Nova code that interacts with the Glance client. These options are now defined and read by Keystoneauth. The api_insecure option from the glance group is renamed to insecure. The following ssl options are moved to the glance group:
ca_file, now called cafile
cert_file, now called certfile
key_file, now called keyfile
Injected network templates will now ignore the use_ipv6 config option. Nova supports file injection of network templates. Putting these in a config drive is the only way to configure networking without DHCP. Previously, setting the use_ipv6 config option to False prevented the generation of IPv6 network info, even if there were IPv6 networks available. This was fine when using nova-network, where the same config option is used to control generation of these subnets. However, a mismatch between this nova option and equivalent IPv6 options in neutron would have resulted in IPv6 packets being dropped. Seeing as there was no apparent reason for not including IPv6 network info when IPv6-capable networks are present, we now ignore this option. Instead, we include info for all available networks in the template, be they IPv4 or IPv6.
In Ocata, the nova-scheduler would fall back to not calling the placement service during instance boot if old computes were running. That compatibility mode is no longer present in Pike, and as such, the scheduler fully depends on the placement service. This effectively means that in Pike Nova requires Placement API version 1.4 (Ocata).
The default policy on os-server-tags has been changed from RULE_ANY (allow all) to RULE_ADMIN_OR_OWNER. This is because server tags should only be manipulated on servers owned by the user or admin. This doesn’t have any effect on how the API works.
The default value of the [DEFAULT]/firewall_driver configuration option has been changed to nova.virt.firewall.NoopFirewallDriver to coincide with the default value of [DEFAULT]/use_neutron=True.
The minimum required version of libvirt used by the nova-compute service is now 1.2.9. The minimum required version of QEMU used by the nova-compute service is now 2.1.0. Failing to meet these minimum versions when using the libvirt compute driver will result in the nova-compute service not starting.
Parts of the compute REST API are now relying on getting information from cells via their mappings in the nova_api database. This is to support multiple cells. For example, when listing compute hosts or services, all cells will be iterated in the API and the results will be returned. This change can have impacts, however, on deployment tooling that relies on parts of the API, like listing compute hosts, before the compute hosts are mapped using the nova-manage cell_v2 discover_hosts command. If you were using nova hypervisor-list after starting new nova-compute services to tell when to run nova-manage cell_v2 discover_hosts, you should change your tooling to instead use one of the following commands:
nova service-list --binary nova-compute [--host <hostname>]
openstack compute service list --service nova-compute [--host <host>]
As a reminder, there is also the [scheduler]/discover_hosts_in_cells_interval configuration option which can be used to automatically discover hosts from the nova-scheduler service.
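For example, to have the scheduler look for unmapped hosts every five minutes:
[scheduler]
discover_hosts_in_cells_interval = 300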
Quota limits and classes are being moved to the API database for Cells v2. In this release, the online data migrations will move any quota limits and classes you have in your main database to the API database, retaining all attributes.
Note
Quota limits and classes can no longer be soft-deleted as the API database does not replicate the legacy soft-delete functionality from the main database. As such, deleted quota limits and classes are not migrated and the behavior users will experience will be the same as if a purge of deleted records was performed.
The default policy for os_compute_api:os-quota-sets:detail has been changed to permit listing of quotas with details to project users, not only to admins.
The deprecated nova cert daemon is now removed. The /os-certificates API endpoint that depended on this service now returns 410 whenever it is called.
The deprecated /os-cloudpipe API endpoint has been removed. Whenever calls are made to that endpoint it now returns a 410 response.
The console_driver configuration option in the DEFAULT group has been deprecated since the Ocata release and is now removed.
The deprecated config options to enable/disable extensions, extensions_blacklist and extensions_whitelist, have been removed. This means all API extensions are always enabled. If you modified policy, please double check you have the correct policy settings for all APIs.
All policy rules with the following naming scheme have been removed: os_compute_api:{extension_alias}:discoverable. These policy rules were used to hide an enabled extension from the list of active API extensions. Given it is no longer possible to disable any API extensions, it makes no sense to have the option to hide the fact that an API extension is active. As such, all these policy rules have been removed.
The nova.virt.libvirt.volume.glusterfs.LibvirtGlusterfsVolumeDriver volume driver has been removed. The GlusterFS volume driver in Cinder was deprecated during the Newton release and was removed from Cinder in the Ocata release, so it is effectively not maintained and therefore no longer supported. The following configuration options, previously found in the libvirt group, have been removed:
glusterfs_mount_point_base
qemu_allowed_storage_drivers
These were used by the now-removed LibvirtGlusterfsVolumeDriver volume driver and therefore no longer had any effect.
The cells topic configuration option has been removed. Please make sure your cells related message queue topic is ‘cells’.
The nova.virt.libvirt.volume.scality.LibvirtScalityVolumeDriver volume driver has been removed. The Scality volume driver in Cinder was deprecated during the Newton release and was removed from Cinder in the Ocata release, so it is effectively not maintained and therefore no longer supported. The following configuration options, previously found in the libvirt group, have been removed:
scality_sofs_config
scality_sofs_mount_point
These were used by the now-removed LibvirtScalityVolumeDriver volume driver and therefore no longer had any effect.
Configuration options related to RPC topics were deprecated in past releases and are now completely removed from nova. There was no need to let users choose the RPC topics for all services. There was little benefit from this, and it made it easy to break Nova by changing the value of topic options.
The following options are removed:
compute_topic
console_topic
consoleauth_topic
scheduler_topic
network_topic
Policy rule with name os_compute_api:os-admin-actions has been removed as it was never used by any API.
The [vmware] wsdl_location configuration option has been removed after being deprecated in 15.0.0. It was unused and should have no impact.
Configuration options related to the image file URL download feature have been removed. They were marked as deprecated because the feature to download images from glance via the filesystem is not used. Below are the removed options:
image_file_url.filesystems
image_file_url.FS.id
image_file_url.FS.mountpoint
The libvirt.num_iscsi_scan_tries option has been renamed to libvirt.num_volume_scan_tries, as the previous name suggested that this option only concerns devices connected using the iSCSI interface. It also concerns devices connected using fibre channel, scaleio and disco.
A new request_log middleware is created to log REST HTTP requests even if Nova API is not running under eventlet.wsgi. Because this is an api-paste.ini change, you will need to manually update your api-paste.ini with the one from the release to get this functionality. The new request logs will only emit when it is detected that nova-api is not running under eventlet, and will include the microversion of the request in addition to all the previously logged information.
The nova-manage api_db sync and nova-manage db sync commands previously took an optional --version parameter to determine which version to sync to. For example:
$ nova-manage api_db sync --version some-version
This is now an optional positional argument. For example:
$ nova-manage api_db sync some-version
Aliases are provided but these are marked as deprecated and will be removed in the next release of nova.
The scheduler now requests allocation candidates from the Placement service during scheduling. The allocation candidates information was introduced in the Placement API 1.10 microversion, so you should upgrade the placement service before the Nova scheduler service so that the scheduler can take advantage of the allocation candidate information.
An online data migration has been added to populate the services.uuid column in the nova database for non-deleted services records. Listing or showing services via the os-services API will have the same effect.
Nova is now configured to use the v3 version of the Cinder API. You need to ensure that the v3 version of the Cinder API is available and listed in the service catalog in order to use Nova with the default configuration option.
The base 3.0 version is identical to v2 and was introduced in the Newton release of OpenStack. In case you need Nova to continue using the v2 version, you can point it towards that by setting the catalog_info option in the nova.conf file under the cinder section, like:
[cinder]
catalog_info = volumev2:cinderv2:publicURL
Since we now use Placement to verify basic CPU/RAM/disk resources when using the FilterScheduler, the RamFilter and DiskFilter entries are being removed from the default value for the enabled_filters config option in the [filter_scheduler] group. If you are overriding this option, you probably should remove them from your version. If you are using the CachingScheduler you may wish to enable these filters, as we will not use Placement in that case.
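For reference, the resulting default should look roughly like this (check the sample config shipped with your release for the authoritative value):
[filter_scheduler]
enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter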
WSGI application scripts nova-api-wsgi and nova-metadata-wsgi are now available. They allow running the compute and metadata APIs using a WSGI server of choice (for example nginx and uwsgi, apache2 with mod_proxy_uwsgi or gunicorn). The eventlet-based servers are still available, but the WSGI options will allow greater deployment flexibility.
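One possible uwsgi invocation (the script path varies by installation method):
$ uwsgi --http :8774 --wsgi-file /usr/local/bin/nova-api-wsgi --processes 4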
Deprecation Notes
TypeAffinityFilter is deprecated for removal in the 17.0.0 Queens release. There is no replacement planned for this filter. It is fundamentally flawed in that it relies on the flavors.id primary key: if a flavor is “changed”, i.e. deleted and re-created with new values, this filter will treat it as a different flavor, thus breaking the usefulness of this filter.
The [api]/allow_instance_snapshots configuration option is now deprecated for removal. To disable snapshots in the createImage server action API, change the os_compute_api:servers:create_image and os_compute_api:servers:create_image:allow_volume_backed policies.
The configuration options baremetal_enabled_filters and use_baremetal_filters are deprecated in Pike and should only be used if your deployment still contains nodes that have not had their resource_class attribute set. See the Ironic release notes for upgrade concerns.
The following scheduler filters are deprecated in Pike: ExactRamFilter, ExactCoreFilter and ExactDiskFilter. They should only be used if your deployment still contains nodes that have not had their resource_class attribute set. See the Ironic release notes for upgrade concerns.
Cells v1, which includes the [cells] configuration options and the nova-cells service, is deprecated in favor of Cells v2. For information on Cells v2, see: https://docs.openstack.org/nova/latest/user/cells.html
[libvirt]/live_migration_progress_timeout has been deprecated, as this feature has been found not to work. See bug 1644248 for more details.
The following options, found in DEFAULT, were only used for configuring nova-network and are, like nova-network itself, now deprecated:
default_floating_pool (neutron users should use neutron.default_floating_pool instead)
ipv6_backend
firewall_driver
metadata_host
metadata_port
iptables_top_regex
iptables_bottom_regex
iptables_drop_action
ldap_dns_url
ldap_dns_user
ldap_dns_password
ldap_dns_soa_hostmaster
ldap_dns_servers
ldap_dns_base_dn
ldap_dns_soa_refresh
ldap_dns_soa_retry
ldap_dns_soa_expiry
ldap_dns_soa_minimum
dhcpbridge_flagfile
dhcpbridge
dhcp_lease_time
dns_server
use_network_dns_servers
dnsmasq_config_file
ebtables_exec_attempts
ebtables_retry_interval
fake_network
send_arp_for_ha
send_arp_for_ha_count
dmz_cidr
force_snat_range
linuxnet_interface_driver
linuxnet_ovs_integration_bridge
use_single_default_gateway
forward_bridge_interface
ovs_vsctl_timeout
networks_path
public_interface
routing_source_ip
use_ipv6
allow_same_net_traffic
When using neutron polling mode with the XenAPI driver, booting a VM will time out because nova-compute cannot receive the network-vif-plugged event. This is because it set vif[‘id’] (i.e. the neutron port uuid) on two different OVS ports: one is the XenServer VIF, the other is the tap device qvo-XXXX. Setting ‘nicira-iface-id’ on the XenServer VIF isn’t correct, so it is deprecated.
A number of nova-manage commands have been deprecated. The commands, along with the reasons for their deprecation, are listed below:
account
This allows for the creation, deletion, update and listing of user and project quotas. Operators should use the equivalent resources in the REST API instead. The quota_usage_refresh sub-command has been renamed to nova-manage quota refresh; this new command should be used instead.
agent
This allows for the creation, deletion, update and listing of “agent builds”. Operators should use the equivalent resources in the REST API instead.
host
This allows for the listing of compute hosts. Operators should use the equivalent resources in the REST API instead.
log
This allows for the filtering of errors from nova’s logs and extraction of all logs from syslog. This command has not been actively maintained in a long time, is not tested, and can be achieved using journalctl or by simply grepping through /var/log. It will simply be removed.
project
This is an alias for account and has been deprecated for the same reasons.
shell
This starts the Python interactive interpreter. It is a clone of the same functionality found in Django’s django-manage command. This command hasn’t been actively maintained in a long time and is not tested. It will simply be removed.
These commands will be removed in their entirety during the Queens cycle.
Nova support for using the Block Storage (Cinder) v2 API is now deprecated and will be removed in the 17.0.0 Queens release. The v3 API is now the default and is backward compatible with the v2 API.
The [xenserver]/vif_driver configuration option is deprecated for removal. The XenAPIOpenVswitchDriver vif driver is used for Neutron and the XenAPIBridgeDriver vif driver is used for nova-network, which itself is deprecated. In the future, the use_neutron configuration option will be used to determine which vif driver to load.
The TrustedFilter scheduler filter has been experimental since its introduction on May 18, 2012. Due to the lack of tests and activity around it, it is now deprecated and set for removal in the 17.0.0 Queens release.
Some unused policies have been deprecated. These are:
os_compute_api:os-server-groups
os_compute_api:flavors
Please note you should remove these from your policy file(s).
The wsgi_log_format configuration option is deprecated. This only applies when running nova-api under eventlet, which is no longer the preferred deployment mode.
The following APIs, which are considered proxies of the Neutron networking API, are deprecated and will result in a 404 error response in microversion 2.44:
POST /servers/{server_uuid}/action {"addFixedIp": {...}}
POST /servers/{server_uuid}/action {"removeFixedIp": {...}}
POST /servers/{server_uuid}/action {"addFloatingIp": {...}}
POST /servers/{server_uuid}/action {"removeFloatingIp": {...}}
Those server actions can be replaced by calling the Neutron API directly.
The nova-network specific API to query the server’s interfaces is deprecated: GET /servers/{server_uuid}/os-virtual-interfaces. To query attached neutron interfaces for a specific server, the GET /servers/{server_uuid}/os-interface API can be used.
Scheduling bare metal (ironic) instances using standard resource classes (VCPU, memory, disk) is deprecated and will no longer be supported in Queens. Custom resource classes should be used instead. Please refer to the ironic documentation for a detailed explanation.
The os-hosts API is deprecated as of the 2.43 microversion. Requests made with microversion >= 2.43 will result in a 404 error. To list and show host details, use the os-hypervisors API. To enable or disable a service, use the os-services API. There is no replacement for the shutdown, startup, reboot, or maintenance_mode actions, as those are system-level operations which should be outside of the control of the compute service.
The nova-manage quota refresh command has been deprecated and is now a no-op, since quota usage is counted from resources instead of being tracked separately. The command will be removed during the Queens cycle.
The --version parameter of the nova-manage api_db sync and nova-manage db sync commands has been deprecated in favor of positional arguments.
The CachingScheduler and ChanceScheduler drivers are deprecated in Pike. These are not integrated with the placement service, and their primary purpose (speed over correctness) should be addressed by the default FilterScheduler going forward. If ChanceScheduler behavior is desired (i.e. speed trumps correctness) then configuring the FilterScheduler with no enabled filters should approximate that behavior.
Security Issues
[CVE-2017-7214] Failed notification payload is dumped in logs with auth secrets
Bug Fixes
In the 2.50 microversion, the following fields are added to the GET /os-quota-class-sets and PUT /os-quota-class-sets/{id} API response:
server_groups
server_group_members
And the following fields are removed from the same APIs in the same microversion:
fixed_ips
floating_ips
security_groups
security_group_rules
networks
The POST and DELETE operations on the os-assisted-volume-snapshots API will now fail with a 400 error if the related instance is undergoing a task state transition or does not have a host, i.e. is shelved offloaded.
Fixes bug 1662699 which was a regression in the v2.1 API from the block_device_mapping_v2.boot_index validation that was performed in the legacy v2 API. With this fix, requests to create a server with boot_index=None will be treated as if boot_index was not specified, which defaults to meaning a non-bootable block device.
Fixes bug 1670522 which was a regression in the 15.0.0 Ocata release. For compute nodes running the libvirt driver with virt_type not set to “kvm” or “qemu”, e.g. “xen”, creating servers will fail by default if libvirt >= 1.3.3 and QEMU >= 2.7.0 without this fix.
Includes the fix for bug 1673613 which could cause issues when upgrading and running nova-manage cell_v2 simple_cell_setup or nova-manage cell_v2 map_cell0, where the database connection is read from config and has special characters in the URL.
Fixes bug 1691545 in which there was a significant increase in database connections because of the way connections to cell databases were being established. With this fix, objects related to database connections are cached in the API service and reused to prevent new connections being established for every communication with cell databases.
Correctly allow the use of a custom scheduler driver by using the name of the custom driver entry point in the [scheduler]/driver config option. You must also update the entry point in setup.cfg.
The I/O performance for Quobyte volumes has been increased significantly by disabling xattrs.
The ironic virt driver no longer reports an empty inventory for bare metal nodes that have instances on them. Instead the custom resource class, VCPU, memory and disk are reported as they are configured on the node.
API calls to /os-quota-sets and flavor access will now attempt to validate the project_id being operated on with Keystone. If the user token has enough permissions to perform GET /v3/projects/{project_id}, and the Keystone project does not exist, a 400 BadRequest will be returned to prevent invalid project data from being put in the Nova database. This fixes an effective silent error where the project_id would be stored even if it was not a valid project_id in the system.
Fixes bug 1581230 by removing the internal check_attach call from the Nova code, as it can cause race conditions and the checks are handled by reserve_volume in Cinder. reserve_volume is called in every volume attach scenario to provide the necessary checks and volume state validation on the Cinder side.
The physical network name will be retrieved from a multi-segment network. The current implementation will retrieve the physical network name from the first segment that provides it. This is mostly intended to support a combination of vxlan and vlan segments. Additional work will be required to support the case of multiple vlan segments associated with different physical networks.
Other Notes
The instance.shutdown.end versioned notification will have an empty ip_addresses field, since the network resources associated with the instance are deallocated before this notification is sent, which is actually more accurate. Consumers should rely on the instance.shutdown.start notification if they need the network information for the instance when it is being deleted.
The PUT /os-services/disable, PUT /os-services/enable and PUT /os-services/force-down APIs to enable, disable, or force-down a service will now only work with nova-compute services. If you are using those APIs to try to disable a non-compute service, like nova-scheduler or nova-conductor, those APIs will result in a 404 response. There really never was a good reason to disable or enable non-compute services anyway, since that would not do anything. The nova-scheduler and nova-api services check the status and forced_down fields to see if instance builds can be scheduled to a compute host or if instances can be evacuated from a downed compute host. There is nothing that relies on a disabled or downed nova-conductor or nova-scheduler service.
The [DEFAULT]/enable_new_services configuration option will now only be used to auto-disable new nova-compute services. Other services like nova-conductor, nova-scheduler and nova-osapi_compute will not be auto-disabled, since disabling them does nothing functionally, and starting in Pike the PUT /os-services/enable REST API will not be able to find non-compute services to enable them.
The 2.45 microversion is introduced which changes the response for the createImage and createBackup server action APIs to no longer return a Location response header. With microversion 2.45 those APIs now return a json dict in the response body with a single image_id key whose value is the snapshot image ID (a uuid). The old Location header in the response before microversion 2.45 is most likely broken and inaccessible by end users, since it relies on the internal Glance API server configuration and does not take into account Glance API versions.
The Placement API can be set to connect to a specific keystone endpoint interface using the os_interface option in the [placement] section inside nova.conf. This value is not required but can be used if a non-default endpoint interface is desired for connecting to the Placement service. By default, keystoneauth will connect to the “public” endpoint.
The filter scheduler will now attempt to claim a number of resources in the placement API after determining a list of potential hosts. We attempt to claim these resources for each instance in the build request, and if a claim does not succeed, we try this claim against the next potential host the scheduler selected. This claim retry process can potentially attempt claims against a large number of hosts, and we do not limit the number of hosts to attempt claims against. Claims for resources may fail due to another scheduler process concurrently claiming resources against the same compute node. This concurrent resource claim is normal and the retry of a claim request should be unusual but harmless.
With the XenAPI driver, bittorrent support has been deprecated since 15.0.0, so all bittorrent related files and unit tests have been removed.
By removing the check_attach internal call from Nova, small behavioral changes were introduced. A reserve_volume call was added to the boot from volume scenario. In case a failure occurs while building the instance, the instance goes into ERROR state while the volume stays in attaching state. The volume state will be set back to available when the instance gets deleted. An additional availability zone check is added to the volume attach flow, which results in an availability zone check when an instance gets unshelved. In case the deployment is not sensitive to availability zones and is not using the AvailabilityZoneFilter scheduler filter, the current default settings (cross_az_attach=True) allow unshelve to be performed the same way as before this change, without additional configuration.
The disabled os-pci API has been removed. This API was originally added to the v3 API, which over time became the v2.1 API, whose initial microversion is backward compatible with the v2.0 API, where the os-pci extension did not exist. The os-pci API was never enabled as a microversion in the v2.1 API and at this time no longer aligns with Nova strategically and is therefore just technical debt, so it has been removed. Since it was never enabled or exposed out of the compute REST API endpoint, there was no deprecation period for this.