Victoria Series Release Notes
4.0.0-5
Bug Fixes
Bumped the hacking version to 4.0.0 to resolve the pyflakes version conflict occurring on stable/victoria. We usually do not bump the hacking version on a stable branch, but it was required here to resolve the version conflict, and the new hacking checks also required code changes.
3.0.0
Upgrade Notes
Python 2.7 support has been dropped. The minimum version of Python now supported by placement is Python 3.6.
Bug Fixes
When a single resource provider receives many concurrent allocation writes, retries may be performed server side when there is a resource provider generation conflict. When those retries are all consumed, the client receives an HTTP 409 response and may choose to try the request again.
In an environment where high levels of concurrent allocation writes are common, such as a busy clustered hypervisor, the default retry count may be too low. See story 2006467.
A new configuration setting, [placement]/allocation_conflict_retry_count, has been added to address this situation. It defines the number of times to retry, server-side, writing allocations when there is a resource provider generation conflict.
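A minimal placement.conf fragment raising the retry count might look as follows; the value 10 is an arbitrary illustration, not a recommended setting:

```ini
[placement]
# Number of server-side retries for allocation writes when a resource
# provider generation conflict occurs (10 is an example value).
allocation_conflict_retry_count = 10
```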
2.0.0.0rc1
Prelude
The 2.0.0 release of placement is the first release where placement is available solely from its own project and must be installed separately from nova. If the extracted placement is not already in use, the Stein version of placement must be installed prior to upgrading to Train. See Upgrading from Nova to Placement for details.
2.0.0 adds a suite of features which, combined, enable targeting candidate providers that have complex trees modeling NUMA layouts, multiple devices, and networks where affinity between and grouping among the members of the tree are required. These features will help to enable NFV and other high performance workloads in the cloud.
Also added is support for forbidden aggregates which allows groups of resource providers to only be used for specific purposes, such as reserving a group of compute nodes for licensed workloads.
Extensive benchmarking and profiling have led to massive performance enhancements, especially in environments with large numbers of resource providers and high concurrency.
New Features
In microversion 1.34 the body of the response to a GET /allocation_candidates request has been extended to include a mappings field with each allocation request. The value is a dictionary associating request group suffixes with the uuids of those resource providers that satisfy the identified request group. For convenience, this mapping can be included in the request payload for POST /allocations, PUT /allocations/{consumer_uuid}, and POST /reshaper, but it will be ignored.
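As an illustrative sketch (the uuid and the _COMPUTE suffix below are invented, not from a real deployment), one allocation request in a 1.34 response carries a mappings entry that a client can read directly instead of re-deriving which provider satisfied which group:

```python
# Illustrative shape of a single allocation request from a microversion
# 1.34 GET /allocation_candidates response; the uuid and "_COMPUTE"
# suffix are made up for this sketch.
allocation_request = {
    "allocations": {
        "b6b069d6-69fc-48d8-87ff-0d64a51b1f36": {
            "resources": {"VCPU": 1, "MEMORY_MB": 512},
        },
    },
    "mappings": {
        # request group suffix -> providers satisfying that group
        "_COMPUTE": ["b6b069d6-69fc-48d8-87ff-0d64a51b1f36"],
    },
}

# Look up which provider(s) satisfied the "_COMPUTE" request group.
compute_providers = allocation_request["mappings"]["_COMPUTE"]
```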
From microversion 1.36, a new same_subtree query parameter on GET /allocation_candidates is supported. It accepts a comma-separated list of request group suffix strings ($S). Each must exactly match a suffix on a granular group somewhere else in the request. Importantly, the identified request groups need not have a resources$S. If this is provided, at least one of the resource providers satisfying a specified request group must be an ancestor of the rest. The same_subtree query parameter can be repeated and each repeated group is treated independently.
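A sketch of building such a request with Python's standard library; the resource classes and the _COMPUTE/_ACCEL suffixes are hypothetical, and same_subtree ties those two groups to a common subtree:

```python
from urllib.parse import urlencode

# Hypothetical granular groups: require that the providers satisfying
# the _COMPUTE and _ACCEL groups share a common ancestor.
params = [
    ("resources_COMPUTE", "VCPU:2"),
    ("resources_ACCEL", "FPGA:1"),
    ("same_subtree", "_COMPUTE,_ACCEL"),
]
query = "?" + urlencode(params)
```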
Microversion 1.35 adds support for the root_required query parameter to the GET /allocation_candidates API. It accepts a comma-delimited list of trait names, each optionally prefixed with ! to indicate a forbidden trait, in the same format as the required query parameter. This restricts allocation requests in the response to only those whose (non-sharing) tree's root resource provider satisfies the specified trait requirements.
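For instance, a request requiring AVX2-capable roots while excluding roots in a licensed pool could be built as below; the CUSTOM_WINDOWS_LICENSED_POOL trait name is invented for the sketch:

```python
from urllib.parse import urlencode

# Require the tree's root provider to have AVX2, and forbid roots
# carrying a (hypothetical) CUSTOM_WINDOWS_LICENSED_POOL trait.
query = "?" + urlencode({
    "resources": "VCPU:1,MEMORY_MB:1024",
    "root_required": "HW_CPU_X86_AVX2,!CUSTOM_WINDOWS_LICENSED_POOL",
})
```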
In microversion 1.33, the syntax for granular groupings of resource, required/forbidden trait, and aggregate association requests introduced in 1.25 has been extended to allow, in addition to numbers, strings from 1 to 64 characters in length consisting of a-z, A-Z, 0-9, _, and -. This is done to allow naming conventions (e.g., resources_COMPUTE and resources_NETWORK) to emerge in situations where multiple services are collaborating to make requests.
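The suffix rule above can be captured as a simple regular expression; this is an illustrative checker, not placement's actual validation code:

```python
import re

# Granular group suffix rule from microversion 1.33: 1 to 64 characters
# drawn from a-z, A-Z, 0-9, underscore and hyphen.
GROUP_SUFFIX = re.compile(r"^[a-zA-Z0-9_-]{1,64}$")

def is_valid_suffix(suffix: str) -> bool:
    """Return True if the string is usable as a granular group suffix."""
    return GROUP_SUFFIX.match(suffix) is not None
```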
Add support for forbidden aggregates in the member_of query parameter in GET /resource_providers and GET /allocation_candidates. From microversion 1.32, forbidden aggregates are prefixed with a !.
This negative expression can also be used in multiple member_of parameters:
?member_of=in:<agg1>,<agg2>&member_of=<agg3>&member_of=!<agg4>
would translate logically to "Candidate resource providers must be in at least one of agg1 or agg2, definitely in agg3, and definitely not in agg4."
We do NOT support ! within the in: list:
?member_of=in:<agg1>,<agg2>,!<agg3>
but we do support the !in: prefix:
?member_of=!in:<agg1>,<agg2>,<agg3>
which is equivalent to:
?member_of=!<agg1>&member_of=!<agg2>&member_of=!<agg3>
where returned resource providers must not be in agg1, agg2, or agg3.
Specifying forbidden aggregates in granular requests, member_of<N>, is also supported from the same microversion, 1.32.
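The combination rules above can be sketched as a small predicate. This is an illustrative re-implementation of the matching semantics, not placement's actual code:

```python
def matches_member_of(provider_aggs, member_of_values):
    """Illustrative evaluation of member_of semantics (microversion 1.32).

    Each value is one of:
      in:<a>,<b>   provider must be in at least one of a, b
      !in:<a>,<b>  provider must be in none of a, b
      <a>          provider must be in a
      !<a>         provider must not be in a
    Every member_of clause must hold (logical AND across parameters).
    """
    aggs = set(provider_aggs)
    for value in member_of_values:
        if value.startswith("!in:"):
            if aggs & set(value[4:].split(",")):
                return False
        elif value.startswith("in:"):
            if not aggs & set(value[3:].split(",")):
                return False
        elif value.startswith("!"):
            if value[1:] in aggs:
                return False
        elif value not in aggs:
            return False
    return True
```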
Upgrade Notes
The Missing Root Provider IDs upgrade check in the placement-status upgrade check command will now produce a failure if it detects any resource_providers records with a null root_provider_id value. Run the placement-manage db online_data_migrations command to heal these types of records.
Deprecation Notes
The [placement]/policy_file configuration option is deprecated and its usage is being replaced with the more standard [oslo_policy]/policy_file option. If you do not override policy with custom rules, no action is needed. If you do override the placement default policy, you will need to update your configuration to use the [oslo_policy]/policy_file option. By default, the [oslo_policy]/policy_file option will be used if the file it points at exists.
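A migrated configuration fragment could look like the following; the file path is an example and should point at your existing custom policy file:

```ini
[oslo_policy]
# Standard replacement for the deprecated [placement]/policy_file option.
# The path below is illustrative.
policy_file = /etc/placement/policy.yaml
```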
Bug Fixes
With the fix for story/2005842, OSProfiler support works again in the placement WSGI application.
Limiting nested resource providers with the limit=N query parameter when calling GET /allocation_candidates could result in incomplete provider summaries. This is now fixed so that all resource providers that are in the same trees as any provider mentioned in the limited allocation requests are shown in the provider summaries collection. For more information see story/2005859.
1.0.0.0rc1
Prelude
The 1.0.0 release of Placement is the first release where the Placement code is hosted in its own repository and managed as its own OpenStack project. Because of this, the majority of changes are not user-facing. There are a small number of new features (including microversion 1.31) and bug fixes, listed below.
A new document, Upgrading from Nova to Placement, has been created. It explains the steps required to upgrade to extracted Placement from Nova and to migrate data from the nova_api database to the placement database.
New Features
Add support for the in_tree query parameter to the GET /allocation_candidates API. It accepts a UUID for a resource provider. If this parameter is provided, the only resource providers returned will be those in the same tree as the given resource provider. The numbered syntax in_tree<N> is also supported. This restricts providers satisfying the Nth granular request group to the tree of the specified provider. This may be redundant with other in_tree<N> values specified in other groups (including the unnumbered group). However, it can be useful in cases where a specific resource (e.g. DISK_GB) needs to come from a specific sharing provider (e.g. shared storage).
For example, a request for VCPU and VGPU resources from myhost and DISK_GB resources from sharing1 might look like:
?resources=VCPU:1&in_tree=<myhost_uuid>
&resources1=VGPU:1&in_tree1=<myhost_uuid>
&resources2=DISK_GB:100&in_tree2=<sharing1_uuid>
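The query above can be assembled with Python's standard library; the two uuids are placeholders standing in for <myhost_uuid> and <sharing1_uuid>:

```python
from urllib.parse import urlencode

# Placeholder uuids for the compute host tree and the sharing provider.
myhost_uuid = "8d6e2f3a-7c1b-4e0d-9a5f-1b2c3d4e5f60"
sharing1_uuid = "0f9e8d7c-6b5a-4c3d-8e2f-a1b2c3d4e5f6"

# Unnumbered group for VCPU, group 1 for VGPU (both from myhost's tree),
# group 2 for DISK_GB from the sharing provider's tree.
params = [
    ("resources", "VCPU:1"),
    ("in_tree", myhost_uuid),
    ("resources1", "VGPU:1"),
    ("in_tree1", myhost_uuid),
    ("resources2", "DISK_GB:100"),
    ("in_tree2", sharing1_uuid),
]
query = "?" + urlencode(params)
```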
A configuration setting [placement_database]/sync_on_startup is added which, if set to True, will cause database schema migrations to be called when the placement web application is started. This avoids the need to call placement-manage db sync separately.
To preserve backwards compatibility and avoid unexpected changes, the default of the setting is False.
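Enabling the behavior is a one-line configuration change:

```ini
[placement_database]
# Run schema migrations automatically when the web application starts,
# avoiding a separate "placement-manage db sync" step. Defaults to False.
sync_on_startup = True
```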
A new online data migration has been added to populate missing root_provider_id values in the resource_providers table. This can be run during the normal placement-manage db online_data_migrations routine. See Bug #1803925 for more details.
Upgrade Notes
An upgrade check was added to the placement-status upgrade check command for incomplete consumers, which can be remedied by running the placement-manage db online_data_migrations command.
0.1.0
Upgrade Notes
A placement-status upgrade check command is added which can be used to check the readiness of a placement deployment before initiating an upgrade.
Bug Fixes
Previously, when an aggregate was specified by the member_of query parameter in the GET /allocation_candidates operation, the non-root providers in the aggregate were excluded unless their root provider was also in the aggregate. With this release, the non-root providers directly associated with the aggregate are also considered. See Bug #1792503 for details.