Note
These are significant changes reported directly from the project teams and have not been processed in any way. Some highlights may be more significant than others. Please do not take this list as a definitive set of highlights for the release until the Open Infrastructure Foundation marketing staff have had a chance to compile a more accurate message out of these changes.
Notes:
Cyborg services have migrated to native Python threading, modernising the service architecture and positioning the project for long-term sustainability within the OpenStack ecosystem.
The cyborg-agent now validates its Placement resource provider at startup with automatic retry and backoff, and supports explicit resource provider naming to simplify deployments where accelerator and compute hostnames differ.
Cyborg has adopted current oslo.db database APIs, aligning the project’s data layer with the wider OpenStack ecosystem and ensuring continued long-term supportability.
A new driver configuration guide covers all supported accelerator types — FPGA, GPU, NIC, QAT, SSD, and PCI passthrough — providing ready-to-use configuration examples for common deployments.
Notes:
Designate completed the migration from eventlet to native Python threading across all services as part of the broader OpenStack de-eventlet effort.
Notes:
Freezer API now defaults to SQLAlchemy as the storage driver. Elasticsearch DB storage is deprecated and will be removed in the next release.
The Freezer UI in Horizon has moved from its own “Disaster Recovery” dashboard to a “Backup and Recovery” section inside the “Project” dashboard.
The deprecated Python clients used for interacting with other services have been replaced with openstacksdk.
The naming of temporary resources created by Freezer during the backup/restore process has been aligned and significantly refactored, allowing simpler tracing of each resource’s source and purpose.
Notes:
Horizon now supports Nova live migration with microversion 2.30.
A new configuration option was added to avoid full listings of containers and objects in the Swift panel, which can cause high resource consumption in Horizon.
The Key Pairs page has been rewritten in Python/Django, keeping the full functionality of the AngularJS implementation.
Integration tests for region selection and switching are added.
Multi-realm federation tests are added.
Old integration tests are removed.
Horizon now uses py313 jobs for Django check and gate queue.
Horizon now works with Selenium 4.41.0.
Horizon now works with Font Awesome 6.2.
Horizon’s XStatic dependencies are repackaged to work with XStatic 2.0.0.
Notes:
Ironic now supports NFS and CIFS/SMB transport protocols for Redfish Virtual Media boot with automatic protocol detection from BMC capabilities. Operators can configure shares via new [nfs] and [cifs] configuration sections, with per-node overrides available through driver_info.
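As an illustrative sketch of the new sections in ironic.conf — the option names shown inside [nfs] and [cifs] are hypothetical placeholders, not the real setting names, so consult the Ironic configuration reference for the exact options:

```ini
# Sketch only: section names [nfs] and [cifs] are real, but the option
# names below are hypothetical placeholders for illustration.
[nfs]
# hypothetical: NFS export hosting the virtual media images
share_source = nfs.example.com:/vmedia

[cifs]
# hypothetical: SMB share location and credentials
share_source = //smb.example.com/vmedia
username = ironic
```

Per-node overrides via driver_info take precedence over these global defaults.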
Two new deploy interfaces have been added. The autodetect deploy interface eliminates manual specification by selecting the most suitable concrete interface based on image metadata and node configuration. The noop deploy interface allows nodes to be marked active without performing OS deployment, enabling adoption of pre-existing deployments and tracking of externally managed nodes in Ironic’s inventory.
A new standalone networking service enables Ironic to manage physical network switch configurations for bare metal nodes without requiring Neutron. Running independently from the main conductor, this service enables network management for standalone Ironic deployments.
The networking-generic-switch and networking-baremetal projects now support VXLAN and Geneve overlay networks for bare metal nodes. Networking-baremetal facilitates the attachment with OVN and pairs with networking-generic-switch to facilitate VXLAN VNI attachments. See the documentation for these projects for more detailed information.
Trait-based port scheduling enables more flexible and automated network configuration, allowing ports to be scheduled based on traits and physical network attributes, improving multi-network bare metal deployments.
Redfish inspection has reached parity with in-band agent inspection by gaining the ability to inspect PCI bus, disk controllers, LLDP, and more system and NIC details.
Notes:
A new environment variable named OS_MANILA_DISABLE_EVENTLET_PATCHING was introduced to allow running the manila scheduler, data and share services in native threading mode instead of eventlet’s green threads. The manila-api service is unaffected when deployed behind an external WSGI server. This option is being introduced as a technology preview. A future release of Manila will remove eventlet entirely and rely on native threads. We do not recommend using this in production environments yet.
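As a minimal sketch, assuming the variable follows the usual boolean convention of accepting "true", native threading mode could be enabled for the affected services like so:

```shell
# Technology preview: opt the manila scheduler, data, and share services
# out of eventlet monkey patching so they run with native threads.
# The accepted value is assumed to follow the usual boolean convention.
export OS_MANILA_DISABLE_EVENTLET_PATCHING=true
```

Under systemd, the equivalent would be an Environment= line in each service unit; manila-api behind an external WSGI server needs no change.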
Users can now set and unset metadata on share replicas. Such metadata can be used by the storage driver to determine replication strategies.
Introduced support for QoS type and QoS type specs. Administrators can now define performance limits such as throughput or IOPS throttling using either share type extra-specs or the new dedicated QoS type entities.
The Dell PowerScale driver now supports deduplication, managing and unmanaging shares, shrinking shares, and mounting snapshots.
Users can now specify a custom export location when managing a share. This ensures that the share retains a predictable mount path after being adopted by Manila.
A back end driver for HPE Alletra MP B10000 is now available.
The NetApp ONTAP driver now supports synchronous replication policies and aggregate level encryption.
It is now possible to boot the generic driver’s service instances in DHSS=True mode from either an image or a cinder volume.
The Manila V1 API has been removed.
The Manila shell utility has been removed.
The Manila UI now supports manipulating metadata for share snapshots, share networks and export locations.
Notes:
The network IP availability details API extension was added, providing more information on IP usage in subnets and allocation pools.
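For instance, the existing OpenStack client commands for IP availability should surface the additional detail once the extension is enabled (whether the client already renders every new field is an assumption worth verifying against your client version):

```shell
# Summarize IP usage across networks, then drill into one network's
# per-subnet availability; <network-id> is a placeholder.
openstack ip availability list
openstack ip availability show <network-id>
```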
OVN BGP capabilities have been integrated into the Neutron OVN driver.
Additional OVN config options were added to better support scalability.
ML2/OVN now supports North/South routing for external (SR-IOV, baremetal) ports.
ML2/OVN now supports allowed address pairs with virtual MAC addresses.
Notes:
Nova now supports parallel live migrations via a new
[libvirt] live_migration_parallel_connections config option,
enabling multiple connections for memory transfer during live migration
to improve speed.
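A minimal sketch of the new option in nova.conf — the value of 4 is only an example, and the supported range depends on the QEMU/libvirt versions in use:

```ini
[libvirt]
# Open multiple connections for memory transfer during live migration.
live_migration_parallel_connections = 4
```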
Nova now enables one IOThread per QEMU instance by default, offloading disk I/O processing from vCPU threads to improve performance. For real-time instances, the IOThread is pinned to the same cores as the emulator thread to avoid impacting dedicated vCPUs.
Nova now supports live migration of instances with vTPM devices when
using host secret security mode. A new hw:tpm_secret_security extra
spec allows operators to select this mode, where the TPM secret is
persisted in libvirt and transferred over RPC to the destination during
migration. Instance owners can resize existing legacy vTPM instances to
a flavor with hw:tpm_secret_security=host to opt in to live
migration. Note that this resize must be performed by the instance owner
due to Barbican secret ownership constraints, unless the admin has been
granted appropriate ACLs in Barbican. The legacy user secret security
mode does not yet support live migration but may do so in a future
release once additional deployment and API changes are completed.
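As an illustrative sketch (the flavor and server names are placeholders), an instance owner could opt a legacy vTPM instance into live migration by resizing to a flavor carrying the new extra spec:

```shell
# Create or update a flavor that selects host secret security mode.
openstack flavor set vtpm-migratable --property hw:tpm_secret_security=host

# Resize the legacy vTPM instance to that flavor (run as the instance
# owner, per the Barbican secret ownership constraint noted above).
openstack server resize --flavor vtpm-migratable my-vtpm-server
openstack server resize confirm my-vtpm-server
```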
Starting from microversion 2.101, the volume-attach API is now asynchronous, returning HTTP 202 instead of blocking until completion. This reduces API response time by offloading the operation to nova-conductor.
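For example, requesting the new microversion makes the attach call return immediately; the server and volume IDs below are placeholders, and the client in use must support compute API microversion negotiation:

```shell
# With microversion 2.101 the API returns HTTP 202 without waiting for
# the attachment to complete, so poll the volume status afterwards.
openstack --os-compute-api-version 2.101 server add volume <server> <volume>
openstack volume show <volume> -f value -c status
```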
Nova’s libvirt driver now delegates UEFI firmware selection to libvirt instead of handling it internally. Libvirt’s built-in auto-selection picks the best firmware file based on requested features (including Secure Boot and AMD SEV), and supports additional firmware types like ROM.
Nova now has full OpenAPI schema coverage, with JSON Schema for request and response bodies across all API endpoints, enabling future auto-generation of OpenAPI specifications.
Experimental feature: Nova services now support graceful shutdown (part 1 of a larger effort). A second RPC server is introduced in the compute service to handle in-progress operations during shutdown. Configurable timeouts control how long the service waits for ongoing tasks to complete before fully stopping, preventing operations from being left in an unrecoverable state.
Experimental feature: Nova services can run in native threading mode as an alternative to eventlet. Please try it in a non-production environment and share your success or failure with us on the openstack-discuss mailing list or via the Nova bug tracker.
Notes:
The Ansible Core version has been upgraded to the 2.19 release.
Implemented usage of OpenBao (a fork of HashiCorp Vault) as a PKI provider for TLS self-signed certificates.
Group name definitions were changed to contain only underscores as separators, in order to conform with the Ansible group naming convention.
Added the ability to use uv as a Python package installer into virtualenvs. Wheel builds and virtualenv creation are still handled in the “old” fashion.
Notes:
Zone migration strategies now have enhanced testing enabled, supporting cross-zone instance movements for improved workload distribution and high availability scenarios.
ActionPlan cancellation behavior has been standardized across threading and eventlet modes, ensuring consistent and predictable handling of in-flight operations during maintenance activities.
The API, Decision Engine, and Applier now all support native threading as an alternative to Eventlet. This experimental native threading mode enables better operational visibility and health tracking of Watcher’s execution components.
Nova client integration has been modernized with wrapper classes that provide cleaner interfaces and handle OpenStack extension attributes transparently. Finally, the migration to openstacksdk is now complete.
A new automatic skipping mechanism has been added to all Watcher action types. This feature identifies non-viable execution conditions upfront, marking actions as skipped to prevent unnecessary ‘Failed’ actions and ensure more accurate reporting of optimization results.
Watcher now supports deploying multiple instances in active-active configurations for both the Decision Engine and the Applier services. This enhancement is underpinned by the introduction of dedicated service monitors designed to track the health of each instance and trigger automated recovery workflows, such as re-queuing pending audits or cancelling stale action plans, to ensure continuous optimization without manual intervention.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.