The following is an overview of all available configuration options in Nova. For a sample configuration file, refer to Sample Configuration File.
internal_service_availability_zone
¶Type: | string |
---|---|
Default: | internal |
Availability zone for internal services.
This option determines the availability zone for the various internal nova services, such as 'nova-scheduler', 'nova-conductor', etc.
Possible values:
default_availability_zone
¶Type: | string |
---|---|
Default: | nova |
Default availability zone for compute services.
This option determines the default availability zone for 'nova-compute' services, which will be used if the service(s) do not belong to aggregates with availability zone metadata.
Possible values:
default_schedule_zone
¶Type: | string |
---|---|
Default: | <None> |
Default availability zone for instances.
This option determines the default availability zone for instances, which will be used when a user does not specify one when creating an instance. The instance(s) will be bound to this availability zone for their lifetime.
Possible values:
password_length
¶Type: | integer |
---|---|
Default: | 12 |
Minimum Value: | 0 |
Length of generated instance admin passwords.
instance_usage_audit_period
¶Type: | string |
---|---|
Default: | month |
Time period to generate instance usages for. It is possible to define an optional offset to the given period by appending an @ character followed by a number defining the offset.
Possible values:
hour, day, month or year
For example, month@15 will result in monthly audits starting on the 15th day of the month.
use_rootwrap_daemon
¶Type: | boolean |
---|---|
Default: | false |
Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes.
rootwrap_config
¶Type: | string |
---|---|
Default: | /etc/nova/rootwrap.conf |
Path to the rootwrap configuration file.
The goal of the root wrapper is to allow a service-specific unprivileged user to run a number of actions as the root user in the safest manner possible. The configuration file used here must match the one defined in the sudoers entry.
tempdir
¶Type: | string |
---|---|
Default: | <None> |
Explicitly specify the temporary working directory.
monkey_patch
¶Type: | boolean |
---|---|
Default: | false |
Determine if monkey patching should be applied.
Related options:
monkey_patch_modules: This must have values set for this option to have any effect.
Warning
This option is deprecated for removal since 17.0.0. Its value may be silently ignored in the future.
Reason: | Monkey patching nova is not tested, not supported, and is a barrier for interoperability. |
---|
monkey_patch_modules
¶Type: | list |
---|---|
Default: | nova.compute.api:nova.notifications.notify_decorator |
List of modules/decorators to monkey patch.
This option allows you to patch a decorator for all functions in specified modules.
Possible values:
Related options:
monkey_patch: This must be set to True for this option to have any effect.
Warning
This option is deprecated for removal since 17.0.0. Its value may be silently ignored in the future.
Reason: | Monkey patching nova is not tested, not supported, and is a barrier for interoperability. |
---|
compute_driver
¶Type: | string |
---|---|
Default: | <None> |
Defines which driver to use for controlling virtualization.
Possible values:
libvirt.LibvirtDriver
xenapi.XenAPIDriver
fake.FakeDriver
ironic.IronicDriver
vmwareapi.VMwareVCDriver
hyperv.HyperVDriver
powervm.PowerVMDriver
allow_resize_to_same_host
¶Type: | boolean |
---|---|
Default: | false |
Allow destination machine to match source for resize. Useful when testing in single-host environments. By default it is not allowed to resize to the same host. Setting this option to true will add the same host to the destination options. Also set to true if you allow the ServerGroupAffinityFilter and need to resize.
non_inheritable_image_properties
¶Type: | list |
---|---|
Default: | cache_in_nova,bittorrent,img_signature_hash_method,img_signature,img_signature_key_type,img_signature_certificate_uuid |
Image properties that should not be inherited from the instance when taking a snapshot.
This option gives an opportunity to select which image-properties should not be inherited by newly created snapshots.
Possible values:
multi_instance_display_name_template
¶Type: | string |
---|---|
Default: | %(name)s-%(count)d |
When creating multiple instances with a single request using the os-multiple-create API extension, this template will be used to build the display name for each instance. The benefit is that the instances end up with different hostnames. Example display names when creating two VM's: name-1, name-2.
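For illustration, a deployment that prefers UUID-based suffixes over a simple counter could set something like the following (a sketch; it assumes %(uuid)s is accepted alongside the %(name)s and %(count)d keys shown in the default):
multi_instance_display_name_template = %(name)s-%(uuid)s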
Possible values:
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | This config changes API behaviour. All changes in API behaviour should be discoverable. |
---|
max_local_block_devices
¶Type: | integer |
---|---|
Default: | 3 |
Maximum number of devices that will result in a local image being created on the hypervisor node.
A negative number means unlimited. Setting max_local_block_devices to 0 means that any request that attempts to create a local disk will fail. This option is meant to limit the number of local disks (the root local disk that results from --image being used, and any other ephemeral and swap disks). 0 does not mean that images will be automatically converted to volumes and boot instances from volumes - it just means that all requests that attempt to create a local disk will fail.
Possible values:
compute_monitors
¶Type: | list |
---|---|
Default: | '' |
A comma-separated list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the "cpu." namespace is assumed for backwards-compatibility.
NOTE: Only one monitor per namespace (For example: cpu) can be loaded at a time.
Possible values:
An empty list will disable the feature (Default).
An example value that would enable both the CPU and NUMA memory bandwidth monitors that use the virt driver variant:
compute_monitors = cpu.virt_driver, numa_mem_bw.virt_driver
default_ephemeral_format
¶Type: | string |
---|---|
Default: | <None> |
The default format an ephemeral_volume will be formatted with on creation.
Possible values:
ext2
ext3
ext4
xfs
ntfs (only for Windows guests)
vif_plugging_is_fatal
¶Type: | boolean |
---|---|
Default: | true |
Determine if instance should boot or fail on VIF plugging timeout.
Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval.
This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready.
Possible values:
vif_plugging_timeout
¶Type: | integer |
---|---|
Default: | 300 |
Minimum Value: | 0 |
Timeout for Neutron VIF plugging event message arrival.
Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see 'vif_plugging_is_fatal').
Related options:
If vif_plugging_timeout is set to zero and vif_plugging_is_fatal is False, events should not be expected to arrive at all.
injected_network_template
¶Type: | string |
---|---|
Default: | $pybasedir/nova/virt/interfaces.template |
Path to '/etc/network/interfaces' template.
The path to a template file for the '/etc/network/interfaces'-style file, which will be populated by nova and subsequently used by cloudinit. This provides a method to configure network connectivity in environments without a DHCP server.
The template will be rendered using the Jinja2 template engine, and receive a top-level key called interfaces. This key will contain a list of dictionaries, one for each interface.
Refer to the cloudinit documentation for more information.
Possible values:
Related options:
flat_injected: This must be set to True to ensure nova embeds network configuration information in the metadata provided through the config drive.
preallocate_images
¶Type: | string |
---|---|
Default: | none |
Valid Values: | none, space |
The image preallocation mode to use.
Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. This ensures immediate feedback is given if enough space isn't available. In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation.
Possible values:
use_cow_images
¶Type: | boolean |
---|---|
Default: | true |
Enable use of copy-on-write (cow) images.
QEMU/KVM allow the use of qcow2 as backing files. By disabling this, backing files will not be used.
force_raw_images
¶Type: | boolean |
---|---|
Default: | true |
Force conversion of backing images to raw format.
Possible values:
Related options:
compute_driver: Only the libvirt driver uses this option.
virt_mkfs
¶Type: | multi-valued |
---|---|
Default: | '' |
Name of the mkfs commands for ephemeral device.
The format is <os_type>=<mkfs command>
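For illustration, each entry maps a guest os_type to the mkfs command to run (a sketch; the %(target)s substitution and the exact commands are assumptions, not fixed requirements):
virt_mkfs = linux=mkfs.ext4 -F %(target)s
virt_mkfs = windows=mkfs.ntfs --force --fast %(target)s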
resize_fs_using_block_device
¶Type: | boolean |
---|---|
Default: | false |
Enable resizing of filesystems via a block device.
If enabled, attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw).
timeout_nbd
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 0 |
Amount of time, in seconds, to wait for NBD device start up.
image_cache_subdirectory_name
¶Type: | string |
---|---|
Default: | _base |
Location of cached images.
This is NOT the full path - just a folder name relative to '$instances_path'. For per-compute-host cached images, set to '_base_$my_ip'
remove_unused_base_images
¶Type: | boolean |
---|---|
Default: | true |
Should unused base images be removed?
remove_unused_original_minimum_age_seconds
¶Type: | integer |
---|---|
Default: | 86400 |
Unused unresized base images younger than this will not be removed.
pointer_model
¶Type: | string |
---|---|
Default: | usbtablet |
Valid Values: | <None>, ps2mouse, usbtablet |
Generic property to specify the pointer type.
Input devices allow interaction with a graphical framebuffer. For example to provide a graphic tablet for absolute cursor movement.
If set, the 'hw_pointer_model' image property takes precedence over this configuration option.
Possible values:
Related options:
vcpu_pin_set
¶Type: | string |
---|---|
Default: | <None> |
Defines which physical CPUs (pCPUs) can be used by instance virtual CPUs (vCPUs).
Possible values:
A comma-separated list of physical CPU numbers that virtual CPUs can be allocated to by default. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:
vcpu_pin_set = "4-12,^8,15"
reserved_huge_pages
¶Type: | unknown type |
---|---|
Default: | <None> |
Number of huge/large memory pages to reserve per NUMA host cell.
Possible values:
A list of valid key=value pairs which reflect NUMA node ID, page size (default unit is KiB) and number of pages to be reserved. For example:
reserved_huge_pages = node:0,size:2048,count:64
reserved_huge_pages = node:1,size:1GB,count:1
In this example, 64 pages of 2MiB are reserved on NUMA node 0, and one page of 1GiB is reserved on NUMA node 1.
reserved_host_disk_mb
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
Amount of disk resources in MB to make always available to the host. The disk usage gets reported back to the scheduler from nova-compute running on the compute nodes. To prevent the disk resources from being considered as available, this option can be used to reserve disk space for that host.
Possible values:
reserved_host_memory_mb
¶Type: | integer |
---|---|
Default: | 512 |
Minimum Value: | 0 |
Amount of memory in MB to reserve for the host so that it is always available to host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host memory from being considered as available, this option is used to reserve memory for the host.
Possible values:
reserved_host_cpus
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
Number of physical CPUs to reserve for the host. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host CPU from being considered as available, this option is used to reserve random pCPU(s) for the host.
Possible values:
cpu_allocation_ratio
¶Type: | floating point |
---|---|
Default: | 0.0 |
Minimum Value: | 0.0 |
This option helps you specify virtual CPU to physical CPU allocation ratio.
From Ocata (15.0.0) this is used to influence the hosts selected by the Placement API. Note that when Placement is used, the CoreFilter is redundant, because the Placement API will have already filtered out hosts that would have failed the CoreFilter.
This configuration specifies the ratio for CoreFilter, which can be set per compute node. For AggregateCoreFilter, it will fall back to this configuration value if no per-aggregate setting is found.
NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) or compute node(s) will be used and defaulted to 16.0. Once set to a non-default value, it is not possible to "unset" the config to get back to the default behavior. If you want to reset back to the default, explicitly specify 16.0.
NOTE: As of the 16.0.0 Pike release, this configuration option is ignored for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
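As a hedged illustration, a compute node meant to overcommit CPU at 4:1 rather than the implicit 16:1 default could set:
cpu_allocation_ratio = 4.0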
Possible values:
ram_allocation_ratio
¶Type: | floating point |
---|---|
Default: | 0.0 |
Minimum Value: | 0.0 |
This option helps you specify virtual RAM to physical RAM allocation ratio.
From Ocata (15.0.0) this is used to influence the hosts selected by the Placement API. Note that when Placement is used, the RamFilter is redundant, because the Placement API will have already filtered out hosts that would have failed the RamFilter.
This configuration specifies the ratio for RamFilter, which can be set per compute node. For AggregateRamFilter, it will fall back to this configuration value if no per-aggregate setting is found.
NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) or compute node(s) will be used and defaulted to 1.5. Once set to a non-default value, it is not possible to "unset" the config to get back to the default behavior. If you want to reset back to the default, explicitly specify 1.5.
NOTE: As of the 16.0.0 Pike release, this configuration option is ignored for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
Possible values:
disk_allocation_ratio
¶Type: | floating point |
---|---|
Default: | 0.0 |
Minimum Value: | 0.0 |
This option helps you specify virtual disk to physical disk allocation ratio.
From Ocata (15.0.0) this is used to influence the hosts selected by the Placement API. Note that when Placement is used, the DiskFilter is redundant, because the Placement API will have already filtered out hosts that would have failed the DiskFilter.
A ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances.
NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) or compute node(s) will be used and defaulted to 1.0. Once set to a non-default value, it is not possible to "unset" the config to get back to the default behavior. If you want to reset back to the default, explicitly specify 1.0.
NOTE: As of the 16.0.0 Pike release, this configuration option is ignored for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
Possible values:
console_host
¶Type: | string |
---|---|
Default: | <current_hostname> |
Console proxy host to be used to connect to instances on this host. It is the publicly visible name for the console host.
Possible values:
default_access_ip_network_name
¶Type: | string |
---|---|
Default: | <None> |
Name of the network to be used to set access IPs for instances. If there are multiple IPs to choose from, an arbitrary one will be chosen.
Possible values:
defer_iptables_apply
¶Type: | boolean |
---|---|
Default: | false |
Whether to batch up the application of IPTables rules during a host restart and apply all at the end of the init phase.
instances_path
¶Type: | string |
---|---|
Default: | $state_path/instances |
Specifies where instances are stored on the hypervisor's disk. It can point to locally attached storage or a directory on NFS.
Possible values:
Related options:
[workarounds]/ensure_libvirt_rbd_instance_dir_cleanup
instance_usage_audit
¶Type: | boolean |
---|---|
Default: | false |
This option enables periodic compute.instance.exists notifications. Each compute node must be configured to generate system usage data. These notifications are consumed by OpenStack Telemetry service.
live_migration_retry_count
¶Type: | integer |
---|---|
Default: | 30 |
Minimum Value: | 0 |
Maximum number of 1 second retries in live_migration. It specifies the number of retries to iptables when it complains. It happens when a user continuously sends live-migration requests to the same host, leading to concurrent requests to iptables.
Possible values:
resume_guests_state_on_host_boot
¶Type: | boolean |
---|---|
Default: | false |
This option specifies whether to start guests that were running before the host rebooted. It ensures that all of the instances on a Nova compute node resume their state each time the compute node boots or restarts.
network_allocate_retries
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
Number of times to retry network allocation. It is required to attempt network allocation retries if the virtual interface plug fails.
Possible values:
max_concurrent_builds
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 0 |
Limits the maximum number of instance builds to run concurrently by nova-compute. Compute service can attempt to build an infinite number of instances, if asked to do so. This limit is enforced to avoid building an unlimited number of instances concurrently on a compute node. This value can be set per compute node.
Possible Values:
max_concurrent_live_migrations
¶Type: | integer |
---|---|
Default: | 1 |
Maximum number of live migrations to run concurrently. This limit is enforced to avoid outbound live migrations overwhelming the host/network and causing failures. It is not recommended that you change this unless you are very sure that doing so is safe and stable in your environment.
Possible values:
block_device_allocate_retries
¶Type: | integer |
---|---|
Default: | 60 |
Number of times to retry block device allocation on failures. Starting with Liberty, Cinder can use image volume cache. This may help with block device allocation performance. Look at the cinder image_volume_cache_enabled configuration option.
Possible values:
sync_power_state_pool_size
¶Type: | integer |
---|---|
Default: | 1000 |
Number of greenthreads available for use to sync power states.
This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons, for example, with Ironic.
Possible values:
image_cache_manager_interval
¶Type: | integer |
---|---|
Default: | 2400 |
Minimum Value: | -1 |
Number of seconds to wait between runs of the image cache manager.
Possible values:
0: run at the default rate.
-1: disable
Any other value
bandwidth_poll_interval
¶Type: | integer |
---|---|
Default: | 600 |
Interval to pull network bandwidth usage info.
Not supported on all hypervisors. If a hypervisor doesn't support bandwidth usage, it will not get the info in the usage events.
Possible values:
sync_power_state_interval
¶Type: | integer |
---|---|
Default: | 600 |
Interval to sync power states between the database and the hypervisor.
The interval that Nova checks the actual virtual machine power state and the power state that Nova has in its database. If a user powers down their VM, Nova updates the API to report the VM has been powered down. Should something turn on the VM unexpectedly, Nova will turn the VM back off to keep the system in the expected state.
Possible values:
Related options:
If handle_virt_lifecycle_events in workarounds_group is false and this option is negative, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.
heal_instance_info_cache_interval
¶Type: | integer |
---|---|
Default: | 60 |
Interval between instance network information cache updates.
Number of seconds after which each compute node runs the task of querying Neutron for all of its instances networking information, then updates the Nova db with that information. Nova will never update its cache if this option is set to 0. If we don't update the cache, the metadata service and nova-api endpoints will be proxying incorrect network data about the instance. So, it is not recommended to set this option to 0.
Possible values:
reclaim_instance_interval
¶Type: | integer |
---|---|
Default: | 0 |
Interval for reclaiming deleted instances.
A value greater than 0 will enable SOFT_DELETE of instances. This option decides whether the server to be deleted will be put into the SOFT_DELETED state. If this value is greater than 0, the deleted server will not be deleted immediately, instead it will be put into a queue until it's too old (deleted time greater than the value of reclaim_instance_interval). The server can be recovered from the delete queue by using the restore action. If the deleted server remains longer than the value of reclaim_instance_interval, it will be deleted by a periodic task in the compute service automatically.
Note that this option is read from both the API and compute nodes, and must be set globally otherwise servers could be put into a soft deleted state in the API and never actually reclaimed (deleted) on the compute node.
Possible values:
volume_usage_poll_interval
¶Type: | integer |
---|---|
Default: | 0 |
Interval for gathering volume usages.
This option updates the volume usage cache for every volume_usage_poll_interval number of seconds.
Possible values:
shelved_poll_interval
¶Type: | integer |
---|---|
Default: | 3600 |
Interval for polling shelved instances to offload.
The periodic task runs for every shelved_poll_interval number of seconds and checks if there are any shelved instances. If it finds a shelved instance, based on the 'shelved_offload_time' config value it offloads the shelved instances. Check 'shelved_offload_time' config option description for details.
Possible values:
Related options:
shelved_offload_time
shelved_offload_time
¶Type: | integer |
---|---|
Default: | 0 |
Time before a shelved instance is eligible for removal from a host.
By default this option is set to 0 and the shelved instance will be removed from the hypervisor immediately after the shelve operation. Otherwise, the instance will be kept for the value of shelved_offload_time (in seconds), so that during that time period the unshelve action will be faster; then the periodic task will remove the instance from the hypervisor after shelved_offload_time passes.
Possible values:
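For illustration, keeping shelved instances on the hypervisor for two hours before offloading them might be configured as (a sketch with assumed values):
shelved_poll_interval = 3600
shelved_offload_time = 7200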
instance_delete_interval
¶Type: | integer |
---|---|
Default: | 300 |
Interval for retrying failed instance file deletes.
This option depends on 'maximum_instance_delete_attempts'. This option specifies how often to retry deletes whereas 'maximum_instance_delete_attempts' specifies the maximum number of retry attempts that can be made.
Possible values:
Related options:
maximum_instance_delete_attempts from instance_cleaning_opts group.
block_device_allocate_retries_interval
¶Type: | integer |
---|---|
Default: | 3 |
Minimum Value: | 0 |
Interval (in seconds) between block device allocation retries on failures.
This option allows the user to specify the time interval between consecutive retries. 'block_device_allocate_retries' option specifies the maximum number of retries.
Possible values:
Related options:
block_device_allocate_retries in compute_manager_opts group.
scheduler_instance_sync_interval
¶Type: | integer |
---|---|
Default: | 120 |
Interval between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova.
If the CONF option 'scheduler_tracks_instance_changes' is False, the sync calls will not be made. So, changing this option will have no effect.
If the out of sync situations are not very common, this interval can be increased to lower the number of RPC messages being sent. Likewise, if sync issues turn out to be a problem, the interval can be lowered to check more frequently.
Possible values:
Related options:
This option has no effect if scheduler_tracks_instance_changes is set to False.
update_resources_interval
¶Type: | integer |
---|---|
Default: | 0 |
Interval for updating compute resources.
This option specifies how often the update_available_resources periodic task should run. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds.
Possible values:
reboot_timeout
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
Time interval after which an instance is hard rebooted automatically.
When doing a soft reboot, it is possible that a guest kernel is completely hung in a way that causes the soft reboot task to not ever finish. Setting this option to a time period in seconds will automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds.
Possible values:
instance_build_timeout
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
Maximum time in seconds that an instance can take to build.
If this timer expires, instance status will be changed to ERROR. Enabling this option will make sure an instance will not be stuck in BUILD state for a longer period.
Possible values:
rescue_timeout
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
Interval to wait before un-rescuing an instance stuck in RESCUE.
Possible values:
resize_confirm_window
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
Automatically confirm resizes after N seconds.
Resize functionality will save the existing server before resizing. After the resize completes, user is requested to confirm the resize. The user has the opportunity to either confirm or revert all changes. Confirm resize removes the original server and changes server status from resized to active. Setting this option to a time period (in seconds) will automatically confirm the resize if the server is in resized state longer than that time.
Possible values:
shutdown_timeout
¶Type: | integer |
---|---|
Default: | 60 |
Minimum Value: | 1 |
Total time to wait in seconds for an instance to perform a clean shutdown.
It determines the overall period (in seconds) a VM is allowed to perform a clean shutdown. While performing stop, rescue, shelve, and rebuild operations, configuring this option gives the VM a chance to perform a controlled shutdown before the instance is powered off. The default timeout is 60 seconds.
The timeout value can be overridden on a per image basis by means of os_shutdown_timeout that is an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly.
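For illustration, a deployment could raise the global timeout while a particular image overrides it through the os_shutdown_timeout property (a sketch; the image ID is a placeholder):
shutdown_timeout = 120
glance image-update --property os_shutdown_timeout=300 <image-id>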
Possible values:
running_deleted_instance_action
¶Type: | string |
---|---|
Default: | reap |
Valid Values: | noop, log, shutdown, reap |
The compute service periodically checks for instances that have been deleted in the database but remain running on the compute node. The above option enables action to be taken when such instances are identified.
Possible values:
Related options:
running_deleted_instance_poll_interval
¶Type: | integer |
---|---|
Default: | 1800 |
Time interval in seconds to wait between runs for the clean up action. If set to 0, the above check will be disabled. If "running_deleted_instance_action" is set to "log" or "reap", a value greater than 0 must be set.
Possible values:
Related options:
running_deleted_instance_timeout
¶Type: | integer |
---|---|
Default: | 0 |
Time interval in seconds to wait for the instances that have been marked as deleted in database to be eligible for cleanup.
Possible values:
Related options:
maximum_instance_delete_attempts
¶Type: | integer |
---|---|
Default: | 5 |
The number of times to attempt to reap an instance's files.
This option specifies the maximum number of retry attempts that can be made.
Possible values:
Use instance_delete_interval to disable the delete attempts.
Related options:
instance_delete_interval in interval_opts group can be used to disable this option.
osapi_compute_unique_server_name_scope
¶Type: | string |
---|---|
Default: | '' |
Valid Values: | '', project, global |
Sets the scope of the check for unique instance names.
The default doesn't check for unique names. If a scope for the name check is set, a launch of a new instance or an update of an existing instance with a duplicate name will result in an 'InstanceExists' error. The uniqueness is case-insensitive. Setting this option can increase the usability for end users as they don't have to distinguish among instances with the same name by their IDs.
Possible values:
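As a hedged illustration, enforcing per-project uniqueness of server names would look like:
osapi_compute_unique_server_name_scope = project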
enable_new_services
¶Type: | boolean |
---|---|
Default: | true |
Enable new nova-compute services on this host automatically.
When a new nova-compute service starts up, it gets registered in the database as an enabled service. Sometimes it can be useful to register new compute services in a disabled state and then enable them at a later point in time. This option only sets this behavior for nova-compute services; it does not auto-disable other services like nova-conductor, nova-scheduler, nova-consoleauth, or nova-osapi_compute.
Possible values:
True: Each new compute service is enabled as soon as it registers itself.
False: Compute services must be enabled via an os-services REST API call or with the CLI with nova service-enable <hostname> <binary>, otherwise they are not ready to use.
instance_name_template
¶Type: | string |
---|---|
Default: | instance-%08x |
Template string to be used to generate instance names.
This template controls the creation of the database name of an instance. This is not the display name you enter when creating an instance (via Horizon or CLI). For a new deployment it is advisable to change the default value (which uses the database autoincrement) to another value which makes use of the attributes of an instance, like instance-%(uuid)s. If you already have instances in your deployment when you change this, your deployment will break.
Possible values:
%(id)d or %(uuid)s or %(hostname)s.
Related options:
multi_instance_display_name_template
migrate_max_retries
¶Type: | integer |
---|---|
Default: | -1 |
Minimum Value: | -1 |
Number of times to retry live-migration before failing.
Possible values:
config_drive_format
¶Type: | string |
---|---|
Default: | iso9660 |
Valid Values: | iso9660, vfat |
Configuration drive format
Configuration drive format that will contain metadata attached to the instance when it boots.
Possible values:
Related options:
This option is meaningful when one of the following alternatives occur:
1. force_config_drive option set to 'true'
2. the REST API call to create the instance contains an enable flag for config drive option
A compute node running the Hyper-V hypervisor can be configured to attach the configuration drive as a CD drive. To attach the configuration drive as a CD drive, set the config_drive_cdrom option in the hyperv section to true.
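For illustration, a Hyper-V compute node attaching the configuration drive as a CD might use settings along these lines (a sketch of the sections involved):
[DEFAULT]
config_drive_format = iso9660
[hyperv]
config_drive_cdrom = true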
force_config_drive
¶Type: | boolean |
---|---|
Default: | false |
Force injection to take place on a config drive
When this option is set to true, configuration drive functionality will be force-enabled by default; otherwise the user can still enable configuration drives via the REST API or image metadata properties.
Possible values:
Related options:
mkisofs_cmd
¶Type: | string |
---|---|
Default: | genisoimage |
Name or path of the tool used for ISO image creation
Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is on the system path, you do not need to change the default value.
To use a configuration drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to a qemu-img command installation.
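For illustration, on a Hyper-V node this could look like the following (a sketch; the Windows paths are hypothetical):
[DEFAULT]
mkisofs_cmd = C:\Program Files (x86)\cdrtools\mkisofs.exe
[hyperv]
qemu_img_cmd = C:\Program Files\qemu\qemu-img.exe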
Possible values:
Related options:
db_driver
¶Type: | string |
---|---|
Default: | nova.db |
The driver to use for database access
Warning
This option is deprecated for removal since 13.0.0. Its value may be silently ignored in the future.
default_flavor
¶Type: | string |
---|---|
Default: | m1.small |
Default flavor to use for the EC2 API only. The Nova API does not support a default flavor.
Warning
This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.
Reason: | The EC2 API is deprecated. |
---|
my_ip
¶Type: | string |
---|---|
Default: | <host_ipv4> |
The IP address which the host is using to connect to the management network.
Possible values:
Related options:
my_block_storage_ip
¶Type: | string |
---|---|
Default: | $my_ip |
The IP address which is used to connect to the block storage network.
Possible values:
Related options:
host
¶Type: | string |
---|---|
Default: | <current_hostname> |
Hostname, FQDN or IP address of this host.
Used as:
Must be valid within AMQP key.
Possible values:
dhcpbridge_flagfile
¶Type: | multi-valued |
---|---|
Default: | /etc/nova/nova-dhcpbridge.conf |
This option is a list of full paths to one or more configuration files for dhcpbridge. In most cases the default path of '/etc/nova/nova-dhcpbridge.conf' should be sufficient, but if you have special needs for configuring dhcpbridge, you can change or add to this list.
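Because this is a multi-valued option, it can be specified more than once; for illustration (the second path is hypothetical):
dhcpbridge_flagfile = /etc/nova/nova-dhcpbridge.conf
dhcpbridge_flagfile = /etc/nova/nova-dhcpbridge-custom.conf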
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
networks_path
¶Type: | string |
---|---|
Default: | $state_path/networks |
The location where the network configuration files will be kept. The default is the 'networks' directory off of the location where nova's Python module is installed.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
public_interface
¶Type: | string |
---|---|
Default: | eth0 |
This is the name of the network interface for public IP addresses. The default is 'eth0'.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
dhcpbridge
¶Type: | string |
---|---|
Default: | $bindir/nova-dhcpbridge |
The location of the binary nova-dhcpbridge. By default it is the binary named 'nova-dhcpbridge' that is installed with all the other nova binaries.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
routing_source_ip
¶Type: | string |
---|---|
Default: | $my_ip |
The public IP address of the network host.
This is used when creating an SNAT rule.
Possible values:
Related options:
force_snat_range
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
dhcp_lease_time
¶Type: | integer |
---|---|
Default: | 86400 |
Minimum Value: | 1 |
The lifetime of a DHCP lease, in seconds. The default is 86400 (one day).
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
dns_server
¶Type: | multi-valued |
---|---|
Default: | '' |
Despite the singular form of the name of this option, it is actually a list of zero or more server addresses that dnsmasq will use for DNS nameservers. If this is not empty, dnsmasq will not read /etc/resolv.conf, but will only use the servers specified in this option. If the option use_network_dns_servers is True, the dns1 and dns2 servers from the network will be appended to this list, and will be used as DNS servers, too.
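As a multi-valued option it can be listed repeatedly; for illustration, pointing dnsmasq at two public resolvers:
dns_server = 8.8.8.8
dns_server = 8.8.4.4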
Possible values:
Related options:
use_network_dns_servers
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
use_network_dns_servers
¶Type: | boolean |
---|---|
Default: | false |
When this option is set to True, the dns1 and dns2 servers for the network specified by the user on boot will be used for DNS, as well as any specified in the dns_server option.
Related options:
dns_server
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
dmz_cidr
¶Type: | list |
---|---|
Default: | '' |
This option is a list of zero or more IP address ranges in your network's DMZ that should be accepted.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
force_snat_range
¶Type: | multi-valued |
---|---|
Default: | '' |
This is a list of zero or more IP ranges that traffic from the routing_source_ip will be SNATted to. If the list is empty, then no SNAT rules are created.
Possible values:
Related options:
routing_source_ip
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
dnsmasq_config_file
¶Type: | string |
---|---|
Default: | '' |
The path to the custom dnsmasq configuration file, if any.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
linuxnet_interface_driver
¶Type: | string |
---|---|
Default: | nova.network.linux_net.LinuxBridgeInterfaceDriver |
This is the class used as the ethernet device driver for linuxnet bridge operations. The default value should be all you need for most cases, but if you wish to use a customized class, set this option to the full dot-separated import path for that class.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
linuxnet_ovs_integration_bridge
¶Type: | string |
---|---|
Default: | br-int |
The name of the Open vSwitch bridge that is used with linuxnet when connecting with Open vSwitch.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
send_arp_for_ha
¶Type: | boolean |
---|---|
Default: | false |
When True, when a device starts up, and upon binding floating IP addresses, arp messages will be sent to ensure that the arp caches on the compute hosts are up-to-date.
Related options:
send_arp_for_ha_count
send_arp_for_ha_count
¶Type: | integer |
---|---|
Default: | 3 |
When arp messages are configured to be sent, they will be sent with the count set to the value of this option. Of course, if this is set to zero, no arp messages will be sent.
Possible values:
Related options:
send_arp_for_ha
use_single_default_gateway
¶Type: | boolean |
---|---|
Default: | false |
When set to True, only the first NIC of a VM will get its default gateway from the DHCP server.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
forward_bridge_interface
¶Type: | multi-valued |
---|---|
Default: | all |
One or more interfaces that bridges can forward traffic to. If any of the items in this list is the special keyword 'all', then all traffic will be forwarded.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
metadata_host
¶Type: | string |
---|---|
Default: | $my_ip |
This option determines the IP address for the network metadata API server.
This is really the client side of the metadata host equation that allows nova-network to find the metadata server when using default multi-host networking.
Possible values:
Related options:
metadata_port
metadata_port
¶Type: | port number |
---|---|
Default: | 8775 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
This option determines the port used for the metadata API server.
Related options:
metadata_host
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
iptables_top_regex
¶Type: | string |
---|---|
Default: | '' |
This expression, if defined, will select any matching iptables rules and place them at the top when applying metadata changes to the rules.
Possible values:
Related options:
iptables_bottom_regex
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
iptables_bottom_regex
¶Type: | string |
---|---|
Default: | '' |
This expression, if defined, will select any matching iptables rules and place them at the bottom when applying metadata changes to the rules.
Possible values:
Related options:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
iptables_drop_action
¶Type: | string |
---|---|
Default: | DROP |
By default, packets that do not pass the firewall are DROPped. In many cases, though, an operator may find it more useful to change this from DROP to REJECT, so that the user issuing those packets may have a better idea as to what's going on, or LOGDROP in order to record the blocked traffic before DROPping.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ovs_vsctl_timeout
¶Type: | integer |
---|---|
Default: | 120 |
Minimum Value: | 0 |
This option represents the period of time, in seconds, that the ovs_vsctl calls will wait for a response from the database before timing out. A setting of 0 means that the utility should wait forever for a response.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
fake_network
¶Type: | boolean |
---|---|
Default: | false |
This option is used mainly in testing to avoid calls to the underlying network utilities.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ebtables_exec_attempts
¶Type: | integer |
---|---|
Default: | 3 |
Minimum Value: | 1 |
This option determines the number of times to retry ebtables commands before giving up. The minimum number of retries is 1.
Possible values:
Related options:
ebtables_retry_interval
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ebtables_retry_interval
¶Type: | floating point |
---|---|
Default: | 1.0 |
This option determines the time, in seconds, that the system will sleep in between ebtables retries. Note that each successive retry waits a multiple of this value, so for example, if this is set to the default of 1.0 seconds, and ebtables_exec_attempts is 4, after the first failure, the system will sleep for 1 * 1.0 seconds, after the second failure it will sleep 2 * 1.0 seconds, and after the third failure it will sleep 3 * 1.0 seconds.
Possible values:
Related options:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
use_neutron
¶Type: | boolean |
---|---|
Default: | true |
Enable neutron as the backend for networking.
Determine whether to use Neutron or Nova Network as the back end. Set to true to use neutron.
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
flat_injected
¶Type: | boolean |
---|---|
Default: | false |
This option determines whether the network setup information is injected into the VM before it is booted. While it was originally designed to be used only by nova-network, it is also used by the vmware and xenapi virt drivers to control whether network information is injected into a VM. The libvirt virt driver also uses it when config_drive is used to configure the network, to control whether network information is injected into a VM.
flat_network_bridge
¶Type: | string |
---|---|
Default: | <None> |
This option determines the bridge used for simple network interfaces when no bridge is specified in the VM creation request.
Please note that this option is only used when using nova-network instead of Neutron in your deployment.
Possible values:
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
flat_network_dns
¶Type: | string |
---|---|
Default: | 8.8.4.4 |
This is the address of the DNS server for a simple network. If this option is not specified, the default of '8.8.4.4' is used.
Please note that this option is only used when using nova-network instead of Neutron in your deployment.
Possible values:
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
flat_interface
¶Type: | string |
---|---|
Default: | <None> |
This option is the name of the virtual interface of the VM on which the bridge will be built. While it was originally designed to be used only by nova-network, it is also used by libvirt for the bridge interface name.
Possible values:
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
vlan_start
¶Type: | integer |
---|---|
Default: | 100 |
Minimum Value: | 1 |
Maximum Value: | 4094 |
This is the VLAN number used for private networks. Note that when creating the networks, if the specified number has already been assigned, nova-network will increment this number until it finds an available VLAN.
Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of 'nova.network.manager.VlanManager'.
Possible values:
Related options:
network_manager
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
vlan_interface
¶Type: | string |
---|---|
Default: | <None> |
This option is the name of the virtual interface of the VM on which the VLAN bridge will be built. While it was originally designed to be used only by nova-network, it is also used by libvirt and xenapi for the bridge interface name.
Please note that this setting will be ignored in nova-network if the configuration option for network_manager is not set to the default of 'nova.network.manager.VlanManager'.
Possible values:
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. While this option has an effect when using neutron, it incorrectly overrides the value provided by neutron and should therefore not be used. |
---|
num_networks
¶Type: | integer |
---|---|
Default: | 1 |
Minimum Value: | 1 |
This option represents the number of networks to create if not explicitly specified when the network is created. The only time this is used is if a CIDR is specified, but an explicit network_size is not. In that case, the subnets are created by dividing the IP address space of the CIDR by num_networks. The resulting subnet sizes cannot be larger than the configuration option network_size; in that event, they are reduced to network_size, and a warning is logged.
Please note that this option is only used when using nova-network instead of Neutron in your deployment.
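As a worked illustration with assumed values: creating networks from a 10.0.0.0/16 CIDR (65536 addresses) with num_networks = 4 would nominally give each subnet 16384 addresses, but because that exceeds the default network_size of 256, each subnet is reduced to 256 addresses and a warning is logged:
num_networks = 4
network_size = 256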
Possible values:
Related options:
use_neutron
network_size
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
vpn_ip
¶Type: | string |
---|---|
Default: | $my_ip |
This option is no longer used since the /os-cloudpipe API was removed in the 16.0.0 Pike release. This is the public IP address for the cloudpipe VPN servers. It defaults to the IP address of the host.
Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of 'nova.network.manager.VlanManager'.
Possible values:
$my_ip, the IP address of the VM.
Related options:
network_manager
use_neutron
vpn_start
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
vpn_start
¶Type: | port number |
---|---|
Default: | 1000 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
This is the port number to use as the first VPN port for private networks.
Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of 'nova.network.manager.VlanManager', or if you specify a value for the 'vpn_start' parameter when creating a network.
Possible values:
Related options:
use_neutron
vpn_ip
network_manager
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
network_size
¶Type: | integer |
---|---|
Default: | 256 |
Minimum Value: | 1 |
This option determines the number of addresses in each private subnet.
Please note that this option is only used when using nova-network instead of Neutron in your deployment.
Possible values:
Related options:
use_neutron
num_networks
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
fixed_range_v6
¶Type: | string |
---|---|
Default: | fd00::/48 |
This option determines the fixed IPv6 address block when creating a network.
Please note that this option is only used when using nova-network instead of Neutron in your deployment.
Possible values:
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
gateway
¶Type: | string |
---|---|
Default: | <None> |
This is the default IPv4 gateway. It is used only in the testing suite.
Please note that this option is only used when using nova-network instead of Neutron in your deployment.
Possible values:
Related options:
use_neutron
gateway_v6
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
gateway_v6
¶Type: | string |
---|---|
Default: | <None> |
This is the default IPv6 gateway. It is used only in the testing suite.
Please note that this option is only used when using nova-network instead of Neutron in your deployment.
Possible values:
Related options:
use_neutron
gateway
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
cnt_vpn_clients
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
This option represents the number of IP addresses to reserve at the top of the address range for VPN clients. It also will be ignored if the configuration option for network_manager is not set to the default of 'nova.network.manager.VlanManager'.
Possible values:
Related options:
use_neutron
network_manager
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
fixed_ip_disassociate_timeout
¶Type: | integer |
---|---|
Default: | 600 |
Minimum Value: | 0 |
This is the number of seconds to wait before disassociating a deallocated fixed IP address. This is only used with the nova-network service, and has no effect when using neutron for networking.
Possible values:
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
create_unique_mac_address_attempts
¶Type: | integer |
---|---|
Default: | 5 |
Minimum Value: | 1 |
This option determines how many times nova-network will attempt to create a unique MAC address before giving up and raising a VirtualInterfaceMacAddressException error.
Possible values:
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
teardown_unused_network_gateway
¶Type: | boolean |
---|---|
Default: | false |
Determines whether unused gateway devices, both VLAN and bridge, are deleted if the network is in nova-network VLAN mode and is multi-hosted.
Related options:
use_neutron
vpn_ip
fake_network
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
force_dhcp_release
¶Type: | boolean |
---|---|
Default: | true |
When this option is True, a call is made to release the DHCP for the instance when that instance is terminated.
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
update_dns_entries
¶Type: | boolean |
---|---|
Default: | false |
When this option is True, whenever a DNS entry must be updated, a fanout cast message is sent to all network hosts to update their DNS entries in multi-host mode.
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
dns_update_periodic_interval
¶Type: | integer |
---|---|
Default: | -1 |
Minimum Value: | -1 |
This option determines the time, in seconds, to wait between refreshing DNS entries for the network.
Possible values:
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
dhcp_domain
¶Type: | string |
---|---|
Default: | novalocal |
This option allows you to specify the domain for the DHCP server.
Possible values:
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
l3_lib
¶Type: | string |
---|---|
Default: | nova.network.l3.LinuxNetL3 |
This option allows you to specify the L3 management library to be used.
Possible values:
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
share_dhcp_address
¶Type: | boolean |
---|---|
Default: | false |
THIS VALUE SHOULD BE SET WHEN CREATING THE NETWORK.
If True in multi_host mode, all compute hosts share the same dhcp address. The same IP address used for DHCP will be added on each nova-network node which is only visible to the VMs on the same host.
The use of this configuration has been deprecated and may be removed in any release after Mitaka. It is recommended that instead of relying on this option, an explicit value should be passed to 'create_networks()' as a keyword argument with the name 'share_address'.
Warning
This option is deprecated for removal since 2014.2. Its value may be silently ignored in the future.
ldap_dns_url
¶Type: | URI |
---|---|
Default: | ldap://ldap.example.com:389 |
URL for LDAP server which will store DNS entries
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ldap_dns_user
¶Type: | string |
---|---|
Default: | uid=admin,ou=people,dc=example,dc=org |
Bind user for LDAP server
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ldap_dns_password
¶Type: | string |
---|---|
Default: | password |
Bind user's password for LDAP server
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ldap_dns_soa_hostmaster
¶Type: | string |
---|---|
Default: | hostmaster@example.org |
Hostmaster for LDAP DNS driver Statement of Authority
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ldap_dns_servers
¶Type: | multi-valued |
---|---|
Default: | dns.example.org |
DNS Servers for LDAP DNS driver
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ldap_dns_base_dn
¶Type: | string |
---|---|
Default: | ou=hosts,dc=example,dc=org |
Base distinguished name for the LDAP search query
This option helps to decide where to look up the host in LDAP.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
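Taken together, a minimal nova.conf sketch for the (deprecated) LDAP DNS driver might look like the following; every value below simply repeats the documented defaults and is illustrative only:

```ini
[DEFAULT]
# Sketch only: the LDAP DNS options are deprecated along with nova-network.
# All values below are the documented defaults / placeholders.
ldap_dns_url = ldap://ldap.example.com:389
ldap_dns_user = uid=admin,ou=people,dc=example,dc=org
ldap_dns_password = password
ldap_dns_base_dn = ou=hosts,dc=example,dc=org
ldap_dns_servers = dns.example.org
```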
ldap_dns_soa_refresh
¶Type: | integer |
---|---|
Default: | 1800 |
Refresh interval (in seconds) for LDAP DNS driver Start of Authority
Time interval that a secondary/slave DNS server waits before requesting the primary DNS server's current SOA record. If the records differ, the secondary DNS server will request a zone transfer from the primary.
NOTE: Lower values would cause more traffic.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ldap_dns_soa_retry
¶Type: | integer |
---|---|
Default: | 3600 |
Retry interval (in seconds) for LDAP DNS driver Start of Authority
Time interval that a secondary/slave DNS server should wait if an attempt to transfer the zone failed during the previous refresh interval.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ldap_dns_soa_expiry
¶Type: | integer |
---|---|
Default: | 86400 |
Expiry interval (in seconds) for LDAP DNS driver Start of Authority
Time interval for which a secondary/slave DNS server holds the information before it is no longer considered authoritative.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ldap_dns_soa_minimum
¶Type: | integer |
---|---|
Default: | 7200 |
Minimum interval (in seconds) for LDAP DNS driver Start of Authority
This is the minimum time-to-live that applies to all resource records in the zone file. This value tells other servers how long they should keep the data in their cache.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
multi_host
¶Type: | boolean |
---|---|
Default: | false |
Default value for multi_host in networks.
nova-network service can operate in a multi-host or single-host mode. In multi-host mode each compute node runs a copy of nova-network and the instances on that compute node use the compute node as a gateway to the Internet. Whereas in single-host mode, a central server runs the nova-network service. All compute nodes forward traffic from the instances to the cloud controller which then forwards traffic to the Internet.
If this option is set to true, some RPC network calls will be sent directly to the host.
Note that this option is only used when using nova-network instead of Neutron in your deployment.
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
network_driver
¶Type: | string |
---|---|
Default: | nova.network.linux_net |
Driver to use for network creation.
Network driver initializes (creates bridges and so on) only when the first VM lands on a host node. All network managers configure the network using network drivers. The driver is not tied to any particular network manager.
The default Linux driver implements vlans, bridges, and iptables rules using linux utilities.
Note that this option is only used when using nova-network instead of Neutron in your deployment.
Related options:
use_neutron
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
firewall_driver
¶Type: | string |
---|---|
Default: | nova.virt.firewall.NoopFirewallDriver |
Firewall driver to use with the nova-network service.
This option only applies when using the nova-network service. When using another networking service, such as Neutron, this should be set to the nova.virt.firewall.NoopFirewallDriver.
Possible values:
nova.virt.firewall.IptablesFirewallDriver
nova.virt.firewall.NoopFirewallDriver
nova.virt.libvirt.firewall.IptablesFirewallDriver
Related options:
use_neutron: This must be set to False to enable nova-network networking.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
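For example, a deployment that still uses nova-network and wants iptables-based filtering might combine these options as follows; this is a sketch only, using the possible and related values listed above:

```ini
[DEFAULT]
# Sketch only: nova-network requires use_neutron = False; the driver is
# one of the documented possible values for firewall_driver.
use_neutron = False
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
```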
allow_same_net_traffic
¶Type: | boolean |
---|---|
Default: | true |
Determine whether to allow network traffic from same network.
When set to true, hosts on the same subnet are not filtered and are allowed to pass all types of traffic between them. On a flat network, this allows unfiltered communication between all instances from all projects. With VLAN networking, this allows access between instances within the same project.
This option only applies when using the nova-network service. When using another networking service, such as Neutron, security groups or other approaches should be used.
Possible values:
Related options:
use_neutron: This must be set to False to enable nova-network networking.
firewall_driver: This must be set to nova.virt.libvirt.firewall.IptablesFirewallDriver to ensure the libvirt firewall driver is enabled.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
default_floating_pool
¶Type: | string |
---|---|
Default: | nova |
Default pool for floating IPs.
This option specifies the default floating IP pool for allocating floating IPs.
While allocating a floating IP, users can optionally pass in the name of the pool they want to allocate from; otherwise it will be pulled from the default pool.
If this option is not set, then 'nova' is used as the default floating pool.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | This option was used for two purposes: to set the floating IP pool name for nova-network and to do the same for neutron. nova-network is deprecated, as are any related configuration options. Users of neutron, meanwhile, should use the 'default_floating_pool' option in the '[neutron]' group. |
---|
auto_assign_floating_ip
¶Type: | boolean |
---|---|
Default: | false |
Auto-assign a floating IP to the VM.
When set to True, a floating IP is automatically allocated and associated with the VM upon creation.
Related options:
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
floating_ip_dns_manager
¶Type: | string |
---|---|
Default: | nova.network.noop_dns_driver.NoopDNSDriver |
Full class name for the DNS Manager for floating IPs.
This option specifies the class of the driver that provides functionality to manage DNS entries associated with floating IPs.
When a user adds a DNS entry for a specified domain to a floating IP, nova will add a DNS entry using the specified floating DNS driver. When a floating IP is deallocated, its DNS entry will automatically be deleted.
Possible values:
Related options:
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
instance_dns_manager
¶Type: | string |
---|---|
Default: | nova.network.noop_dns_driver.NoopDNSDriver |
Full class name for the DNS Manager for instance IPs.
This option specifies the class of the driver that provides functionality to manage DNS entries for instances.
On instance creation, nova will add DNS entries for the instance name and id, using the specified instance DNS driver and domain. On instance deletion, nova will remove the DNS entries.
Possible values:
Related options:
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
instance_dns_domain
¶Type: | string |
---|---|
Default: | '' |
If specified, Nova checks if the availability_zone of every instance matches what the database says the availability_zone should be for the specified dns_domain.
Related options:
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
use_ipv6
¶Type: | boolean |
---|---|
Default: | false |
Assign IPv6 and IPv4 addresses when creating instances.
Related options:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
ipv6_backend
¶Type: | string |
---|---|
Default: | rfc2462 |
Valid Values: | rfc2462, account_identifier |
Abstracts out IPv6 address generation to pluggable backends.
nova-network can be put into dual-stack mode, so that it uses both IPv4 and IPv6 addresses. In dual-stack mode, by default, instances acquire IPv6 global unicast addresses with the help of stateless address auto-configuration mechanism.
Related options:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
enable_network_quota
¶Type: | boolean |
---|---|
Default: | false |
This option is used to enable or disable quota checking for tenant networks.
Related options:
Warning
This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.
Reason: | CRUD operations on tenant networks are only available when using nova-network and nova-network is itself deprecated. |
---|
quota_networks
¶Type: | integer |
---|---|
Default: | 3 |
Minimum Value: | 0 |
This option controls the number of private networks that can be created per project (or per tenant).
Related options:
Warning
This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.
Reason: | CRUD operations on tenant networks are only available when using nova-network and nova-network is itself deprecated. |
---|
record
¶Type: | string |
---|---|
Default: | <None> |
Filename that will be used for storing websocket frames received and sent by a proxy service (like VNC, spice, serial) running on this host. If this is not set, no recording will be done.
daemon
¶Type: | boolean |
---|---|
Default: | false |
Run as a background process.
ssl_only
¶Type: | boolean |
---|---|
Default: | false |
Disallow non-encrypted connections.
source_is_ipv6
¶Type: | boolean |
---|---|
Default: | false |
Set to True if source host is addressed with IPv6.
cert
¶Type: | string |
---|---|
Default: | self.pem |
Path to SSL certificate file.
key
¶Type: | string |
---|---|
Default: | <None> |
SSL key file (if separate from cert).
web
¶Type: | string |
---|---|
Default: | /usr/share/spice-html5 |
Path to directory with content which will be served by a web server.
pybasedir
¶Type: | string |
---|---|
Default: | /home/zuul/.venv/lib/python3.5/site-packages |
The directory where the Nova python modules are installed.
This directory is used to store template files for networking and remote console access. It is also the default path for other config options which need to persist Nova internal data. It is very unlikely that you need to change this option from its default value.
Possible values:
Related options:
state_path
bindir
¶Type: | string |
---|---|
Default: | /home/zuul/.venv/local/bin |
The directory where the Nova binaries are installed.
This option is only relevant if the networking capabilities from Nova are used (see services below). Nova's networking capabilities are targeted to be fully replaced by Neutron in the future. It is very unlikely that you need to change this option from its default value.
Possible values:
state_path
¶Type: | string |
---|---|
Default: | $pybasedir |
The top-level directory for maintaining Nova's state.
This directory is used to store Nova's internal state. It is used by a variety of other config options which derive from this. In some scenarios (for example migrations) it makes sense to use a storage location which is shared between multiple compute hosts (for example via NFS). Unless the option instances_path gets overwritten, this directory can grow very large.
Possible values:
pybasedir
report_interval
¶Type: | integer |
---|---|
Default: | 10 |
Number of seconds indicating how frequently the state of services on a given hypervisor is reported. Nova needs to know this to determine the overall health of the deployment.
Related Options:
service_down_time
¶Type: | integer |
---|---|
Default: | 60 |
Maximum time in seconds since last check-in for up service
Each compute node periodically updates its database status based on the specified report interval. If a compute node hasn't updated its status for more than service_down_time, then the compute node is considered down.
Related Options:
periodic_enable
¶Type: | boolean |
---|---|
Default: | true |
Enable periodic tasks.
If set to true, this option allows services to periodically run tasks on the manager.
In case of running multiple schedulers or conductors you may want to run periodic tasks on only one host - in this case disable this option for all hosts but one.
periodic_fuzzy_delay
¶Type: | integer |
---|---|
Default: | 60 |
Minimum Value: | 0 |
Number of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding.
When compute workers are restarted in unison across a cluster, they all end up running the periodic tasks at the same time causing problems for the external services. To mitigate this behavior, periodic_fuzzy_delay option allows you to introduce a random initial delay when starting the periodic task scheduler.
Possible Values:
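As a sketch, the two periodic-task options above could be set together like this; the values shown are simply the documented defaults:

```ini
[DEFAULT]
# Sketch only: keep periodic tasks enabled and stagger their start by up
# to 60 seconds to avoid a thundering herd after a mass restart.
periodic_enable = true
periodic_fuzzy_delay = 60
```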
enabled_apis
¶Type: | list |
---|---|
Default: | osapi_compute,metadata |
List of APIs to be enabled by default.
enabled_ssl_apis
¶Type: | list |
---|---|
Default: | '' |
List of APIs with enabled SSL.
Nova provides SSL support for the API servers. The enabled_ssl_apis option allows configuring SSL support.
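A minimal sketch enabling SSL for the compute API only might look like the following; note that the SSL certificate and key must be configured separately and are not shown here:

```ini
[DEFAULT]
# Sketch only: serve the compute API over SSL while the metadata API
# stays on plain HTTP. Certificate/key configuration is not shown.
enabled_apis = osapi_compute,metadata
enabled_ssl_apis = osapi_compute
```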
osapi_compute_listen
¶Type: | string |
---|---|
Default: | 0.0.0.0 |
IP address on which the OpenStack API will listen.
The OpenStack API service listens on this IP address for incoming requests.
osapi_compute_listen_port
¶Type: | port number |
---|---|
Default: | 8774 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Port on which the OpenStack API will listen.
The OpenStack API service listens on this port number for incoming requests.
osapi_compute_workers
¶Type: | integer |
---|---|
Default: | <None> |
Minimum Value: | 1 |
Number of workers for OpenStack API service. The default will be the number of CPUs available.
OpenStack API services can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. OpenStack API service will run in the specified number of processes.
Possible Values:
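Putting the three options above together, a sketch that binds the compute API to a specific address and runs a fixed number of workers might be the following; the address and worker count are placeholders:

```ini
[DEFAULT]
# Sketch only: 192.0.2.20 and the worker count are placeholder values;
# the port shown is the documented default.
osapi_compute_listen = 192.0.2.20
osapi_compute_listen_port = 8774
osapi_compute_workers = 4
```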
metadata_listen
¶Type: | string |
---|---|
Default: | 0.0.0.0 |
IP address on which the metadata API will listen.
The metadata API service listens on this IP address for incoming requests.
metadata_listen_port
¶Type: | port number |
---|---|
Default: | 8775 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Port on which the metadata API will listen.
The metadata API service listens on this port number for incoming requests.
metadata_workers
¶Type: | integer |
---|---|
Default: | <None> |
Minimum Value: | 1 |
Number of workers for metadata service. If not specified the number of available CPUs will be used.
The metadata service can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. The metadata service will run in the specified number of processes.
Possible Values:
network_manager
¶Type: | string |
---|---|
Default: | nova.network.manager.VlanManager |
Valid Values: | nova.network.manager.FlatManager, nova.network.manager.FlatDHCPManager, nova.network.manager.VlanManager |
Full class name for the Manager for network
servicegroup_driver
¶Type: | string |
---|---|
Default: | db |
Valid Values: | db, mc |
This option specifies the driver to be used for the servicegroup service.
ServiceGroup API in nova enables checking status of a compute node. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Services like nova scheduler can query the ServiceGroup API to check if a node is alive. Internally, the ServiceGroup client driver automatically updates the compute worker status. There are multiple backend implementations for this service: Database ServiceGroup driver and Memcache ServiceGroup driver.
Possible Values:
- db : Database ServiceGroup driver
- mc : Memcache ServiceGroup driver
Related Options:
- service_down_time (maximum time since last check-in for up service)
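For example, switching from the default database driver to the Memcache driver is a one-line change; this sketch assumes a reachable memcached deployment:

```ini
[DEFAULT]
# Sketch only: use the Memcache ServiceGroup driver instead of the
# database driver. Requires memcached to be available to all services.
servicegroup_driver = mc
```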
backdoor_port
¶Type: | string |
---|---|
Default: | <None> |
Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file.
backdoor_socket
¶Type: | string |
---|---|
Default: | <None> |
Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with 'backdoor_port' in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option.
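The two backdoor options are mutually exclusive; a sketch of each form (pick one) could look like this, with the port range and socket path chosen arbitrarily:

```ini
[DEFAULT]
# Sketch only: listen on the first free port between 6000 and 6100; the
# chosen port is written to the service's log file.
backdoor_port = 6000:6100
# Alternatively, use a unix socket instead of a TCP port (do not set both):
# backdoor_socket = /var/run/nova/backdoor.sock
```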
log_options
¶Type: | boolean |
---|---|
Default: | true |
Enables or disables logging values of all registered options when starting a service (at DEBUG level).
graceful_shutdown_timeout
¶Type: | integer |
---|---|
Default: | 60 |
Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait.
run_external_periodic_tasks
¶Type: | boolean |
---|---|
Default: | true |
Some periodic tasks can be run in a separate process. Should we run them here?
debug
¶Type: | boolean |
---|---|
Default: | false |
Mutable: | This option can be changed without restarting. |
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
log_config_append
¶Type: | string |
---|---|
Default: | <None> |
Mutable: | This option can be changed without restarting. |
The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, logging_context_format_string).
Group | Name |
---|---|
DEFAULT | log-config |
DEFAULT | log_config |
log_date_format
¶Type: | string |
---|---|
Default: | %Y-%m-%d %H:%M:%S |
Defines the format string for %(asctime)s in log records. Default: the value above. This option is ignored if log_config_append is set.
log_file
¶Type: | string |
---|---|
Default: | <None> |
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set.
Group | Name |
---|---|
DEFAULT | logfile |
log_dir
¶Type: | string |
---|---|
Default: | <None> |
(Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set.
Group | Name |
---|---|
DEFAULT | logdir |
watch_log_file
¶Type: | boolean |
---|---|
Default: | false |
Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
¶Type: | boolean |
---|---|
Default: | false |
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.
use_journal
¶Type: | boolean |
---|---|
Default: | false |
Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set.
syslog_log_facility
¶Type: | string |
---|---|
Default: | LOG_USER |
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
¶Type: | boolean |
---|---|
Default: | false |
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
¶Type: | boolean |
---|---|
Default: | false |
Log output to standard error. This option is ignored if log_config_append is set.
logging_context_format_string
¶Type: | string |
---|---|
Default: | %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s |
Format string to use for log messages with context.
logging_default_format_string
¶Type: | string |
---|---|
Default: | %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s |
Format string to use for log messages when context is undefined.
logging_debug_format_suffix
¶Type: | string |
---|---|
Default: | %(funcName)s %(pathname)s:%(lineno)d |
Additional data to append to log message when logging level for the message is DEBUG.
logging_exception_prefix
¶Type: | string |
---|---|
Default: | %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s |
Prefix each line of exception output with this format.
logging_user_identity_format
¶Type: | string |
---|---|
Default: | %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s |
Defines the format string for %(user_identity)s that is used in logging_context_format_string.
default_log_levels
¶Type: | list |
---|---|
Default: | amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO |
List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set.
publish_errors
¶Type: | boolean |
---|---|
Default: | false |
Enables or disables publication of error events.
instance_format
¶Type: | string |
---|---|
Default: | "[instance: %(uuid)s] " |
The format for an instance that is passed with the log message.
instance_uuid_format
¶Type: | string |
---|---|
Default: | "[instance: %(uuid)s] " |
The format for an instance UUID that is passed with the log message.
rate_limit_interval
¶Type: | integer |
---|---|
Default: | 0 |
Interval, number of seconds, of log rate limiting.
rate_limit_burst
¶Type: | integer |
---|---|
Default: | 0 |
Maximum number of logged messages per rate_limit_interval.
rate_limit_except_level
¶Type: | string |
---|---|
Default: | CRITICAL |
Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered.
fatal_deprecations
¶Type: | boolean |
---|---|
Default: | false |
Enables or disables fatal status of deprecations.
rpc_conn_pool_size
¶Type: | integer |
---|---|
Default: | 30 |
Size of RPC connection pool.
Group | Name |
---|---|
DEFAULT | rpc_conn_pool_size |
conn_pool_min_size
¶Type: | integer |
---|---|
Default: | 2 |
The pool size limit for connections expiration policy
conn_pool_ttl
¶Type: | integer |
---|---|
Default: | 1200 |
The time-to-live in sec of idle connections in the pool
rpc_zmq_bind_address
¶Type: | string |
---|---|
Default: | * |
ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. The "host" option should point or resolve to this address.
Group | Name |
---|---|
DEFAULT | rpc_zmq_bind_address |
rpc_zmq_matchmaker
¶Type: | string |
---|---|
Default: | redis |
Valid Values: | redis, sentinel, dummy |
MatchMaker driver.
Group | Name |
---|---|
DEFAULT | rpc_zmq_matchmaker |
rpc_zmq_contexts
¶Type: | integer |
---|---|
Default: | 1 |
Number of ZeroMQ contexts, defaults to 1.
Group | Name |
---|---|
DEFAULT | rpc_zmq_contexts |
rpc_zmq_topic_backlog
¶Type: | integer |
---|---|
Default: | <None> |
Maximum number of ingress messages to locally buffer per topic. Default is unlimited.
Group | Name |
---|---|
DEFAULT | rpc_zmq_topic_backlog |
rpc_zmq_ipc_dir
¶Type: | string |
---|---|
Default: | /var/run/openstack |
Directory for holding IPC sockets.
Group | Name |
---|---|
DEFAULT | rpc_zmq_ipc_dir |
rpc_zmq_host
¶Type: | string |
---|---|
Default: | localhost |
Name of this node. Must be a valid hostname, FQDN, or IP address. Must match "host" option, if running Nova.
Group | Name |
---|---|
DEFAULT | rpc_zmq_host |
zmq_linger
¶Type: | integer |
---|---|
Default: | -1 |
Number of seconds to wait before all pending messages will be sent after closing a socket. The default value of -1 specifies an infinite linger period. The value of 0 specifies no linger period. Pending messages shall be discarded immediately when the socket is closed. Positive values specify an upper bound for the linger period.
Group | Name |
---|---|
DEFAULT | rpc_cast_timeout |
rpc_poll_timeout
¶Type: | integer |
---|---|
Default: | 1 |
The default number of seconds that poll should wait. Poll raises a timeout exception when the timeout expires.
Group | Name |
---|---|
DEFAULT | rpc_poll_timeout |
zmq_target_expire
¶Type: | integer |
---|---|
Default: | 300 |
Expiration timeout in seconds of a name service record about an existing target (< 0 means no timeout).
Group | Name |
---|---|
DEFAULT | zmq_target_expire |
zmq_target_update
¶Type: | integer |
---|---|
Default: | 180 |
Update period in seconds of a name service record about existing target.
Group | Name |
---|---|
DEFAULT | zmq_target_update |
use_pub_sub
¶Type: | boolean |
---|---|
Default: | false |
Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
Group | Name |
---|---|
DEFAULT | use_pub_sub |
use_router_proxy
¶Type: | boolean |
---|---|
Default: | false |
Use ROUTER remote proxy.
Group | Name |
---|---|
DEFAULT | use_router_proxy |
use_dynamic_connections
¶Type: | boolean |
---|---|
Default: | false |
This option makes direct connections dynamic or static. It makes sense only with use_router_proxy=False which means to use direct connections for direct message types (ignored otherwise).
zmq_failover_connections
¶Type: | integer |
---|---|
Default: | 2 |
How many additional connections to a host will be made for failover reasons. This option only applies in dynamic connections mode.
rpc_zmq_min_port
¶Type: | port number |
---|---|
Default: | 49153 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Minimum port number for the random ports range.
Group | Name |
---|---|
DEFAULT | rpc_zmq_min_port |
rpc_zmq_max_port
¶Type: | integer |
---|---|
Default: | 65536 |
Minimum Value: | 1 |
Maximum Value: | 65536 |
Maximum port number for the random ports range.
Group | Name |
---|---|
DEFAULT | rpc_zmq_max_port |
rpc_zmq_bind_port_retries
¶Type: | integer |
---|---|
Default: | 100 |
Number of retries to find a free port number before failing with ZMQBindError.
Group | Name |
---|---|
DEFAULT | rpc_zmq_bind_port_retries |
rpc_zmq_serialization
¶Type: | string |
---|---|
Default: | json |
Valid Values: | json, msgpack |
Default serialization mechanism for serializing/deserializing outgoing/incoming messages
Group | Name |
---|---|
DEFAULT | rpc_zmq_serialization |
zmq_immediate
¶Type: | boolean |
---|---|
Default: | true |
This option configures round-robin mode in the zmq socket. True means not keeping a queue when the server side disconnects. False means to keep the queue and messages even if the server is disconnected; when the server reappears, all accumulated messages are sent to it.
zmq_tcp_keepalive
¶Type: | integer |
---|---|
Default: | -1 |
Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any other negative value) means to skip any overrides and leave it to OS default; 0 and 1 (or any other positive value) mean to disable and enable the option respectively.
zmq_tcp_keepalive_idle
¶Type: | integer |
---|---|
Default: | -1 |
The duration between two keepalive transmissions in idle condition. The unit is platform dependent, for example, seconds in Linux, milliseconds in Windows etc. The default value of -1 (or any other negative value and 0) means to skip any overrides and leave it to OS default.
zmq_tcp_keepalive_cnt
¶Type: | integer |
---|---|
Default: | -1 |
The number of retransmissions to be carried out before declaring that remote end is not available. The default value of -1 (or any other negative value and 0) means to skip any overrides and leave it to OS default.
zmq_tcp_keepalive_intvl
¶Type: | integer |
---|---|
Default: | -1 |
The duration between two successive keepalive retransmissions, if acknowledgement to the previous keepalive transmission is not received. The unit is platform dependent, for example, seconds in Linux, milliseconds in Windows etc. The default value of -1 (or any other negative value and 0) means to skip any overrides and leave it to OS default.
rpc_thread_pool_size
¶Type: | integer |
---|---|
Default: | 100 |
Maximum number of (green) threads to work concurrently.
rpc_message_ttl
¶Type: | integer |
---|---|
Default: | 300 |
Expiration timeout in seconds of a sent/received message after which it is not tracked anymore by a client/server.
rpc_use_acks
¶Type: | boolean |
---|---|
Default: | false |
Wait for message acknowledgements from receivers. This mechanism works only via proxy without PUB/SUB.
rpc_ack_timeout_base
¶Type: | integer |
---|---|
Default: | 15 |
Number of seconds to wait for an ack from a cast/call. After each retry attempt this timeout is multiplied by some specified multiplier.
rpc_ack_timeout_multiplier
¶Type: | integer |
---|---|
Default: | 2 |
Number to multiply base ack timeout by after each retry attempt.
rpc_retry_attempts
¶Type: | integer |
---|---|
Default: | 3 |
Default number of message sending attempts in case of problems: a positive value N means at most N retries, 0 means no retries, and None or -1 (or any other negative value) means to retry forever. This option is used only if acknowledgments are enabled.
subscribe_on
¶Type: | list |
---|---|
Default: | '' |
List of publisher hosts SubConsumer can subscribe on. This option has higher priority than the default publishers list taken from the matchmaker.
executor_thread_pool_size
¶Type: | integer |
---|---|
Default: | 64 |
Size of executor thread pool when executor is threading or eventlet.
Group | Name |
---|---|
DEFAULT | rpc_thread_pool_size |
rpc_response_timeout
¶Type: | integer |
---|---|
Default: | 60 |
Seconds to wait for a response from a call.
transport_url
¶Type: | string |
---|---|
Default: | <None> |
The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is:
driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
Example: rabbit://rabbitmq:password@127.0.0.1:5672//
For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
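For instance, reusing the example URL above, a RabbitMQ-backed deployment would set something like the following; the user, password, host and port are placeholders:

```ini
[DEFAULT]
# Sketch only: credentials, host and port are placeholders taken from the
# example URL above.
transport_url = rabbit://rabbitmq:password@127.0.0.1:5672//
```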
rpc_backend
¶Type: | string |
---|---|
Default: | rabbit |
The messaging driver to use, defaults to rabbit. Other drivers include amqp and zmq.
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
control_exchange
¶Type: | string |
---|---|
Default: | openstack |
The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
Options under this group are used to define Nova API.
auth_strategy
¶Type: | string |
---|---|
Default: | keystone |
Valid Values: | keystone, noauth2 |
This determines the strategy to use for authentication: keystone or noauth2. 'noauth2' is designed for testing only, as it does no actual credential checking. 'noauth2' provides administrative credentials only if 'admin' is specified as the username.
Group | Name |
---|---|
DEFAULT | auth_strategy |
use_forwarded_for
¶Type: | boolean |
---|---|
Default: | false |
When True, the 'X-Forwarded-For' header is treated as the canonical remote address. When False (the default), the 'remote_address' header is used.
You should only enable this if you have an HTML sanitizing proxy.
Group | Name |
---|---|
DEFAULT | use_forwarded_for |
config_drive_skip_versions
¶Type: | string |
---|---|
Default: | 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 |
When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don't appear in this option. As of the Liberty release, the available versions are:
The option is in the format of a single string, with each version separated by a space.
Possible values:
Group | Name |
---|---|
DEFAULT | config_drive_skip_versions |
vendordata_providers
¶Type: | list |
---|---|
Default: | StaticJSON |
A list of vendordata providers.
Vendordata providers are how deployers can provide metadata, via config drive and the metadata service, that is specific to their deployment. There are currently two supported providers: StaticJSON and DynamicJSON.
StaticJSON reads a JSON file configured by the flag vendordata_jsonfile_path and places the JSON from that file into vendor_data.json and vendor_data2.json.
DynamicJSON is configured via the vendordata_dynamic_targets flag, which is documented separately. For each of the endpoints specified in that flag, a section is added to the vendor_data2.json.
For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | vendordata_providers |
vendordata_dynamic_targets
¶Type: | list |
---|---|
Default: | '' |
A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url>.
The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference.
Group | Name |
---|---|
DEFAULT | vendordata_dynamic_targets |
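A combined sketch of the two providers might look like the following; the [api] group name is assumed here, and the target name, URL, and JSON file path are placeholders:

```ini
[api]
# Sketch only: [api] group assumed; the target name, URL and file path
# are placeholders.
vendordata_providers = StaticJSON,DynamicJSON
vendordata_jsonfile_path = /etc/nova/vendor_data.json
vendordata_dynamic_targets = testing@http://127.0.0.1:9999/vendordata
```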
vendordata_dynamic_ssl_certfile
¶Type: | string |
---|---|
Default: | '' |
Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services' SSL certificates against.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | vendordata_dynamic_ssl_certfile |
vendordata_dynamic_connect_timeout
¶Type: | integer |
---|---|
Default: | 5 |
Minimum Value: | 3 |
Maximum wait time for an external REST service to connect.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | vendordata_dynamic_connect_timeout |
vendordata_dynamic_read_timeout
¶Type: | integer |
---|---|
Default: | 5 |
Minimum Value: | 0 |
Maximum wait time for an external REST service to return data once connected.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | vendordata_dynamic_read_timeout |
vendordata_dynamic_failure_fatal
¶Type: | boolean |
---|---|
Default: | false |
Should failures to fetch dynamic vendordata be fatal to instance boot?
Related options:
metadata_cache_expiration
¶Type: | integer |
---|---|
Default: | 15 |
Minimum Value: | 0 |
This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect.
Group | Name |
---|---|
DEFAULT | metadata_cache_expiration |
vendordata_jsonfile_path
¶Type: | string |
---|---|
Default: | <None> |
Cloud providers may store custom data in vendor data file that will then be available to the instances via the metadata service, and to the rendering of config-drive. The default class for this, JsonFileVendorData, loads this information from a JSON file, whose path is configured by this option. If there is no path set by this option, the class returns an empty dictionary.
Possible values:
Group | Name |
---|---|
DEFAULT | vendordata_jsonfile_path |
max_limit
¶Type: | integer |
---|---|
Default: | 1000 |
Minimum Value: | 0 |
As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option.
Group | Name |
---|---|
DEFAULT | osapi_max_limit |
compute_link_prefix
¶Type: | string |
---|---|
Default: | <None> |
This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged.
Possible values:
Group | Name |
---|---|
DEFAULT | osapi_compute_link_prefix |
glance_link_prefix
¶Type: | string |
---|---|
Default: | <None> |
This string is prepended to the normal URL that is returned in links to Glance resources. If it is empty (the default), the URLs are returned unchanged.
Possible values:
Group | Name |
---|---|
DEFAULT | osapi_glance_link_prefix |
allow_instance_snapshots
¶Type: | boolean |
---|---|
Default: | true |
Operators can turn off the ability for a user to take snapshots of their instances by setting this option to False. When disabled, any attempt to take a snapshot will result in an HTTP 400 response ("Bad Request").
Group | Name |
---|---|
DEFAULT | allow_instance_snapshots |
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | This option disables the createImage server action API in a non-discoverable way and is thus a barrier to interoperability. Also, it is not used for other APIs that create snapshots like shelve or createBackup. Disabling snapshots should be done via policy if so desired. |
---|
hide_server_address_states
¶Type: | list |
---|---|
Default: | building |
This option is a list of all instance states for which network address information should not be returned from the API.
Possible values:
A list of strings, where each string is a valid VM state, as defined in nova/compute/vm_states.py. As of the Newton release, they are:
Group | Name |
---|---|
DEFAULT | osapi_hide_server_address_states |
Warning
This option is deprecated for removal since 17.0.0. Its value may be silently ignored in the future.
Reason: | This option hides the server address in the server representation for the configured server states, which makes the behaviour of the GET server API dependent on this config option. Because of this, users cannot discover the API behaviour across different clouds, which leads to an interoperability issue. |
---|
fping_path
¶Type: | string |
---|---|
Default: | /usr/sbin/fping |
The full path to the fping binary.
Group | Name |
---|---|
DEFAULT | fping_path |
use_neutron_default_nets
¶Type: | boolean |
---|---|
Default: | false |
When True, the TenantNetworkController will query the Neutron API to get the default networks to use.
Related options:
Group | Name |
---|---|
DEFAULT | use_neutron_default_nets |
neutron_default_tenant_id
¶Type: | string |
---|---|
Default: | default |
Tenant ID (also referred to in some places as the 'project ID') used for getting the default network from the Neutron API.
Related options:
Group | Name |
---|---|
DEFAULT | neutron_default_tenant_id |
enable_instance_password
¶Type: | boolean |
---|---|
Default: | true |
Enables returning of the instance password by the relevant server API calls such as create, rebuild, evacuate, or rescue. If the hypervisor does not support password injection, then the password returned will not be correct, so if your hypervisor does not support password injection, set this to False.
Group | Name |
---|---|
DEFAULT | enable_instance_password |
The Nova API Database is a separate database which is used for information which is used across cells. This database is mandatory since the Mitaka release (13.0.0).
connection
¶Type: | string |
---|---|
Default: | <None> |
The SQLAlchemy connection string to use to connect to the database.
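A typical SQLAlchemy URL for this option might look like the sketch below; the driver, credentials, and host are placeholders to adjust for your deployment:

```ini
[api_database]
# Sketch only: driver, credentials and host are placeholders.
connection = mysql+pymysql://nova:secret@controller/nova_api
```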
sqlite_synchronous
¶Type: | boolean |
---|---|
Default: | true |
If True, SQLite uses synchronous mode.
slave_connection
¶Type: | string |
---|---|
Default: | <None> |
The SQLAlchemy connection string to use to connect to the slave database.
mysql_sql_mode
¶Type: | string |
---|---|
Default: | TRADITIONAL |
The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
connection_recycle_time
¶Type: | integer |
---|---|
Default: | 3600 |
Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool.
Group | Name |
---|---|
api_database | idle_timeout |
max_pool_size
¶Type: | integer |
---|---|
Default: | <None> |
Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit.
max_retries
¶Type: | integer |
---|---|
Default: | 10 |
Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
retry_interval
¶Type: | integer |
---|---|
Default: | 10 |
Interval between retries of opening a SQL connection.
max_overflow
¶Type: | integer |
---|---|
Default: | <None> |
If set, use this value for max_overflow with SQLAlchemy.
connection_debug
¶Type: | integer |
---|---|
Default: | 0 |
Verbosity of SQL debugging information: 0=None, 100=Everything.
connection_trace
¶Type: | boolean |
---|---|
Default: | false |
Add Python stack traces to SQL as comment strings.
pool_timeout
¶Type: | integer |
---|---|
Default: | <None> |
If set, use this value for pool_timeout with SQLAlchemy.
barbican_endpoint
¶Type: | string |
---|---|
Default: | <None> |
Use this endpoint to connect to Barbican, for example: "http://localhost:9311/"
barbican_api_version
¶Type: | string |
---|---|
Default: | <None> |
Version of the Barbican API, for example: "v1"
auth_endpoint
¶Type: | string |
---|---|
Default: | http://localhost/identity/v3 |
Use this endpoint to connect to Keystone
Group | Name |
---|---|
key_manager | auth_url |
retry_delay
¶Type: | integer |
---|---|
Default: | 1 |
Number of seconds to wait before retrying poll for key creation completion
number_of_retries
¶Type: | integer |
---|---|
Default: | 60 |
Number of times to retry poll for key creation completion
verify_ssl
¶Type: | boolean |
---|---|
Default: | true |
Specifies whether to verify TLS (https) requests. If False, the server's certificate will not be validated.
config_prefix
¶Type: | string |
---|---|
Default: | cache.oslo |
Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name.
expiration_time
¶Type: | integer |
---|---|
Default: | 600 |
Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn't have an explicit cache expiration time defined for it.
backend
¶Type: | string |
---|---|
Default: | dogpile.cache.null |
Valid Values: | oslo_cache.memcache_pool, oslo_cache.dict, oslo_cache.mongo, oslo_cache.etcd3gw, dogpile.cache.memcached, dogpile.cache.pylibmc, dogpile.cache.bmemcached, dogpile.cache.dbm, dogpile.cache.redis, dogpile.cache.memory, dogpile.cache.memory_pickle, dogpile.cache.null |
Cache backend module. For eventlet-based or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with less than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend.
backend_argument
¶Type: | multi-valued |
---|---|
Default: | '' |
Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>".
proxies
¶Type: | list |
---|---|
Default: | '' |
Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior.
enabled
¶Type: | boolean |
---|---|
Default: | false |
Global toggle for caching.
debug_cache_backend
¶Type: | boolean |
---|---|
Default: | false |
Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false.
memcache_servers
¶Type: | list |
---|---|
Default: | localhost:11211 |
Memcache servers in the format of "host:port". (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
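Combining the caching options above, a sketch that enables a pooled memcached backend could look like this; the [cache] group name is assumed and the server addresses are placeholders:

```ini
[cache]
# Sketch only: [cache] group assumed; server addresses are placeholders.
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = 192.0.2.10:11211,192.0.2.11:11211
```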
memcache_dead_retry
¶Type: | integer |
---|---|
Default: | 300 |
Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
memcache_socket_timeout
¶Type: | integer |
---|---|
Default: | 3 |
Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
memcache_pool_maxsize
¶Type: | integer |
---|---|
Default: | 10 |
Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only).
memcache_pool_unused_timeout
¶Type: | integer |
---|---|
Default: | 60 |
Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only).
memcache_pool_connection_get_timeout
¶Type: | integer |
---|---|
Default: | 10 |
Number of seconds that an operation will wait to get a memcache client connection.
DEPRECATED: Cells options allow you to use cells v1 functionality in an OpenStack deployment. Note that the options in this group are only for cells v1 functionality, which is considered experimental and not recommended for new deployments. Cells v1 is being replaced with cells v2, which starting in the 15.0.0 Ocata release is required and all Nova deployments will be at least a cells v2 cell of one.
enable
¶Type: | boolean |
---|---|
Default: | false |
Enable cell v1 functionality.
Note that cells v1 is considered experimental and not recommended for new Nova deployments. Cells v1 is being replaced by cells v2; starting in the 15.0.0 Ocata release, all Nova deployments are at least a cells v2 cell of one. Setting this option, or any other options in the [cells] group, is not required for cells v2.
When this functionality is enabled, it lets you scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker.
Related options:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
name
¶Type: | string |
---|---|
Default: | nova |
Name of the current cell.
This value must be unique for each cell. The name of a cell is used as its ID; leaving this option unset or setting the same name for two or more cells may cause unexpected behaviour.
Related options:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
capabilities
¶Type: | list |
---|---|
Default: | hypervisor=xenserver;kvm,os=linux;windows |
Cell capabilities.
List of arbitrary key=value pairs defining capabilities of the current cell to be sent to the parent cells. These capabilities are intended to be used in cells scheduler filters/weighers.
Possible values:
hypervisor=xenserver;kvm,os=linux;windows
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
call_timeout
¶Type: | integer |
---|---|
Default: | 60 |
Minimum Value: | 0 |
Call timeout.
The cell messaging module waits for response(s) to be put into the eventlet queue. This option defines the number of seconds to wait for a response from a call to a cell.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
reserve_percent
¶Type: | floating point |
---|---|
Default: | 10.0 |
Reserve percentage
Percentage of cell capacity to hold in reserve, so the minimum amount of free resource is considered to be:
min_free = total * (reserve_percent / 100.0)
This option affects both memory and disk utilization.
The primary purpose of this reserve is to ensure some space is available for users who want to resize their instance to be larger. Note that currently once the capacity expands into this reserve space this option is ignored.
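For illustration (arbitrary numbers): with the default reserve_percent of 10.0 and 512 GB of RAM in a cell, min_free = 512 * (10.0 / 100.0) = 51.2 GB would be held in reserve; the same calculation applies to disk.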
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
cell_type
¶Type: | string |
---|---|
Default: | compute |
Valid Values: | api, compute |
Type of cell.
When the cells feature is enabled, the hosts in the OpenStack Compute cloud are partitioned into groups. Cells are configured as a tree. The top-level cell's cell_type must be set to api. All other cells are defined as a compute cell by default.
Related option:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
mute_child_interval
¶Type: | integer |
---|---|
Default: | 300 |
Mute child interval.
Number of seconds after which a lack of capability and capacity updates signals that the child cell is to be treated as a mute cell. The mute child cell will then be weighed in a way that highly recommends it be skipped.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
bandwidth_update_interval
¶Type: | integer |
---|---|
Default: | 600 |
Bandwidth update interval.
Seconds between bandwidth usage cache updates for cells.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
instance_update_sync_database_limit
¶Type: | integer |
---|---|
Default: | 100 |
Instance update sync database limit.
Number of instances to pull from the database at one time for a sync. If there are more instances to update the results will be paged through.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
mute_weight_multiplier
¶Type: | floating point |
---|---|
Default: | -10000.0 |
Mute weight multiplier.
Multiplier used to weigh mute children. Mute child cells are recommended to be skipped, so their weight is multiplied by this negative value.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
ram_weight_multiplier
¶Type: | floating point |
---|---|
Default: | 10.0 |
Ram weight multiplier.
Multiplier used for weighing ram. Negative numbers indicate that Compute should stack VMs on one host instead of spreading out new VMs to more hosts in the cell.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
offset_weight_multiplier
¶Type: | floating point |
---|---|
Default: | 1.0 |
Offset weight multiplier
Multiplier used to weigh offset weigher. Cells with higher weight_offsets in the DB will be preferred. The weight_offset is a property of a cell stored in the database. It can be used by a deployer to have scheduling decisions favor or disfavor cells based on the setting.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
instance_updated_at_threshold
¶Type: | integer |
---|---|
Default: | 3600 |
Instance updated at threshold
Number of seconds after an instance was updated or deleted to continue to update cells. This option lets the cells manager attempt to sync only instances that have been updated recently, i.e. a threshold of 3600 means to sync only instances that have been modified in the last hour.
Possible values:
Related options:
instance_update_num_instances: used together with this option's value in a periodic task run.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
instance_update_num_instances
¶Type: | integer |
---|---|
Default: | 1 |
Instance update num instances
On every run of the periodic task, the nova cells manager will attempt to sync instance_update_num_instances instances. When the manager gets the list of instances, it shuffles them so that multiple nova-cells services do not attempt to sync the same instances in lockstep.
Possible values:
Related options:
instance_updated_at_threshold: used together with this option's value in a periodic task run.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
max_hop_count
¶Type: | integer |
---|---|
Default: | 10 |
Maximum hop count
When processing a targeted message, if the local cell is not the target, a route is defined between neighbouring cells and the message is processed across the whole routing path. This option defines the maximum number of hops allowed until the target is reached.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
scheduler
¶Type: | string |
---|---|
Default: | nova.cells.scheduler.CellsScheduler |
Cells scheduler.
The class of the driver used by the cells scheduler. This should be the full Python path to the class to be used. If nothing is specified in this option, the CellsScheduler is used.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
rpc_driver_queue_base
¶Type: | string |
---|---|
Default: | cells.intercell |
RPC driver queue base.
When sending a message to another cell by JSON-ifying the message and making an RPC cast to 'process_message', a base queue is used. This option defines the base queue name to be used when communicating between cells. Various topics by message type will be appended to this.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
scheduler_filter_classes
¶Type: | list |
---|---|
Default: | nova.cells.filters.all_filters |
Scheduler filter classes.
Filter classes the cells scheduler should use. An entry of "nova.cells.filters.all_filters" maps to all cells filters included with nova. As of the Mitaka release the following filter classes are available:
Different cell filter: A scheduler hint of 'different_cell' with a value of a full cell name may be specified to route a build away from a particular cell.
Image properties filter: Image metadata named 'hypervisor_version_requires' with a version specification may be specified to ensure the build goes to a cell which has hypervisors of the required version. If either the version requirement on the image or the hypervisor capability of the cell is not present, this filter returns without filtering out the cells.
Target cell filter: A scheduler hint of 'target_cell' with a value of a full cell name may be specified to route a build to a particular cell. No error handling is done as there's no way to know whether the full path is valid.
As an admin user, you can also add a filter that directs builds to a particular cell.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
scheduler_weight_classes
¶Type: | list |
---|---|
Default: | nova.cells.weights.all_weighers |
Scheduler weight classes.
Weigher classes the cells scheduler should use. An entry of "nova.cells.weights.all_weighers" maps to all cell weighers included with nova. As of the Mitaka release the following weight classes are available:
mute_child: Downgrades the likelihood that child cells which haven't sent capacity or capability updates in a while will be chosen for scheduling requests. Options include mute_weight_multiplier (multiplier for mute children; value should be negative).
ram_by_instance_type: Select cells with the most RAM capacity for the instance type being requested. Because higher weights win, Compute returns the number of available units for the instance type requested. The ram_weight_multiplier option defaults to 10.0, which scales the weight by a factor of 10. Use a negative number to stack VMs on one host instead of spreading out new VMs to more hosts in the cell.
weight_offset: Allows modifying the database to weight a particular cell. The highest weight will be the first cell to be scheduled for launching an instance. When the weight_offset of a cell is set to 0, it is unlikely to be picked, but it could be picked if other cells have a lower weight, for example if they're full. When the weight_offset is set to a very high value (for example, '999999999999999'), it is likely to be picked if no other cell has a higher weight.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
scheduler_retries
¶Type: | integer |
---|---|
Default: | 10 |
Scheduler retries.
Specifies how many times the scheduler tries to launch a new instance when no cells are available.
Possible values:
Related options:
scheduler_retry_delay: the delay (in seconds) between attempts while retrying to find a suitable cell.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
scheduler_retry_delay
¶Type: | integer |
---|---|
Default: | 2 |
Scheduler retry delay.
Specifies the delay (in seconds) between scheduling retries when no cell can be found to place the new instance on. If the instance could not be scheduled to a cell after scheduler_retries attempts in combination with scheduler_retry_delay, the scheduling of the instance fails.
Possible values:
Related options:
scheduler_retries: the number of attempts made while retrying to find a suitable cell.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
db_check_interval
¶Type: | integer |
---|---|
Default: | 60 |
DB check interval.
The cell state manager updates the status of all cells from the DB only after this interval has passed; otherwise cached statuses are used. If this value is 0 or negative, all cell statuses are updated from the DB whenever a state is needed.
Possible values:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
cells_config
¶Type: | string |
---|---|
Default: | <None> |
Optional cells configuration.
Configuration file from which to read cells configuration. If given, overrides reading cells from the database.
Cells store all inter-cell communication data, including user names and passwords, in the database. Because the cells data is not updated very frequently, use this option to specify a JSON file to store cells data. With this configuration, the database is no longer consulted when reloading the cells data. The file must have columns present in the Cell model (excluding common database fields and the id column). You must specify the queue connection information through a transport_url field, instead of username, password, and so on.
The transport_url has the following form: rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
Possible values:
The scheme can be either qpid or rabbit. The following sample shows this optional configuration:
{
    "parent": {
        "name": "parent",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": true
    },
    "cell1": {
        "name": "cell1",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit1.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": false
    },
    "cell2": {
        "name": "cell2",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit2.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": false
    }
}
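As a minimal sketch (the path shown is illustrative), the JSON above would be saved to a file and referenced from the [cells] group of nova.conf:
[cells]
# Read cells data from this JSON file instead of the database
cells_config = /etc/nova/cells.json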
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | Cells v1 is being replaced with Cells v2. |
---|
catalog_info
¶Type: | string |
---|---|
Default: | volumev3:cinderv3:publicURL |
Info to match when looking for cinder in the service catalog.
Possible values:
Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release.
Related options:
endpoint_template
¶Type: | string |
---|---|
Default: | <None> |
If this option is set then it will override service catalog lookup with this template for cinder endpoint
Possible values:
Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release.
Related options:
os_region_name
¶Type: | string |
---|---|
Default: | <None> |
Region name of this node. This is used when picking the URL in the service catalog.
Possible values:
http_retries
¶Type: | integer |
---|---|
Default: | 3 |
Minimum Value: | 0 |
Number of times cinderclient should retry on any failed http call. 0 means the connection is attempted only once. Setting it to any positive integer means that on failure the connection is retried that many times, e.g. setting it to 3 means the total number of connection attempts will be 4.
Possible values:
cross_az_attach
¶Type: | boolean |
---|---|
Default: | true |
Allow attach between instance and volume in different availability zones.
If False, volumes attached to an instance must be in the same availability zone in Cinder as the instance availability zone in Nova. This also means care should be taken when booting an instance from a volume where source is not "volume" because Nova will attempt to create a volume using the same availability zone as what is assigned to the instance. If that AZ is not in Cinder (or allow_availability_zone_fallback=False in cinder.conf), the volume create request will fail and the instance will fail the build request. By default there is no availability zone restriction on volume attach.
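For illustration only, a deployment that wants to pin the Cinder region and forbid cross-AZ attachments might set something like the following in the [cinder] group (values here are hypothetical):
[cinder]
# Match the cinderv3 service in the catalog (service_type:service_name:endpoint_type)
catalog_info = volumev3:cinderv3:publicURL
# Pick catalog endpoints from this region
os_region_name = RegionOne
# Require instance and volume to be in the same availability zone
cross_az_attach = False
# Retry failed cinderclient HTTP calls up to 3 times (4 attempts total)
http_retries = 3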
cafile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate cert file
keyfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate key file
insecure
¶Type: | boolean |
---|---|
Default: | false |
Verify HTTPS connections.
timeout
¶Type: | integer |
---|---|
Default: | <None> |
Timeout value for http requests
auth_type
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication type to load
Group | Name |
---|---|
cinder | auth_plugin |
auth_section
¶Type: | unknown type |
---|---|
Default: | <None> |
Config Section from which to load plugin specific options
auth_url
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication URL
system_scope
¶Type: | unknown type |
---|---|
Default: | <None> |
Scope for system operations
domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID to scope to
domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name to scope to
project_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Project ID to scope to
project_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Project name to scope to
project_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID containing project
project_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name containing project
trust_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Trust ID
default_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
default_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
user_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User ID
user_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain id
user_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain name
password
¶Type: | unknown type |
---|---|
Default: | <None> |
User's password
tenant_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Tenant ID
tenant_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Tenant Name
consecutive_build_service_disable_threshold
¶Type: | integer |
---|---|
Default: | 10 |
Enables reporting of build failures to the scheduler.
Any nonzero value will enable sending build failure statistics to the scheduler for use by the BuildFailureWeigher.
Possible values:
Related options:
resource_provider_association_refresh
¶Type: | integer |
---|---|
Default: | 300 |
Minimum Value: | 1 |
Interval for updating nova-compute-side cache of the compute node resource provider's aggregates and traits info.
This option specifies the number of seconds between attempts to update a provider's aggregates and traits information in the local cache of the compute node.
Possible values:
live_migration_wait_for_vif_plug
¶Type: | boolean |
---|---|
Default: | false |
Determine if the source compute host should wait for a network-vif-plugged event from the (neutron) networking service before starting the actual transfer of the guest to the destination compute host.
If you set this option the same on all of your compute hosts, which you should do if you use the same networking backend universally, you do not have to worry about this.
Before starting the transfer of the guest, some setup occurs on the destination compute host, including plugging virtual interfaces. Depending on the networking backend on the destination host, a network-vif-plugged event may be triggered and then received on the source compute host, and the source compute can wait for that event to ensure networking is set up on the destination host before starting the guest transfer in the hypervisor.
By default, this is False for two reasons:
- Backward compatibility: deployments should test this out and ensure it works for them before enabling it.
- The compute service cannot reliably determine which types of virtual interfaces (port.binding:vif_type) will send network-vif-plugged events without an accompanying port binding:host_id change. Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least one known backend that will not currently work in this case, see bug https://launchpad.net/bugs/1755890 for more details.
Possible values:
- True: wait for network-vif-plugged events before starting guest transfer
- False: do not wait for network-vif-plugged events before starting guest transfer (this is how things have always worked before this option was introduced)
Related options:
- vif_plugging_is_fatal: if live_migration_wait_for_vif_plug is True and vif_plugging_timeout is greater than 0, and a timeout is reached, the live migration process will fail with an error but the guest transfer will not have started to the destination host
- vif_plugging_timeout: if live_migration_wait_for_vif_plug is True, this controls the amount of time to wait before timing out and either failing if vif_plugging_is_fatal is True, or simply continuing with the live migration
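As a hedged example, assuming this option lives in the [compute] group and the plugging timeout options in [DEFAULT] (as in contemporary Nova releases), a deployment using the same Open vSwitch backend everywhere might configure:
[compute]
# Wait on the source host for the network-vif-plugged event from Neutron
live_migration_wait_for_vif_plug = True

[DEFAULT]
# Fail the live migration if no event arrives within the timeout
vif_plugging_is_fatal = True
# Seconds to wait for the network-vif-plugged event
vif_plugging_timeout = 300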
Options under this group are used to define the Conductor's communication, which manager should act as a proxy between computes and the database, and finally, how many worker processes will be used.
topic
¶Type: | string |
---|---|
Default: | conductor |
Topic exchange name on which conductor nodes listen.
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | There is no need to let users choose the RPC topic for all services - there is little gain from this. Furthermore, it makes it really easy to break Nova by using this option. |
---|
workers
¶Type: | integer |
---|---|
Default: | <None> |
Number of workers for OpenStack Conductor service. The default will be the number of CPUs available.
Options under this group allow tuning the configuration of the console proxy service.
Note: the configuration of every compute service includes a console_host option, which allows selecting the console proxy service to connect to.
allowed_origins
¶Type: | list |
---|---|
Default: | '' |
Adds a list of allowed origins to the console websocket proxy to allow connections from other origin hostnames. The websocket proxy matches the host header with the origin header to prevent cross-site requests. This list specifies the values, other than the host, that are allowed in the origin header.
Possible values:
Group | Name |
---|---|
DEFAULT | console_allowed_origins |
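For example (hostnames are hypothetical, and the [console] group name is assumed from the deprecated DEFAULT/console_allowed_origins mapping above):
[console]
# Extra origin hostnames accepted by the console websocket proxy
allowed_origins = dashboard.example.com,alt-dashboard.example.com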
token_ttl
¶Type: | integer |
---|---|
Default: | 600 |
Minimum Value: | 0 |
The lifetime of a console auth token (in seconds).
A console auth token is used in authorizing console access for a user. Once the auth token time to live count has elapsed, the token is considered expired. Expired tokens are then deleted.
Group | Name |
---|---|
DEFAULT | console_token_ttl |
allowed_origin
¶Type: | list |
---|---|
Default: | <None> |
Indicate whether this resource may be shared with the domain received in the request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com
allow_credentials
¶Type: | boolean |
---|---|
Default: | true |
Indicate that the actual request can include user credentials
expose_headers
¶Type: | list |
---|---|
Default: | X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Service-Token |
Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers.
max_age
¶Type: | integer |
---|---|
Default: | 3600 |
Maximum cache age of CORS preflight requests.
allow_methods
¶Type: | list |
---|---|
Default: | GET,PUT,POST,DELETE,PATCH |
Indicate which methods can be used during the actual request.
allow_headers
¶Type: | list |
---|---|
Default: | X-Auth-Token,X-Openstack-Request-Id,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id |
Indicate which header field names may be used during the actual request.
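A minimal sketch of these options in the [cors] group (the origin shown is hypothetical):
[cors]
# Allow the dashboard origin to make cross-origin requests to the API
allowed_origin = https://horizon.example.com
allow_credentials = true
# Cache preflight responses for one hour
max_age = 3600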
ca_file
¶Type: | string |
---|---|
Default: | cacert.pem |
Filename of root CA (Certificate Authority). This is a container format and includes root certificates.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | ca_file |
key_file
¶Type: | string |
---|---|
Default: | private/cakey.pem |
Filename of a private key.
Related options:
Group | Name |
---|---|
DEFAULT | key_file |
crl_file
¶Type: | string |
---|---|
Default: | crl.pem |
Filename of root Certificate Revocation List (CRL). This is a list of certificates that have been revoked, and therefore, entities presenting those (revoked) certificates should no longer be trusted.
Related options:
Group | Name |
---|---|
DEFAULT | crl_file |
keys_path
¶Type: | string |
---|---|
Default: | $state_path/keys |
Directory path where keys are located.
Related options:
Group | Name |
---|---|
DEFAULT | keys_path |
ca_path
¶Type: | string |
---|---|
Default: | $state_path/CA |
Directory path where root CA is located.
Related options:
Group | Name |
---|---|
DEFAULT | ca_path |
use_project_ca
¶Type: | boolean |
---|---|
Default: | false |
Option to enable/disable use of CA for each project.
Group | Name |
---|---|
DEFAULT | use_project_ca |
user_cert_subject
¶Type: | string |
---|---|
Default: | /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s |
Subject for certificate for users, %s for project, user, timestamp
Group | Name |
---|---|
DEFAULT | user_cert_subject |
project_cert_subject
¶Type: | string |
---|---|
Default: | /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s |
Subject for certificate for projects, %s for project, timestamp
Group | Name |
---|---|
DEFAULT | project_cert_subject |
use_tpool
¶Type: | boolean |
---|---|
Default: | false |
Enable the experimental use of thread pooling for all DB API calls
Group | Name |
---|---|
DEFAULT | dbapi_use_tpool |
sqlite_synchronous
¶Type: | boolean |
---|---|
Default: | true |
If True, SQLite uses synchronous mode.
Group | Name |
---|---|
DEFAULT | sqlite_synchronous |
backend
¶Type: | string |
---|---|
Default: | sqlalchemy |
The back end to use for the database.
Group | Name |
---|---|
DEFAULT | db_backend |
connection
¶Type: | string |
---|---|
Default: | <None> |
The SQLAlchemy connection string to use to connect to the database.
Group | Name |
---|---|
DEFAULT | sql_connection |
DATABASE | sql_connection |
sql | connection |
slave_connection
¶Type: | string |
---|---|
Default: | <None> |
The SQLAlchemy connection string to use to connect to the slave database.
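For illustration (credentials and hostnames are placeholders), connection and the optional slave_connection are standard SQLAlchemy URLs in the [database] group:
[database]
# Primary MySQL database for this cell
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
# Optional read-only replica for SELECT-heavy workloads
slave_connection = mysql+pymysql://nova:NOVA_DBPASS@db-replica/nova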
mysql_sql_mode
¶Type: | string |
---|---|
Default: | TRADITIONAL |
The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
mysql_enable_ndb
¶Type: | boolean |
---|---|
Default: | false |
If True, transparently enables support for handling MySQL Cluster (NDB).
connection_recycle_time
¶Type: | integer |
---|---|
Default: | 3600 |
Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool.
Group | Name |
---|---|
DATABASE | idle_timeout |
database | idle_timeout |
DEFAULT | sql_idle_timeout |
DATABASE | sql_idle_timeout |
sql | idle_timeout |
min_pool_size
¶Type: | integer |
---|---|
Default: | 1 |
Minimum number of SQL connections to keep open in a pool.
Group | Name |
---|---|
DEFAULT | sql_min_pool_size |
DATABASE | sql_min_pool_size |
max_pool_size
¶Type: | integer |
---|---|
Default: | 5 |
Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit.
Group | Name |
---|---|
DEFAULT | sql_max_pool_size |
DATABASE | sql_max_pool_size |
max_retries
¶Type: | integer |
---|---|
Default: | 10 |
Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
Group | Name |
---|---|
DEFAULT | sql_max_retries |
DATABASE | sql_max_retries |
retry_interval
¶Type: | integer |
---|---|
Default: | 10 |
Interval between retries of opening a SQL connection.
Group | Name |
---|---|
DEFAULT | sql_retry_interval |
DATABASE | reconnect_interval |
max_overflow
¶Type: | integer |
---|---|
Default: | 50 |
If set, use this value for max_overflow with SQLAlchemy.
Group | Name |
---|---|
DEFAULT | sql_max_overflow |
DATABASE | sqlalchemy_max_overflow |
connection_debug
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
Maximum Value: | 100 |
Verbosity of SQL debugging information: 0=None, 100=Everything.
Group | Name |
---|---|
DEFAULT | sql_connection_debug |
connection_trace
¶Type: | boolean |
---|---|
Default: | false |
Add Python stack traces to SQL as comment strings.
Group | Name |
---|---|
DEFAULT | sql_connection_trace |
pool_timeout
¶Type: | integer |
---|---|
Default: | <None> |
If set, use this value for pool_timeout with SQLAlchemy.
Group | Name |
---|---|
DATABASE | sqlalchemy_pool_timeout |
use_db_reconnect
¶Type: | boolean |
---|---|
Default: | false |
Enable the experimental use of database reconnect on connection lost.
db_retry_interval
¶Type: | integer |
---|---|
Default: | 1 |
Seconds between retries of a database transaction.
db_inc_retry_interval
¶Type: | boolean |
---|---|
Default: | true |
If True, increases the interval between retries of a database operation up to db_max_retry_interval.
db_max_retry_interval
¶Type: | integer |
---|---|
Default: | 10 |
If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
db_max_retries
¶Type: | integer |
---|---|
Default: | 20 |
Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.
enabled_vgpu_types
¶Type: | list |
---|---|
Default: | '' |
A list of the vGPU types enabled in the compute node.
Some pGPUs (e.g. NVIDIA GRID K1) support different vGPU types. Users can use this option to specify a list of enabled vGPU types that may be assigned to a guest instance. Note, however, that Nova only supports a single type in the Queens release; if more than one vGPU type is specified (as a comma-separated list), only the first one will be used. An example follows:
[devices]
enabled_vgpu_types = GRID K100,Intel GVT-g,MxGPU.2,nvidia-11
enabled
¶Type: | boolean |
---|---|
Default: | false |
Enables/disables LVM ephemeral storage encryption.
cipher
¶Type: | string |
---|---|
Default: | aes-xts-plain64 |
Cipher-mode string to be used.
The cipher and mode to be used to encrypt ephemeral storage. The set of cipher-mode combinations available depends on kernel support. According to the dm-crypt documentation, the cipher is expected to be in the format: "<cipher>-<chainmode>-<ivmode>".
Possible values:
Any cipher-mode combination supported by the kernel, as listed in /proc/crypto.
key_size
¶Type: | integer |
---|---|
Default: | 512 |
Minimum Value: | 1 |
Encryption key length in bits.
The bit length of the encryption key to be used to encrypt ephemeral storage. In XTS mode only half of the bits are used for encryption key.
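As a sketch, assuming these options live in the [ephemeral_storage_encryption] group:
[ephemeral_storage_encryption]
# Encrypt LVM-backed ephemeral disks
enabled = True
# dm-crypt cipher-mode string; must be supported by the kernel (see /proc/crypto)
cipher = aes-xts-plain64
# Key length in bits; XTS mode uses half of the bits for the encryption key
key_size = 512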
host_subset_size
¶Type: | integer |
---|---|
Default: | 1 |
Minimum Value: | 1 |
Size of subset of best hosts selected by scheduler.
New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option.
Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Possible values:
Group | Name |
---|---|
DEFAULT | scheduler_host_subset_size |
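For example, assuming the [filter_scheduler] group (the value is illustrative), a deployment with several scheduler workers might pick randomly among the five best hosts:
[filter_scheduler]
# Choose randomly among the N best hosts to reduce scheduler races
host_subset_size = 5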
max_io_ops_per_host
¶Type: | integer |
---|---|
Default: | 8 |
The number of instances that can be actively performing IO on a host.
Instances performing IO includes those in the following states: build, resize, snapshot, migrate, rescue, unshelve.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'io_ops_filter' filter is enabled.
Possible values:
Group | Name |
---|---|
DEFAULT | max_io_ops_per_host |
max_instances_per_host
¶Type: | integer |
---|---|
Default: | 50 |
Minimum Value: | 1 |
Maximum number of instances that can be active on a host.
If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The num_instances_filter will reject any host that has at least as many instances as this option's value.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'num_instances_filter' filter is enabled.
Possible values:
Group | Name |
---|---|
DEFAULT | max_instances_per_host |
track_instance_changes
¶Type: | boolean |
---|---|
Default: | true |
Enable querying of individual hosts for instance information.
The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host.
If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
NOTE: In a multi-cell (v2) setup where the cell MQ is separated from the top-level, computes cannot directly communicate with the scheduler. Thus, this option cannot be enabled in that scenario. See also the [workarounds]/disable_group_policy_check_upcall option.
Group | Name |
---|---|
DEFAULT | scheduler_tracks_instance_changes |
available_filters
¶Type: | multi-valued |
---|---|
Default: | nova.scheduler.filters.all_filters |
Filters that the scheduler can use.
An unordered list of the filter classes the nova scheduler may apply. Only the filters specified in the 'enabled_filters' option will be used, but any filter appearing in that option must also be included in this list.
By default, this is set to all filters that are included with nova.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | scheduler_available_filters |
enabled_filters
¶Type: | list |
---|---|
Default: | RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter |
Filters that the scheduler will use.
An ordered list of filter class names that will be used for filtering hosts. These filters will be applied in the order they are listed so place your most restrictive filters first to make the filtering process more efficient.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | scheduler_default_filters |
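A hedged sketch, assuming the [filter_scheduler] group, that keeps the default filter sources but narrows the enabled list (the filter selection shown is illustrative, not a recommendation):
[filter_scheduler]
# All filters shipped with nova remain loadable
available_filters = nova.scheduler.filters.all_filters
# Ordered list actually applied; most restrictive filters first
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter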
baremetal_enabled_filters
¶Type: | list |
---|---|
Default: | RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter |
Filters used for filtering baremetal hosts.
Filters are applied in order, so place your most restrictive filters first to make the filtering process more efficient.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | baremetal_scheduler_default_filters |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | These filters were used to overcome some of the baremetal scheduling limitations in Nova prior to the use of the Placement API. Now scheduling will use the custom resource class defined for each baremetal node to make its selection. |
---|
use_baremetal_filters
¶Type: | boolean |
---|---|
Default: | false |
Enable baremetal filters.
Set this to True to tell the nova scheduler that it should use the filters specified in the 'baremetal_enabled_filters' option. If you are not scheduling baremetal nodes, leave this at the default setting of False.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Related options:
Group | Name |
---|---|
DEFAULT | scheduler_use_baremetal_filters |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | These filters were used to overcome some of the baremetal scheduling limitations in Nova prior to the use of the Placement API. Now scheduling will use the custom resource class defined for each baremetal node to make its selection. |
---|
weight_classes
¶Type: | list |
---|---|
Default: | nova.scheduler.weights.all_weighers |
Weighers that the scheduler will use.
Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the previous weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is 'scheduler_host_subset_size'.
By default, this is set to all weighers that are included with Nova.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Possible values:
Group | Name |
---|---|
DEFAULT | scheduler_weight_classes |
ram_weight_multiplier
¶Type: | floating point |
---|---|
Default: | 1.0 |
RAM weight multiplier ratio.
This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'ram' weigher is enabled.
Possible values:
Group | Name |
---|---|
DEFAULT | ram_weight_multiplier |
disk_weight_multiplier
¶Type: | floating point |
---|---|
Default: | 1.0 |
Disk weight multiplier ratio.
Multiplier used for weighing free disk space. Negative numbers mean to stack vs spread.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'disk' weigher is enabled.
Possible values:
Group | Name |
---|---|
DEFAULT | disk_weight_multiplier |
io_ops_weight_multiplier
¶Type: | floating point |
---|---|
Default: | -1.0 |
IO operations weight multiplier ratio.
This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'io_ops' weigher is enabled.
Possible values:
Group | Name |
---|---|
DEFAULT | io_ops_weight_multiplier |
pci_weight_multiplier
¶Type: | floating point |
---|---|
Default: | 1.0 |
Minimum Value: | 0.0 |
PCI device affinity weight multiplier.
The PCI device affinity weigher computes a weighting based on the number of PCI devices on the host and the number of PCI devices requested by the instance. The NUMATopologyFilter filter must be enabled for this to have any significance. For more information, refer to the filter documentation:
Possible values:
soft_affinity_weight_multiplier
¶Type: | floating point |
---|---|
Default: | 1.0 |
Multiplier used for weighing hosts for group soft-affinity.
Possible values:
Group | Name |
---|---|
DEFAULT | soft_affinity_weight_multiplier |
soft_anti_affinity_weight_multiplier
¶Type: | floating point |
---|---|
Default: | 1.0 |
Multiplier used for weighing hosts for group soft-anti-affinity.
Possible values:
Group | Name |
---|---|
DEFAULT | soft_anti_affinity_weight_multiplier |
build_failure_weight_multiplier
¶Type: | floating point |
---|---|
Default: | 1000000.0 |
Multiplier used for weighing hosts that have had recent build failures.
This option determines how much weight is placed on a compute node with recent build failures. Build failures may indicate a failing, misconfigured, or otherwise ailing compute node, and avoiding it during scheduling may be beneficial. The weight is inversely proportional to the number of recent build failures the compute node has experienced. This value should be set to some high value to offset weight given by other enabled weighers due to available resources. To disable weighing compute hosts by the number of recent failures, set this to zero.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Possible values:
Related options:
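As an illustration of combining weighers (values are hypothetical and the [filter_scheduler] group is assumed), a deployment could stack instances by RAM while still steering away from recently failing hosts:
[filter_scheduler]
# Negative value packs instances onto hosts instead of spreading them
ram_weight_multiplier = -1.0
# Keep preferring hosts with lighter IO workloads
io_ops_weight_multiplier = -1.0
# Strongly penalise hosts with recent build failures (0 disables this weigher)
build_failure_weight_multiplier = 1000000.0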
shuffle_best_same_weighed_hosts
¶Type: | boolean |
---|---|
Default: | false |
Enable spreading the instances between hosts with the same best weight.
Enabling it is beneficial for cases when host_subset_size is 1 (default), but there is a large number of hosts with the same maximal weight. This scenario is common in Ironic deployments, where there are typically many baremetal nodes with identical weights returned to the scheduler. In such a case, enabling this option will reduce contention and the chance of rescheduling events. At the same time it will make the instance packing (even in the unweighed case) less dense.
image_properties_default_architecture
¶Type: | string |
---|---|
Default: | <None> |
Valid Values: | alpha, armv6, armv7l, armv7b, aarch64, cris, i686, ia64, lm32, m68k, microblaze, microblazeel, mips, mipsel, mips64, mips64el, openrisc, parisc, parisc64, ppc, ppcle, ppc64, ppc64le, ppcemb, s390, s390x, sh4, sh4eb, sparc, sparc64, unicore32, x86_64, xtensa, xtensaeb |
The default architecture to be used when using the image properties filter.
When using the ImagePropertiesFilter, it is possible that you want to define a default architecture to make the user experience easier and avoid having something like x86_64 images landing on aarch64 compute nodes because the user did not specify the 'hw_architecture' property in Glance.
Possible values:
isolated_images
¶Type: | list |
---|---|
Default: | '' |
List of UUIDs for images that can only be run on certain hosts.
If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'IsolatedHostsFilter' filter is enabled.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | isolated_images |
isolated_hosts
¶Type: | list |
---|---|
Default: | '' |
List of hosts that can only run certain images.
If there is a need to restrict some images to only run on certain designated hosts, list those host names here.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'IsolatedHostsFilter' filter is enabled.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | isolated_hosts |
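For illustration (the UUID and host names are hypothetical, and the [filter_scheduler] group is assumed), restricting a licensed image to two dedicated hosts with the IsolatedHostsFilter could look like:
[filter_scheduler]
# Image UUIDs that may only run on the isolated hosts
isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13
# Hosts reserved for the isolated images
isolated_hosts = licensed-host-01,licensed-host-02
# Keep non-isolated images off the isolated hosts
restrict_isolated_hosts_to_isolated_images = True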
restrict_isolated_hosts_to_isolated_images
¶Type: | boolean |
---|---|
Default: | true |
Prevent non-isolated images from being built on isolated hosts.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. Even then, this option doesn't affect the behavior of requests for isolated images, which will always be restricted to isolated hosts.
Related options:
Group | Name |
---|---|
DEFAULT | restrict_isolated_hosts_to_isolated_images |
aggregate_image_properties_isolation_namespace
¶Type: | string |
---|---|
Default: | <None> |
Image property namespace for use in the host aggregate.
Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'aggregate_image_properties_isolation' filter is enabled.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | aggregate_image_properties_isolation_namespace |
aggregate_image_properties_isolation_separator
¶Type: | string |
---|---|
Default: | . |
Separator character(s) for image property namespace and name.
When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'aggregate_image_properties_isolation' filter is enabled.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | aggregate_image_properties_isolation_separator |
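A minimal sketch, assuming the [filter_scheduler] group: with the settings below, only aggregate metadata keys starting with "img_isolation." would be matched against image properties by the aggregate_image_properties_isolation filter (the namespace value is hypothetical):
[filter_scheduler]
aggregate_image_properties_isolation_namespace = img_isolation
aggregate_image_properties_isolation_separator = .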
Configuration options for the Image service
api_servers
¶Type: | list |
---|---|
Default: | <None> |
List of glance api servers endpoints available to nova.
https is used for ssl-based glance api servers.
NOTE: The preferred mechanism for endpoint discovery is via keystoneauth1 loading options. Only use api_servers if you need multiple endpoints and are unable to use a load balancer for some reason.
Possible values:
num_retries
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
Enable glance operation retries.
Specifies the number of retries when uploading / downloading an image to / from glance. 0 means no retries.
allowed_direct_url_schemes
¶Type: | list |
---|---|
Default: | '' |
List of url schemes that can be directly accessed.
This option specifies a list of URL schemes that can be downloaded directly via the direct_url. The direct_url can be fetched from the image metadata and used by nova to get the image more efficiently; nova-compute could benefit from this by invoking a copy when it has access to the same file system as glance.
Possible values:
Warning
This option is deprecated for removal since 17.0.0. Its value may be silently ignored in the future.
Reason: | This was originally added for the 'nova.image.download.file' FileTransfer extension which was removed in the 16.0.0 Pike release. The 'nova.image.download.modules' extension point is not maintained and there is no indication of its use in production clouds. |
---|
verify_glance_signatures
¶Type: | boolean |
---|---|
Default: | false |
Enable image signature verification.
nova uses the image signature metadata from glance and verifies the signature of a signed image while downloading that image. If the image signature cannot be verified or if the image signature metadata is either incomplete or unavailable, then nova will not boot the image and instead will place the instance into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create servers.
Related options:
enable_certificate_validation
¶Type: | boolean |
---|---|
Default: | false |
Enable certificate validation for image signature verification.
During image signature verification nova will first verify the validity of the image's signing certificate using the set of trusted certificates associated with the instance. If certificate validation fails, signature verification will not be performed and the image will be placed into an error state. This provides end users with stronger assurances that the image data is unmodified and trustworthy. If left disabled, image signature verification can still occur but the end user will not have any assurance that the signing certificate used to generate the image signature is still trustworthy.
Related options:
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | This option is intended to ease the transition for deployments leveraging image signature verification. The intended state long-term is for signature verification and certificate validation to always happen together. |
---|
default_trusted_certificate_ids
¶Type: | list |
---|---|
Default: | '' |
List of certificate IDs for certificates that should be trusted.
May be used as a default list of trusted certificate IDs for certificate validation. The value of this option will be ignored if the user provides a list of trusted certificate IDs with an instance API request. The value of this option will be persisted with the instance data if signature verification and certificate validation are enabled and if the user did not provide an alternative list. If left empty when certificate validation is enabled the user must provide a list of trusted certificate IDs otherwise certificate validation will fail.
Related options:
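As a hedged example of enabling image signature verification in the [glance] group (the certificate ID shown is a placeholder):
[glance]
# Verify image signatures on download; unverifiable images put the instance in an error state
verify_glance_signatures = True
# Also validate the signing certificate against the trusted certificates
enable_certificate_validation = True
# Fallback trusted certificate IDs used when the user supplies none
default_trusted_certificate_ids = 79a6ad17-94f9-4b92-9c4a-1d61f4a5a0a5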
debug
¶Type: | boolean |
---|---|
Default: | false |
Enable or disable debug logging with glanceclient.
cafile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate cert file
keyfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate key file
insecure
¶Type: | boolean |
---|---|
Default: | false |
Verify HTTPS connections.
timeout
¶Type: | integer |
---|---|
Default: | <None> |
Timeout value for http requests
service_type
¶Type: | string |
---|---|
Default: | image |
The default service_type for endpoint URL discovery.
service_name
¶Type: | string |
---|---|
Default: | <None> |
The default service_name for endpoint URL discovery.
valid_interfaces
¶Type: | list |
---|---|
Default: | internal,public |
List of interfaces, in order of preference, for endpoint URL.
region_name
¶Type: | string |
---|---|
Default: | <None> |
The default region_name for endpoint URL discovery.
endpoint_override
¶Type: | string |
---|---|
Default: | <None> |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
libguestfs is a set of tools for accessing and modifying virtual machine (VM) disk images. You can use this for viewing and editing files inside guests, scripting changes to VMs, monitoring disk used/free statistics, creating guests, P2V, V2V, performing backups, cloning VMs, building VMs, formatting disks and resizing disks.
debug
¶Type: | boolean |
---|---|
Default: | false |
Enable/disables guestfs logging.
This configures guestfs to produce debug messages and push them to the OpenStack logging system. When set to True, it traces libguestfs API calls and enables verbose debug messages. In order to use this feature, the "libguestfs" package must be installed.
Related options: Since libguestfs accesses and modifies VMs managed by libvirt, the options below should be set to give access to those VMs.
- libvirt.inject_key
- libvirt.inject_partition
- libvirt.inject_password
path
¶Type: | string |
---|---|
Default: | /healthcheck |
The path to respond to healthcheck requests on.
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
detailed
¶Type: | boolean |
---|---|
Default: | false |
Show more detailed information as part of the response
backends
¶Type: | list |
---|---|
Default: | '' |
Additional backends that can perform health checks and report that information back as part of a request.
disable_by_file_path
¶Type: | string |
---|---|
Default: | <None> |
Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin.
disable_by_file_paths
¶Type: | list |
---|---|
Default: | '' |
Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin.
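For illustration, assuming the [healthcheck] group provided by oslo.middleware (the file path is hypothetical):
[healthcheck]
# Respond to health probes on this path
path = /healthcheck
# Include per-backend detail in the response
detailed = True
# Report the service as down while this file exists
disable_by_file_path = /etc/nova/healthcheck_disable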
The hyperv feature allows you to configure the Hyper-V hypervisor driver to be used within an OpenStack deployment.
dynamic_memory_ratio
¶Type: | floating point |
---|---|
Default: | 1.0 |
Dynamic memory ratio
Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup.
Possible values:
enable_instance_metrics_collection
¶Type: | boolean |
---|---|
Default: | false |
Enable instance metrics collection
Enables metrics collection for an instance by using Hyper-V's metric APIs. Collected data can be retrieved by other apps and services, e.g. Ceilometer.
instances_path_share
¶Type: | string |
---|---|
Default: | '' |
Instances path share
The name of a Windows share mapped to the "instances_path" dir and used by the resize feature to copy files to the target host. If left blank, an administrative share (hidden network share) will be used, looking for the same "instances_path" used locally.
Possible values:
Related options:
limit_cpu_features
¶Type: | boolean |
---|---|
Default: | false |
Limit CPU features
This flag is needed to support live migration to hosts with different CPU features and is checked during instance creation in order to limit the CPU features used by the instance.
mounted_disk_query_retry_count
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 0 |
Mounted disk query retry count
The number of times to retry checking for a mounted disk. The query runs until the device can be found or the retry count is reached.
Possible values:
Related options:
mounted_disk_query_retry_interval
¶Type: | integer |
---|---|
Default: | 5 |
Minimum Value: | 0 |
Mounted disk query retry interval
Interval between checks for a mounted disk, in seconds.
Possible values:
Related options:
power_state_check_timeframe
¶Type: | integer |
---|---|
Default: | 60 |
Minimum Value: | 0 |
Power state check timeframe
The timeframe to be checked for instance power state changes. This option is used to fetch the state of the instance from Hyper-V through the WMI interface, within the specified timeframe.
Possible values:
power_state_event_polling_interval
¶Type: | integer |
---|---|
Default: | 2 |
Minimum Value: | 0 |
Power state event polling interval
Instance power state change event polling frequency. Sets the listener interval for power state events to the given value. This option enhances the internal lifecycle notifications of instances that reboot themselves. It is unlikely that an operator has to change this value.
Possible values:
qemu_img_cmd
¶Type: | string |
---|---|
Default: | qemu-img.exe |
qemu-img command
qemu-img is required for some of the image related operations like converting between different image types. You can get it from here: (http://qemu.weilnetz.de/) or you can install the Cloudbase OpenStack Hyper-V Compute Driver (https://cloudbase.it/openstack-hyperv-driver/) which automatically sets the proper path for this config option. You can either give the full path of qemu-img.exe or set its path in the PATH environment variable and leave this option to the default value.
Possible values:
Related options:
vswitch_name
¶Type: | string |
---|---|
Default: | <None> |
External virtual switch name
The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available with the installation of the Hyper-V server role. The switch includes programmatically managed and extensible capabilities to connect virtual machines to both virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels. The vSwitch represented by this config option must be an external one (not internal or private).
Possible values:
wait_soft_reboot_seconds
¶Type: | integer |
---|---|
Default: | 60 |
Minimum Value: | 0 |
Wait soft reboot seconds
Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window.
Possible values:
config_drive_cdrom
¶Type: | boolean |
---|---|
Default: | false |
Configuration drive cdrom
OpenStack can be configured to write instance metadata to a configuration drive, which is then attached to the instance before it boots. The configuration drive can be attached as a disk drive (default) or as a CD drive.
Possible values:
Related options:
config_drive_inject_password
¶Type: | boolean |
---|---|
Default: | false |
Configuration drive inject password
Enables setting the admin password in the configuration drive image.
Related options:
volume_attach_retry_count
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 0 |
Volume attach retry count
The number of times to retry attaching a volume. Volume attachment is retried until success or the given retry count is reached.
Possible values:
Related options:
volume_attach_retry_interval
¶Type: | integer |
---|---|
Default: | 5 |
Minimum Value: | 0 |
Volume attach retry interval
Interval between volume attachment attempts, in seconds.
Possible values:
Related options:
enable_remotefx
¶Type: | boolean |
---|---|
Default: | false |
Enable RemoteFX feature
This requires at least one DirectX 11 capable graphics adapter on Windows / Hyper-V Server 2012 R2 or newer, and the RDS-Virtualization server feature must be enabled.
Instances with RemoteFX can be requested with the following flavor extra specs (see the example configuration below):
os:resolution. Guest VM screen resolution size. Acceptable values: 1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160 (3840x2160 is only available on Windows / Hyper-V Server 2016).
os:monitors. Guest VM number of monitors. Acceptable values: [1, 4] for Windows / Hyper-V Server 2012 R2; [1, 8] for Windows / Hyper-V Server 2016.
os:vram. Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. Acceptable values: 64, 128, 256, 512, 1024
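A hedged example of enabling RemoteFX and requesting it from a flavor; the [hyperv] group name and the flavor extra spec values below are chosen from the acceptable values listed above, and the flavor itself is hypothetical:
[hyperv]
enable_remotefx = true

# Hypothetical flavor extra specs, set for example with
# "openstack flavor set FLAVOR --property KEY=VALUE":
#   os:resolution=1920x1200
#   os:monitors=2
#   os:vram=1024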
use_multipath_io
¶Type: | boolean |
---|---|
Default: | false |
Use multipath connections when attaching iSCSI or FC disks.
This requires the Multipath IO Windows feature to be enabled. MPIO must be configured to claim such devices.
iscsi_initiator_list
¶Type: | list |
---|---|
Default: | '' |
List of iSCSI initiators that will be used for establishing iSCSI sessions.
If none are specified, the Microsoft iSCSI initiator service will choose the initiator.
Configuration options for the Ironic driver (Bare Metal). If using the Ironic driver, the following options must be set: * auth_type * auth_url * project_name * username * password * project_domain_id or project_domain_name * user_domain_id or user_domain_name
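A minimal [ironic] section sketch covering the required options listed above; every endpoint, project, and credential value is a placeholder:
[ironic]
auth_type = password
auth_url = http://keystone.example.org:5000/v3
project_name = service
project_domain_name = Default
username = ironic
user_domain_name = Default
password = IRONIC_PASSWORD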
api_endpoint
¶Type: | URI |
---|---|
Default: | http://ironic.example.org:6385/ |
URL override for the Ironic API endpoint.
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, api_endpoint will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead. |
---|
api_max_retries
¶Type: | integer |
---|---|
Default: | 60 |
Minimum Value: | 0 |
The number of times to retry when a request conflicts. If set to 0, only try once, no retries.
Related options:
api_retry_interval
¶Type: | integer |
---|---|
Default: | 2 |
Minimum Value: | 0 |
The number of seconds to wait before retrying the request.
Related options:
serial_console_state_timeout
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 0 |
Timeout (in seconds) to wait for the node serial console state to change. Set to 0 to disable the timeout.
cafile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate cert file
keyfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate key file
insecure
¶Type: | boolean |
---|---|
Default: | false |
If set to true, server certificates are not verified for HTTPS connections (insecure).
timeout
¶Type: | integer |
---|---|
Default: | <None> |
Timeout value for http requests
auth_type
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication type to load
Group | Name |
---|---|
ironic | auth_plugin |
auth_section
¶Type: | unknown type |
---|---|
Default: | <None> |
Config Section from which to load plugin specific options
auth_url
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication URL
system_scope
¶Type: | unknown type |
---|---|
Default: | <None> |
Scope for system operations
domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID to scope to
domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name to scope to
project_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Project ID to scope to
project_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Project name to scope to
project_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID containing project
project_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name containing project
trust_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Trust ID
user_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User ID
user_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain id
user_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain name
password
¶Type: | unknown type |
---|---|
Default: | <None> |
User's password
service_type
¶Type: | string |
---|---|
Default: | baremetal |
The default service_type for endpoint URL discovery.
service_name
¶Type: | string |
---|---|
Default: | <None> |
The default service_name for endpoint URL discovery.
valid_interfaces
¶Type: | list |
---|---|
Default: | internal,public |
List of interfaces, in order of preference, for endpoint URL.
region_name
¶Type: | string |
---|---|
Default: | <None> |
The default region_name for endpoint URL discovery.
endpoint_override
¶Type: | string |
---|---|
Default: | <None> |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
Group | Name |
---|---|
ironic | api_endpoint |
fixed_key
¶Type: | string |
---|---|
Default: | <None> |
Fixed key returned by key manager, specified in hex.
Possible values:
Group | Name |
---|---|
keymgr | fixed_key |
backend
¶Type: | string |
---|---|
Default: | barbican |
Specify the key manager implementation. Options are "barbican" and "vault". Default is "barbican". Will support the values earlier set using [key_manager]/api_class for some time.
Group | Name |
---|---|
key_manager | api_class |
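For example, selecting the key manager implementation in nova.conf, using the documented values "barbican" or "vault":
[key_manager]
backend = barbican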
auth_type
¶Type: | string |
---|---|
Default: | <None> |
The type of authentication credential to create. Possible values are 'token', 'password', 'keystone_token', and 'keystone_password'. Required if no context is passed to the credential factory.
token
¶Type: | string |
---|---|
Default: | <None> |
Token for authentication. Required for 'token' and 'keystone_token' auth_type if no context is passed to the credential factory.
username
¶Type: | string |
---|---|
Default: | <None> |
Username for authentication. Required for 'password' auth_type. Optional for the 'keystone_password' auth_type.
password
¶Type: | string |
---|---|
Default: | <None> |
Password for authentication. Required for 'password' and 'keystone_password' auth_type.
auth_url
¶Type: | string |
---|---|
Default: | <None> |
Use this endpoint to connect to Keystone.
user_id
¶Type: | string |
---|---|
Default: | <None> |
User ID for authentication. Optional for 'keystone_token' and 'keystone_password' auth_type.
user_domain_id
¶Type: | string |
---|---|
Default: | <None> |
User's domain ID for authentication. Optional for 'keystone_token' and 'keystone_password' auth_type.
user_domain_name
¶Type: | string |
---|---|
Default: | <None> |
User's domain name for authentication. Optional for 'keystone_token' and 'keystone_password' auth_type.
trust_id
¶Type: | string |
---|---|
Default: | <None> |
Trust ID for trust scoping. Optional for 'keystone_token' and 'keystone_password' auth_type.
domain_id
¶Type: | string |
---|---|
Default: | <None> |
Domain ID for domain scoping. Optional for 'keystone_token' and 'keystone_password' auth_type.
domain_name
¶Type: | string |
---|---|
Default: | <None> |
Domain name for domain scoping. Optional for 'keystone_token' and 'keystone_password' auth_type.
project_id
¶Type: | string |
---|---|
Default: | <None> |
Project ID for project scoping. Optional for 'keystone_token' and 'keystone_password' auth_type.
project_name
¶Type: | string |
---|---|
Default: | <None> |
Project name for project scoping. Optional for 'keystone_token' and 'keystone_password' auth_type.
project_domain_id
¶Type: | string |
---|---|
Default: | <None> |
Project's domain ID for project. Optional for 'keystone_token' and 'keystone_password' auth_type.
project_domain_name
¶Type: | string |
---|---|
Default: | <None> |
Project's domain name for project. Optional for 'keystone_token' and 'keystone_password' auth_type.
reauthenticate
¶Type: | boolean |
---|---|
Default: | true |
Allow fetching a new token if the current one is going to expire. Optional for 'keystone_token' and 'keystone_password' auth_type.
Configuration options for the identity service
cafile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate cert file
keyfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate key file
insecure
¶Type: | boolean |
---|---|
Default: | false |
If set to true, server certificates are not verified for HTTPS connections (insecure).
timeout
¶Type: | integer |
---|---|
Default: | <None> |
Timeout value for http requests
service_type
¶Type: | string |
---|---|
Default: | identity |
The default service_type for endpoint URL discovery.
service_name
¶Type: | string |
---|---|
Default: | <None> |
The default service_name for endpoint URL discovery.
valid_interfaces
¶Type: | list |
---|---|
Default: | internal,public |
List of interfaces, in order of preference, for endpoint URL.
region_name
¶Type: | string |
---|---|
Default: | <None> |
The default region_name for endpoint URL discovery.
endpoint_override
¶Type: | string |
---|---|
Default: | <None> |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
www_authenticate_uri
¶Type: | string |
---|---|
Default: | <None> |
Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint.
Group | Name |
---|---|
keystone_authtoken | auth_uri |
auth_uri
¶Type: | string |
---|---|
Default: | <None> |
Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release.
Warning
This option is deprecated for removal since Queens. Its value may be silently ignored in the future.
Reason: | The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. |
---|
auth_version
¶Type: | string |
---|---|
Default: | <None> |
API version of the admin Identity API endpoint.
delay_auth_decision
¶Type: | boolean |
---|---|
Default: | false |
Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components.
http_connect_timeout
¶Type: | integer |
---|---|
Default: | <None> |
Request timeout value for communicating with Identity API server.
http_request_max_retries
¶Type: | integer |
---|---|
Default: | 3 |
How many times are we trying to reconnect when communicating with Identity API Server.
cache
¶Type: | string |
---|---|
Default: | <None> |
Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers
option instead.
certfile
¶Type: | string |
---|---|
Default: | <None> |
Required if identity server requires client certificate
keyfile
¶Type: | string |
---|---|
Default: | <None> |
Required if identity server requires client certificate
cafile
¶Type: | string |
---|---|
Default: | <None> |
A PEM encoded Certificate Authority to use when verifying HTTPS connections. Defaults to system CAs.
insecure
¶Type: | boolean |
---|---|
Default: | false |
If set to true, server certificates are not verified for HTTPS connections (insecure).
region_name
¶Type: | string |
---|---|
Default: | <None> |
The region in which the identity server can be found.
signing_dir
¶Type: | string |
---|---|
Default: | <None> |
Directory used to cache files related to PKI tokens. This option has been deprecated in the Ocata release and will be removed in the P release.
Warning
This option is deprecated for removal since Ocata. Its value may be silently ignored in the future.
Reason: | PKI token format is no longer supported. |
---|
memcached_servers
¶Type: | list |
---|---|
Default: | <None> |
Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process.
Group | Name |
---|---|
keystone_authtoken | memcache_servers |
token_cache_time
¶Type: | integer |
---|---|
Default: | 300 |
In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
revocation_cache_time
¶Type: | integer |
---|---|
Default: | 10 |
Determines the frequency at which the list of revoked tokens is retrieved from the Identity service (in seconds). A high number of revocation events combined with a low cache duration may significantly reduce performance. Only valid for PKI tokens. This option has been deprecated in the Ocata release and will be removed in the P release.
Warning
This option is deprecated for removal since Ocata. Its value may be silently ignored in the future.
Reason: | PKI token format is no longer supported. |
---|
memcache_security_strategy
¶Type: | string |
---|---|
Default: | None |
Valid Values: | None, MAC, ENCRYPT |
(Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
memcache_secret_key
¶Type: | string |
---|---|
Default: | <None> |
(Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation.
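A hedged sketch combining the memcache options above; the server address and secret key are placeholders:
[keystone_authtoken]
memcached_servers = 192.0.2.10:11211
# ENCRYPT authenticates and encrypts cached token data; MAC only authenticates it.
memcache_security_strategy = ENCRYPT
# Mandatory once memcache_security_strategy is defined.
memcache_secret_key = REPLACE_WITH_SECRET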
memcache_pool_dead_retry
¶Type: | integer |
---|---|
Default: | 300 |
(Optional) Number of seconds memcached server is considered dead before it is tried again.
memcache_pool_maxsize
¶Type: | integer |
---|---|
Default: | 10 |
(Optional) Maximum total number of open connections to every memcached server.
memcache_pool_socket_timeout
¶Type: | integer |
---|---|
Default: | 3 |
(Optional) Socket timeout in seconds for communicating with a memcached server.
memcache_pool_unused_timeout
¶Type: | integer |
---|---|
Default: | 60 |
(Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed.
memcache_pool_conn_get_timeout
¶Type: | integer |
---|---|
Default: | 10 |
(Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool.
memcache_use_advanced_pool
¶Type: | boolean |
---|---|
Default: | false |
(Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x.
include_service_catalog
¶Type: | boolean |
---|---|
Default: | true |
(Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
enforce_token_bind
¶Type: | string |
---|---|
Default: | permissive |
Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens.
check_revocations_for_cached
¶Type: | boolean |
---|---|
Default: | false |
If true, the revocation list will be checked for cached tokens. This requires that PKI tokens are configured on the identity server.
Warning
This option is deprecated for removal since Ocata. Its value may be silently ignored in the future.
Reason: | PKI token format is no longer supported. |
---|
hash_algorithms
¶Type: | list |
---|---|
Default: | md5 |
Hash algorithms to use for hashing PKI tokens. This may be a single algorithm or multiple. The algorithms are those supported by Python standard hashlib.new(). The hashes will be tried in the order given, so put the preferred one first for performance. The result of the first hash will be stored in the cache. This will typically be set to multiple values only while migrating from a less secure algorithm to a more secure one. Once all the old tokens are expired this option should be set to a single value for better performance.
Warning
This option is deprecated for removal since Ocata. Its value may be silently ignored in the future.
Reason: | PKI token format is no longer supported. |
---|
service_token_roles
¶Type: | list |
---|---|
Default: | service |
A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check.
service_token_roles_required
¶Type: | boolean |
---|---|
Default: | false |
For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible.
auth_type
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication type to load
Group | Name |
---|---|
keystone_authtoken | auth_plugin |
auth_section
¶Type: | unknown type |
---|---|
Default: | <None> |
Config Section from which to load plugin specific options
Libvirt options allow cloud administrators to configure the libvirt hypervisor driver to be used within an OpenStack deployment.
Almost all of the libvirt config options are influenced by virt_type, which describes the virtualization type (or so-called domain type) libvirt should use for specific features such as live migration and snapshots.
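For instance, the virtualization type is selected in nova.conf under the [libvirt] group; the following sketch simply sets the documented default explicitly:
[libvirt]
# kvm is the default; the other valid values are lxc, qemu, uml, xen and parallels.
virt_type = kvm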
rescue_image_id
¶Type: | string |
---|---|
Default: | <None> |
The ID of the image to boot from to rescue data from a corrupted instance.
If the rescue REST API operation doesn't provide an ID of an image to use, the image which is referenced by this ID is used. If this option is not set, the image from the instance is used.
Possible values:
An image ID, or nothing. If the image is in Amazon's AMI/AKI/ARI format, consider setting rescue_kernel_id and rescue_ramdisk_id too. If nothing is set, the image of the instance is used.
Related options:
rescue_kernel_id: If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image.
rescue_ramdisk_id: If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image.
rescue_kernel_id
¶Type: | string |
---|---|
Default: | <None> |
The ID of the kernel (AKI) image to use with the rescue image.
If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image.
Possible values:
Related options:
rescue_image_id: If that option points to an image in Amazon's AMI/AKI/ARI image format, it's useful to use rescue_kernel_id too.
rescue_ramdisk_id
¶Type: | string |
---|---|
Default: | <None> |
The ID of the RAM disk (ARI) image to use with the rescue image.
If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image.
Possible values:
Related options:
rescue_image_id: If that option points to an image in Amazon's AMI/AKI/ARI image format, it's useful to use rescue_ramdisk_id too.
virt_type
¶Type: | string |
---|---|
Default: | kvm |
Valid Values: | kvm, lxc, qemu, uml, xen, parallels |
Describes the virtualization type (or so called domain type) libvirt should use.
The choice of this type must match the underlying virtualization strategy you have chosen for this host.
Possible values:
Related options:
connection_uri: depends on this
disk_prefix: depends on this
cpu_mode: depends on this
cpu_model: depends on this
connection_uri
¶Type: | string |
---|---|
Default: | '' |
Overrides the default libvirt URI of the chosen virtualization type.
If set, Nova will use this URI to connect to libvirt.
Possible values:
A URI such as qemu:///system or xen+ssh://oirase/, for example. This is only necessary if the URI differs from the commonly known URIs for the chosen virtualization type.
Related options:
virt_type: Influences what is used as the default value here.
inject_password
¶Type: | boolean |
---|---|
Default: | false |
Allow the injection of an admin password for an instance, only during the create and rebuild process.
There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the admin password, which is provided in the REST API call, will be injected as the password for the root user. If no root user is available, the instance won't be launched and an error is thrown. Be aware that the injection is not possible when the instance gets launched from a volume.
Possible values:
True: Allows the injection. False (default): Disallows the injection; any admin password provided via the REST API will be silently ignored.
Related options:
inject_partition: That option will decide about the discovery and usage of the file system. It can also disable the injection entirely.
inject_key
¶Type: | boolean |
---|---|
Default: | false |
Allow the injection of an SSH key at boot time.
There is no agent needed within the image to do this. If libguestfs is
available on the host, it will be used. Otherwise nbd is used. The file
system of the image will be mounted and the SSH key, which is provided
in the REST API call will be injected as SSH key for the root user and
appended to the authorized_keys
of that user. The SELinux context will
be set if necessary. Be aware that the injection is not possible when the
instance gets launched from a volume.
This config option will enable directly modifying the instance disk and does not affect what cloud-init may do using data from config_drive option or the metadata service.
Related options:
inject_partition: That option will decide about the discovery and usage of the file system. It can also disable the injection entirely.
inject_partition
¶Type: | integer |
---|---|
Default: | -2 |
Minimum Value: | -2 |
Determines how the file system is chosen to inject data into it.
libguestfs will be used as a first solution to inject data. If that's not available on the host, the image will be locally mounted on the host as a fallback solution. If libguestfs is not able to determine the root partition (because there are more or less than one root partition) or cannot mount the file system, it will result in an error and the instance won't boot.
Possible values:
Related options:
inject_key: If this option allows the injection of an SSH key, it depends on a value greater than or equal to -1 for inject_partition.
inject_password: If this option allows the injection of an admin password, it depends on a value greater than or equal to -1 for inject_partition.
guestfs: You can enable the debug log level of libguestfs with this config option. A more verbose output will help in debugging issues.
virt_type: If you use lxc as virt_type, it will be treated as a single partition image.
use_usb_tablet
¶Type: | boolean |
---|---|
Default: | true |
Enable a mouse cursor within graphical VNC or SPICE sessions.
This will only be taken into account if the VM is fully virtualized and VNC and/or SPICE is enabled. If the node doesn't support a graphical framebuffer, then it is valid to set this to False.
Related options:
* [vnc]enabled: If VNC is enabled, use_usb_tablet will have an effect.
* [spice]enabled + [spice]agent_enabled: If SPICE is enabled and the spice agent is disabled, the config value of use_usb_tablet will have an effect.
Warning
This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.
Reason: | This option is being replaced by the 'pointer_model' option. |
---|
live_migration_inbound_addr
¶Type: | string |
---|---|
Default: | <None> |
The IP address or hostname to be used as the target for live migration traffic.
If this option is set to None, the hostname of the migration target compute node will be used.
This option is useful in environments where the live-migration traffic can impact the network plane significantly. A separate network for live-migration traffic can then use this config option and avoid the impact on the management network.
Possible values:
Related options:
live_migration_tunnelled: The live_migration_inbound_addr value is ignored if tunneling is enabled.
live_migration_uri
¶Type: | string |
---|---|
Default: | <None> |
Live migration target URI to use.
Override the default libvirt live migration target URI (which is dependent on virt_type). Any included "%s" is replaced with the migration target hostname.
If this option is set to None (which is the default), Nova will automatically generate the live_migration_uri value based only on the 4 supported virt_type values in the following list:
Related options:
live_migration_inbound_addr: If the live_migration_inbound_addr value is not None and live_migration_tunnelled is False, the ip/hostname address of the target compute node is used instead of live_migration_uri as the uri for live migration.
live_migration_scheme: If live_migration_uri is not set, the scheme used for live migration is taken from live_migration_scheme instead.
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | live_migration_uri is deprecated for removal in favor of two other options that allow to change live migration scheme and target URI: live_migration_scheme and live_migration_inbound_addr respectively. |
---|
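Since live_migration_uri is deprecated, a hedged sketch of the replacement options described nearby; the scheme and address are placeholders for a deployment with a dedicated migration network:
[libvirt]
# Placeholder values; see live_migration_scheme and live_migration_inbound_addr.
live_migration_scheme = tcp
live_migration_inbound_addr = 192.0.2.50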
live_migration_scheme
¶Type: | string |
---|---|
Default: | <None> |
URI scheme used for live migration.
Override the default libvirt live migration scheme (which is dependent on virt_type). If this option is set to None, nova will automatically choose a sensible default based on the hypervisor. It is not recommended that you change this unless you are very sure that the hypervisor supports a particular scheme.
Related options:
virt_type: This option is meaningful only when virt_type is set to kvm or qemu.
live_migration_uri: If the live_migration_uri value is not None, the scheme used for live migration is taken from live_migration_uri instead.
live_migration_tunnelled
¶Type: | boolean |
---|---|
Default: | false |
Enable tunnelled migration.
This option enables the tunnelled migration feature, where migration data is transported over the libvirtd connection. If enabled, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example the availability of native encryption support in the hypervisor. Enabling this option will definitely impact performance massively.
Note that this option is NOT compatible with use of block migration.
Related options:
live_migration_inbound_addr: The live_migration_inbound_addr value is ignored if tunneling is enabled.
live_migration_bandwidth
¶Type: | integer |
---|---|
Default: | 0 |
Maximum bandwidth(in MiB/s) to be used during migration.
If set to 0, the hypervisor will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. Please refer to the libvirt documentation for further details.
live_migration_downtime
¶Type: | integer |
---|---|
Default: | 500 |
Minimum Value: | 100 |
Maximum permitted downtime, in milliseconds, for live migration switchover.
Will be rounded up to a minimum of 100ms. You can increase this value if you want to allow live-migrations to complete faster, or avoid live-migration timeout errors by allowing the guest to be paused for longer during the live-migration switch over.
Related options:
live_migration_downtime_steps
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 3 |
Number of incremental steps to reach max downtime value.
Will be rounded up to a minimum of 3 steps.
live_migration_downtime_delay
¶Type: | integer |
---|---|
Default: | 75 |
Minimum Value: | 3 |
Time to wait, in seconds, between each step increase of the migration downtime.
Minimum delay is 3 seconds. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB per device.
live_migration_completion_timeout
¶Type: | integer |
---|---|
Default: | 800 |
Mutable: | This option can be changed without restarting. |
Time to wait, in seconds, for migration to successfully complete transferring data before aborting the operation.
Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB. Should usually be larger than downtime delay * downtime steps. Set to 0 to disable timeouts.
Related options:
live_migration_progress_timeout
¶Type: | integer |
---|---|
Default: | 0 |
Mutable: | This option can be changed without restarting. |
Time to wait, in seconds, for migration to make forward progress in transferring data before aborting the operation.
Set to 0 to disable timeouts.
This is deprecated, and now disabled by default because we have found serious bugs in this feature that caused false live-migration timeout failures. This feature will be removed or replaced in a future release.
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Serious bugs found in this feature. |
---|
live_migration_permit_post_copy
¶Type: | boolean |
---|---|
Default: | false |
This option allows nova to switch an on-going live migration to post-copy mode, i.e., switch the active VM to the one on the destination node before the migration is complete, therefore ensuring an upper bound on the memory that needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0.
When permitted, post-copy mode will be automatically activated if a live-migration memory copy iteration does not make a progress increase of at least 10% over the last iteration.
The live-migration force complete API also uses post-copy when permitted. If post-copy mode is not available, force complete falls back to pausing the VM to ensure the live-migration operation will complete.
When using post-copy mode, if the source and destination hosts lose network connectivity, the VM being live-migrated will need to be rebooted. For more details, please see the Administration guide.
Related options:
- live_migration_permit_auto_converge
live_migration_permit_auto_converge
¶Type: | boolean |
---|---|
Default: | false |
This option allows nova to start live migration with auto converge on.
Auto converge throttles down the CPU if the progress of the on-going live migration is slow. Auto converge will only be used if this flag is set to True and post copy is not permitted or post copy is unavailable due to the version of libvirt and QEMU in use.
Related options:
- live_migration_permit_post_copy
snapshot_image_format
¶Type: | string |
---|---|
Default: | <None> |
Valid Values: | raw, qcow2, vmdk, vdi |
Determine the snapshot image format when sending to the image service.
If set, this decides what format is used when sending the snapshot to the image service. If not set, defaults to same type as source image.
Possible values:
raw: RAW disk format
qcow2: KVM default disk format
vmdk: VMWare default disk format
vdi: VirtualBox default disk format
disk_prefix
¶Type: | string |
---|---|
Default: | <None> |
Override the default disk prefix for the devices attached to an instance.
If set, this is used to identify a free disk device name for a bus.
Possible values:
Related options:
virt_type: Influences which device type is used, which determines the default disk prefix.
wait_soft_reboot_seconds
¶Type: | integer |
---|---|
Default: | 120 |
Number of seconds to wait for the instance to shut down after a soft reboot request is made. We fall back to a hard reboot if the instance does not shut down within this window.
cpu_mode
¶Type: | string |
---|---|
Default: | <None> |
Valid Values: | host-model, host-passthrough, custom, none |
Is used to set the CPU mode an instance should have.
If virt_type="kvm|qemu", it will default to "host-model", otherwise it will default to "none".
Possible values:
host-model: Clones the host CPU feature flags
host-passthrough: Use the host CPU model exactly
custom: Use a named CPU model
none: Don't set a specific CPU model. For instances with virt_type as KVM/QEMU, the default CPU model from QEMU will be used, which provides a basic set of CPU features that are compatible with most hosts.
Related options:
cpu_model: This should be set ONLY when cpu_mode is set to custom. Otherwise, it would result in an error and the instance launch will fail.
cpu_model
¶Type: | string |
---|---|
Default: | <None> |
Set the name of the libvirt CPU model the instance should use.
Possible values:
The named CPU models listed in /usr/share/libvirt/cpu_map.xml
Related options:
cpu_mode: This should be set to custom ONLY when you want to configure (via cpu_model) a specific named CPU model. Otherwise, it would result in an error and the instance launch will fail.
virt_type: Only the virtualization types kvm and qemu use this.
cpu_model_extra_flags
¶Type: | list |
---|---|
Default: | '' |
This allows specifying granular CPU feature flags when specifying CPU
models. For example, to explicitly specify the pcid
(Process-Context ID, an Intel processor feature) flag to the "IvyBridge"
virtual CPU model:
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_model_extra_flags = pcid
Currently, the choice is restricted to a few options: pcid, ssbd, virt-ssbd, amd-ssbd, and amd-no-ssb (the options are case-insensitive, so PCID is also valid, for example). These flags are now required to address the guest performance degradation as a result of applying the "Meltdown" CVE fixes (pcid) and exposure mitigation (ssbd and related options) on affected CPU models.
Note that when using this config attribute to set the 'PCID' and related CPU flags, not all virtual (i.e. libvirt / QEMU) CPU models need it:
For more information about ssbd
and related options,
please refer to the following security updates:
https://www.us-cert.gov/ncas/alerts/TA18-141A
https://www.redhat.com/archives/libvir-list/2018-May/msg01562.html
https://www.redhat.com/archives/libvir-list/2018-June/msg01111.html
For now, the cpu_model_extra_flags config attribute is valid only in combination with the cpu_mode + cpu_model options.
Besides custom, the libvirt driver has two other CPU modes: The default, host-model, tells it to do the right thing with respect to handling the 'PCID' CPU flag for the guest -- assuming you are running updated processor microcode, host and guest kernel, libvirt, and QEMU. The other mode, host-passthrough, checks if 'PCID' is available in the hardware, and if so directly passes it through to the Nova guests. Thus, in context of 'PCID', with either of these CPU modes (host-model or host-passthrough), there is no need to use cpu_model_extra_flags.
Related options:
snapshots_directory
¶Type: | string |
---|---|
Default: | $instances_path/snapshots |
Location where libvirt driver will store snapshots before uploading them to image service
xen_hvmloader_path
¶Type: | string |
---|---|
Default: | /usr/lib/xen/boot/hvmloader |
Location where the Xen hvmloader is kept
disk_cachemodes
¶Type: | list |
---|---|
Default: | '' |
Specific cache modes to use for different disk types.
For example: file=directsync,block=none,network=writeback
For local or direct-attached storage, it is recommended that you use writethrough (default) mode, as it ensures data integrity and has acceptable I/O performance for applications running in the guest, especially for read operations. However, caching mode none is recommended for remote NFS storage, because direct I/O operations (O_DIRECT) perform better than synchronous I/O operations (with O_SYNC). Caching mode none effectively turns all guest I/O operations into direct I/O operations on the host, which is the NFS client in this environment.
Possible cache modes:
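As a concrete illustration, the example from the description above could be written in nova.conf as:
[libvirt]
disk_cachemodes = file=directsync,block=none,network=writeback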
rng_dev_path
¶Type: | string |
---|---|
Default: | <None> |
A path to a device that will be used as source of entropy on the host. Permitted options are: /dev/random or /dev/hwrng
hw_machine_type
¶Type: | list |
---|---|
Default: | <None> |
For qemu or KVM guests, set this option to specify a default machine type per host architecture. You can find a list of supported machine types in your environment by checking the output of the "virsh capabilities" command. The format of the value for this config option is host-arch=machine-type. For example: x86_64=machinetype1,armv7l=machinetype2
sysinfo_serial
¶Type: | string |
---|---|
Default: | auto |
Valid Values: | none, os, hardware, auto |
The data source used to populate the host "serial" UUID exposed to the guest in the virtual BIOS.
mem_stats_period_seconds
¶Type: | integer |
---|---|
Default: | 10 |
The number of seconds in the memory usage statistics period. A zero or negative value disables memory usage statistics.
uid_maps
¶Type: | list |
---|---|
Default: | '' |
List of uid targets and ranges. Syntax is guest-uid:host-uid:count. Maximum of 5 allowed.
gid_maps
¶Type: | list |
---|---|
Default: | '' |
List of gid targets and ranges. Syntax is guest-gid:host-gid:count. Maximum of 5 allowed.
realtime_scheduler_priority
¶Type: | integer |
---|---|
Default: | 1 |
In a realtime host context, vCPUs for the guest will run at this scheduling priority. The priority range depends on the host kernel (usually 1-99).
enabled_perf_events
¶Type: | list |
---|---|
Default: | '' |
This is a performance event list which could be used as a monitor. These events will be passed to the libvirt domain xml while creating new instances. Event statistics data can then be collected from libvirt. The minimum libvirt version is 2.0.0. For more information about performance monitoring events, refer to https://libvirt.org/formatdomain.html#elementsPerf .
Possible values:
* A string list. For example: enabled_perf_events = cmt, mbml, mbmt
The supported events list can be found at https://libvirt.org/html/libvirt-libvirt-domain.html , where you may need to search for the keywords VIR_PERF_PARAM_*
images_type
¶Type: | string |
---|---|
Default: | default |
Valid Values: | raw, flat, qcow2, lvm, rbd, ploop, default |
VM Images format.
If default is specified, then use_cow_images flag is used instead of this one.
Related options:
images_volume_group
¶Type: | string |
---|---|
Default: | <None> |
LVM Volume Group that is used for VM images, when you specify images_type=lvm
Related options:
sparse_logical_volumes
¶Type: | boolean |
---|---|
Default: | false |
Create sparse logical volumes (with virtualsize) if this flag is set to True.
images_rbd_pool
¶Type: | string |
---|---|
Default: | rbd |
The RADOS pool in which rbd volumes are stored
images_rbd_ceph_conf
¶Type: | string |
---|---|
Default: | '' |
Path to the ceph configuration file to use
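A hedged sketch of Ceph-backed ephemeral disks, combining images_type with the RBD options above (and the rbd_user / rbd_secret_uuid options described later in this group); pool name, path, client name, and UUID are placeholders:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337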
hw_disk_discard
¶Type: | string |
---|---|
Default: | <None> |
Valid Values: | ignore, unmap |
Discard option for nova managed disks.
Requires:
image_info_filename_pattern
¶Type: | string |
---|---|
Default: | $instances_path/$image_cache_subdirectory_name/%(image)s.info |
Allows image information files to be stored in non-standard locations
Warning
This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.
Reason: | Image info files are no longer used by the image cache |
---|
remove_unused_resized_minimum_age_seconds
¶Type: | integer |
---|---|
Default: | 3600 |
Unused resized base images younger than this will not be removed
checksum_base_images
¶Type: | boolean |
---|---|
Default: | false |
Write a checksum for files in _base to disk
Warning
This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.
Reason: | The image cache no longer periodically calculates checksums of stored images. Data integrity can be checked at the block or filesystem level. |
---|
checksum_interval_seconds
¶Type: | integer |
---|---|
Default: | 3600 |
How frequently to checksum base images
Warning
This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.
Reason: | The image cache no longer periodically calculates checksums of stored images. Data integrity can be checked at the block or filesystem level. |
---|
volume_clear
¶Type: | string |
---|---|
Default: | zero |
Valid Values: | none, zero, shred |
Method used to wipe ephemeral disks when they are deleted. Only takes effect if LVM is set as backing storage.
Possible values:
Related options:
images_type: must be set to lvm
volume_clear_size
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
Size of area in MiB, counting from the beginning of the allocated volume,
that will be cleared using method set in volume_clear
option.
Possible values:
Related options:
images_type: must be set to lvm
volume_clear: must be set to a value other than none for this option to have any impact
snapshot_compression
¶Type: | boolean |
---|---|
Default: | false |
Enable snapshot compression for qcow2
images.
Note: you can set snapshot_image_format
to qcow2
to force all
snapshots to be in qcow2
format, independently from their original image
type.
Related options:
use_virtio_for_bridges
¶Type: | boolean |
---|---|
Default: | true |
Use virtio for bridge interfaces with KVM/QEMU
volume_use_multipath
¶Type: | boolean |
---|---|
Default: | false |
Use multipath connection of the iSCSI or FC volume
Volumes can be connected as multipath devices in libvirt. This will provide high availability and fault tolerance.
Group | Name |
---|---|
libvirt | iscsi_use_multipath |
num_volume_scan_tries
¶Type: | integer |
---|---|
Default: | 5 |
Number of times to scan given storage protocol to find volume.
Group | Name |
---|---|
libvirt | num_iscsi_scan_tries |
num_aoe_discover_tries
¶Type: | integer |
---|---|
Default: | 3 |
Number of times to rediscover AoE target to find volume.
Nova provides support for block storage attaching to hosts via AOE (ATA over Ethernet). This option allows the user to specify the maximum number of retry attempts that can be made to discover the AoE device.
iscsi_iface
¶Type: | string |
---|---|
Default: | <None> |
The iSCSI transport iface to use to connect to target in case offload support is desired.
Default format is of the form <transport_name>.<hwaddress> where <transport_name> is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs) and <hwaddress> is the MAC address of the interface and can be generated via the iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be provided here with the actual transport name.
Group | Name |
---|---|
libvirt | iscsi_transport |
num_iser_scan_tries
¶Type: | integer |
---|---|
Default: | 5 |
Number of times to scan iSER target to find volume.
iSER is a server network protocol that extends iSCSI protocol to use Remote Direct Memory Access (RDMA). This option allows the user to specify the maximum number of scan attempts that can be made to find iSER volume.
iser_use_multipath
¶Type: | boolean |
---|---|
Default: | false |
Use multipath connection of the iSER volume.
iSER volumes can be connected as multipath devices. This will provide high availability and fault tolerance.
rbd_user
¶Type: | string |
---|---|
Default: | <None> |
The RADOS client name for accessing rbd(RADOS Block Devices) volumes.
Libvirt will refer to this user when connecting and authenticating with the Ceph RBD server.
rbd_secret_uuid
¶Type: | string |
---|---|
Default: | <None> |
The libvirt UUID of the secret for the rbd_user volumes.
nfs_mount_point_base
¶Type: | string |
---|---|
Default: | $state_path/mnt |
Directory where the NFS volume is mounted on the compute node. The default is 'mnt' directory of the location where nova's Python module is installed.
NFS provides shared storage for the OpenStack Block Storage service.
Possible values:
nfs_mount_options
¶Type: | string |
---|---|
Default: | <None> |
Mount options passed to the NFS client. See section of the nfs man page for details.
Mount options controls the way the filesystem is mounted and how the NFS client behaves when accessing files on this mount point.
Possible values:
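A hedged sketch for NFS-backed volumes; the mount options shown are placeholders and must match what the NFS server actually supports:
[libvirt]
nfs_mount_point_base = $state_path/mnt
# Placeholder options; see the nfs man page for the full set.
nfs_mount_options = vers=3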
quobyte_mount_point_base
¶Type: | string |
---|---|
Default: | $state_path/mnt |
Directory where the Quobyte volume is mounted on the compute node.
Nova supports Quobyte volume driver that enables storing Block Storage service volumes on a Quobyte storage back end. This Option specifies the path of the directory where Quobyte volume is mounted.
Possible values:
quobyte_client_cfg
¶Type: | string |
---|---|
Default: | <None> |
Path to a Quobyte Client configuration file.
smbfs_mount_point_base
¶Type: | string |
---|---|
Default: | $state_path/mnt |
Directory where the SMBFS shares are mounted on the compute node.
smbfs_mount_options
¶Type: | string |
---|---|
Default: | '' |
Mount options passed to the SMBFS client.
Provide SMBFS options as a single string containing all parameters.
See mount.cifs man page for details. Note that the libvirt-qemu uid
and gid
must be specified.
remote_filesystem_transport
¶Type: | string |
---|---|
Default: | ssh |
Valid Values: | ssh, rsync |
libvirt's transport method for remote file operations.
Because libvirt cannot use RPC to copy files over the network to/from other compute nodes, another method must be used for:
vzstorage_mount_point_base
¶Type: | string |
---|---|
Default: | $state_path/mnt |
Directory where the Virtuozzo Storage clusters are mounted on the compute node.
This option defines a non-standard mountpoint for the Vzstorage cluster.
Related options:
vzstorage_mount_user
¶Type: | string |
---|---|
Default: | stack |
Mount owner user name.
This option defines the owner user of Vzstorage cluster mountpoint.
Related options:
vzstorage_mount_group
¶Type: | string |
---|---|
Default: | qemu |
Mount owner group name.
This option defines the owner group of Vzstorage cluster mountpoint.
Related options:
vzstorage_mount_perms
¶Type: | string |
---|---|
Default: | 0770 |
Mount access mode.
This option defines the access bits of Vzstorage cluster mountpoint, in the format similar to one of chmod(1) utility, like this: 0770. It consists of one to four digits ranging from 0 to 7, with missing lead digits assumed to be 0's.
Related options:
vzstorage_log_path
¶Type: | string |
---|---|
Default: | /var/log/vstorage/%(cluster_name)s/nova.log.gz |
Path to vzstorage client log.
This option defines the log of cluster operations, it should include "%(cluster_name)s" template to separate logs from multiple shares.
Related options:
vzstorage_cache_path
¶Type: | string |
---|---|
Default: | <None> |
Path to the SSD cache file.
You can attach an SSD drive to a client and configure the drive to store a local cache of frequently accessed data. By having a local cache on a client's SSD drive, you can increase the overall cluster performance by up to 10 or more times. WARNING! There are a lot of SSD models which are not server grade and may lose an arbitrary set of data changes on power loss. Such SSDs should not be used in Vstorage and are dangerous as they may lead to data corruption and inconsistencies. Please consult the manual on which SSD models are known to be safe or verify it using the vstorage-hwflush-check(1) utility.
This option defines the path which should include "%(cluster_name)s" template to separate caches from multiple shares.
Related options:
vzstorage_mount_opts
¶Type: | list |
---|---|
Default: | '' |
Extra mount options for pstorage-mount
For full description of them, see https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html Format is a python string representation of arguments list, like: "['-v', '-R', '500']" Shouldn't include -c, -l, -C, -u, -g and -m as those have explicit vzstorage_* options.
Related options:
host
¶Type: | string |
---|---|
Default: | 127.0.0.1 |
Host to locate redis.
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
port
¶Type: | port number |
---|---|
Default: | 6379 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Use this port to connect to redis host.
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
password
¶Type: | string |
---|---|
Default: | '' |
Password for Redis server (optional).
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
sentinel_hosts
¶Type: | list |
---|---|
Default: | '' |
List of Redis Sentinel hosts (fault tolerance mode), e.g., [host:port, host1:port ... ]
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
sentinel_group_name
¶Type: | string |
---|---|
Default: | oslo-messaging-zeromq |
Redis replica set name.
wait_timeout
¶Type: | integer |
---|---|
Default: | 2000 |
Time in ms to wait between connection attempts.
check_timeout
¶Type: | integer |
---|---|
Default: | 20000 |
Time in ms to wait before the transaction is killed.
socket_timeout
¶Type: | integer |
---|---|
Default: | 10000 |
Timeout in ms on blocking socket operations.
Configuration options for metrics. Options under this group allow adjusting how the values assigned to metrics are calculated.
weight_multiplier
¶Type: | floating point |
---|---|
Default: | 1.0 |
When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows:
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Possible values:
Related options:
weight_setting
¶Type: | list |
---|---|
Default: | '' |
This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more 'name=ratio' pairs, separated by commas, where 'name' is the name of the metric to be weighed, and 'ratio' is the relative weight for that metric.
Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the 'weight_of_unavailable' option.
As an example, let's consider the case where this option is set to:
name1=1.0, name2=-1.3
The final weight will be:
(name1.value * 1.0) + (name2.value * -1.3)
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Possible values:
Related options:
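Using the example from the description, the metrics weigher could be configured as follows; the [metrics] group name is an assumption based on this section:
[metrics]
weight_multiplier = 1.0
weight_setting = name1=1.0, name2=-1.3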
required
¶Type: | boolean |
---|---|
Default: | true |
This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing.
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Possible values:
Related options:
weight_of_unavailable
¶Type: | floating point |
---|---|
Default: | -10000.0 |
When any of the following conditions are met, this value will be used in place of any actual metric value:
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
Possible values:
Related options:
The Nova compute node uses WebMKS, a desktop sharing protocol, to provide instance console access to VMs created by VMware hypervisors. Related options: The following options must be set to provide console access: * mksproxy_base_url * enabled
mksproxy_base_url
¶Type: | URI |
---|---|
Default: | http://127.0.0.1:6090/ |
Location of MKS web console proxy
The URL in the response points to a WebMKS proxy which starts proxying between the client and the corresponding vCenter server where the instance runs. In order to use the web based console access, the WebMKS proxy should be installed and configured.
Possible values:
http://host:port/
or
https://host:port/
enabled
¶Type: | boolean |
---|---|
Default: | false |
Enables graphical console access for virtual machines.
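A hedged sketch enabling the WebMKS console; the [mks] group name is an assumption and the proxy URL is a placeholder:
[mks]
enabled = true
mksproxy_base_url = https://mks-proxy.example.org:6090/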
Configuration options for neutron (network connectivity as a service).
url
¶Type: | URI |
---|---|
Default: | http://127.0.0.1:9696 |
This option specifies the URL for connecting to Neutron.
Possible values:
Warning
This option is deprecated for removal since 17.0.0. Its value may be silently ignored in the future.
Reason: | Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, "url" will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead. |
---|
ovs_bridge
¶Type: | string |
---|---|
Default: | br-int |
Default name for the Open vSwitch integration bridge.
Specifies the name of an integration bridge interface used by OpenvSwitch. This option is only used if Neutron does not specify the OVS bridge name in port binding responses.
default_floating_pool
¶Type: | string |
---|---|
Default: | nova |
Default name for the floating IP pool.
Specifies the name of the floating IP pool used for allocating floating IPs. This option is only used if Neutron does not specify the floating IP pool name in port binding responses.
extension_sync_interval
¶Type: | integer |
---|---|
Default: | 600 |
Minimum Value: | 0 |
Integer value representing the number of seconds to wait before querying Neutron for extensions. After this number of seconds the next time Nova needs to create a resource in Neutron it will requery Neutron for the extensions that it has loaded. Setting value to 0 will refresh the extensions with no wait.
http_retries
¶Type: | integer |
---|---|
Default: | 3 |
Minimum Value: | 0 |
Number of times neutronclient should retry on any failed http call.
0 means the connection is attempted only once. Setting it to any positive integer means that on failure the connection is retried that many times, e.g. setting it to 3 means the total number of connection attempts will be 4.
Possible values:
service_metadata_proxy
¶Type: | boolean |
---|---|
Default: | false |
When set to True, this option indicates that Neutron will be used to proxy metadata requests and resolve instance ids. Otherwise, the instance ID must be passed to the metadata request in the 'X-Instance-ID' header.
Related options:
metadata_proxy_shared_secret
¶Type: | string |
---|---|
Default: | '' |
This option holds the shared secret string used to validate proxy requests to Neutron metadata requests. In order to be used, the 'X-Metadata-Provider-Signature' header must be supplied in the request.
Related options:
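A hedged sketch of the metadata proxy settings; the secret is a placeholder and would typically have to match the secret configured on the Neutron metadata agent side:
[neutron]
service_metadata_proxy = true
metadata_proxy_shared_secret = REPLACE_WITH_SECRET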
cafile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate cert file
keyfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate key file
insecure
¶Type: | boolean |
---|---|
Default: | false |
If set to true, server certificates are not verified for HTTPS connections (insecure).
timeout
¶Type: | integer |
---|---|
Default: | <None> |
Timeout value for http requests
auth_type
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication type to load
Group | Name |
---|---|
neutron | auth_plugin |
auth_section
¶Type: | unknown type |
---|---|
Default: | <None> |
Config Section from which to load plugin specific options
auth_url
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication URL
system_scope
¶Type: | unknown type |
---|---|
Default: | <None> |
Scope for system operations
domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID to scope to
domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name to scope to
project_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Project ID to scope to
project_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Project name to scope to
project_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID containing project
project_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name containing project
trust_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Trust ID
default_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
default_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
user_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User ID
username
¶Type: | unknown type |
---|---|
Default: | <None> |
Username
Group | Name |
---|---|
neutron | user-name |
neutron | user_name |
user_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain id
user_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain name
password
¶Type: | unknown type |
---|---|
Default: | <None> |
User's password
tenant_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Tenant ID
tenant_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Tenant Name
service_type
¶Type: | string |
---|---|
Default: | network |
The default service_type for endpoint URL discovery.
service_name
¶Type: | string |
---|---|
Default: | <None> |
The default service_name for endpoint URL discovery.
valid_interfaces
¶Type: | list |
---|---|
Default: | internal,public |
List of interfaces, in order of preference, for endpoint URL.
region_name
¶Type: | string |
---|---|
Default: | <None> |
The default region_name for endpoint URL discovery.
endpoint_override
¶Type: | string |
---|---|
Default: | <None> |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
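Putting the authentication and discovery options above together, a typical [neutron] section might look like the following sketch (all credentials, URLs and names are placeholders):
[neutron]
auth_type = password
auth_url = http://keystone.example.com/identity
project_name = service
project_domain_name = Default
username = neutron
user_domain_name = Default
password = NEUTRON_PASSWORD
region_name = RegionOne
valid_interfaces = internal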
Most of the actions in Nova which manipulate the system state generate notifications which are posted to the messaging component (e.g. RabbitMQ) and can be consumed by any service outside of OpenStack. More technical details are available at https://docs.openstack.org/nova/latest/reference/notifications.html
notify_on_state_change
¶Type: | string |
---|---|
Default: | <None> |
Valid Values: | <None>, vm_state, vm_and_task_state |
If set, send compute.instance.update notifications on instance state changes.
Please refer to https://docs.openstack.org/nova/latest/reference/notifications.html for additional information on notifications.
Possible values:
None - no notifications
vm_state - notifications are sent with VM state transition information in the old_state and state fields. The old_task_state and new_task_state fields will be set to the current task_state of the instance.
vm_and_task_state - notifications are sent with VM and task state transition information.
Group | Name
---|---|
DEFAULT | notify_on_state_change |
default_level
¶Type: | string |
---|---|
Default: | INFO |
Valid Values: | DEBUG, INFO, WARN, ERROR, CRITICAL |
Default notification level for outgoing notifications.
Group | Name |
---|---|
DEFAULT | default_notification_level |
default_publisher_id
¶Type: | string |
---|---|
Default: | $host |
Default publisher_id for outgoing notifications. If you intend to route notifications using a different publisher, change this value accordingly.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | default_publisher_id |
Warning
This option is deprecated for removal since 17.0.0. Its value may be silently ignored in the future.
Reason: | This option is only used when monkey_patch=True and monkey_patch_modules is configured to specify the legacy notify_decorator. Since the monkey_patch and monkey_patch_modules options are deprecated, this option is also deprecated. |
---|
notification_format
¶Type: | string |
---|---|
Default: | both |
Valid Values: | unversioned, versioned, both |
Specifies which notification format shall be used by nova.
The default value is fine for most deployments and rarely needs to be changed. This value can be set to 'versioned' once the infrastructure moves closer to consuming the newer format of notifications. After this occurs, this option will be removed.
Note that notifications can be completely disabled by setting driver=noop
in the [oslo_messaging_notifications]
group.
Possible values:
unversioned - Only the legacy unversioned notifications are emitted.
versioned - Only the new versioned notifications are emitted.
both - Both the legacy unversioned and the new versioned notifications are emitted. (Default)
The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html
Group | Name |
---|---|
DEFAULT | notification_format |
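As an illustrative sketch (assuming the [notifications] and [oslo_messaging_notifications] groups), a deployment that only consumes the newer payloads could emit versioned notifications only, while a noop driver would disable notifications entirely:
[notifications]
notification_format = versioned
[oslo_messaging_notifications]
# Use 'noop' here instead to disable notifications completely:
driver = messagingv2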
versioned_notifications_topics
¶Type: | list |
---|---|
Default: | versioned_notifications |
Specifies the topics for the versioned notifications issued by nova.
The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth getting a topic for that service. Nova will send a message containing a versioned notification payload to each topic queue in this list.
The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html
bdms_in_notifications
¶Type: | boolean |
---|---|
Default: | false |
If enabled, include block device information in the versioned notification payload. Sending block device information is disabled by default as providing that information can incur some overhead on the system since the information may need to be loaded from the database.
project_id_regex
¶Type: | string |
---|---|
Default: | <None> |
This option is a string representing a regular expression (regex) that matches the project_id as contained in URLs. If not set, it will match normal UUIDs created by keystone.
Possible values:
Warning
This option is deprecated for removal since 13.0.0. Its value may be silently ignored in the future.
Reason: | Recent versions of nova constrain project IDs to hexadecimal characters and dashes. If your installation uses IDs outside of this range, you should use this option to provide your own regex and give you time to migrate offending projects to valid IDs before the next release. |
---|
disable_process_locking
¶Type: | boolean |
---|---|
Default: | false |
Enables or disables inter-process locks.
Group | Name |
---|---|
DEFAULT | disable_process_locking |
lock_path
¶Type: | string |
---|---|
Default: | <None> |
Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set.
Group | Name |
---|---|
DEFAULT | lock_path |
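For example, a common sketch (assuming the [oslo_concurrency] group; the directory is a placeholder and should be writable only by the user running the nova services):
[oslo_concurrency]
lock_path = /var/lib/nova/tmp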
container_name
¶Type: | string |
---|---|
Default: | <None> |
Name for the AMQP container. Must be globally unique. Defaults to a generated UUID.
Group | Name |
---|---|
amqp1 | container_name |
idle_timeout
¶Type: | integer |
---|---|
Default: | 0 |
Timeout for inactive connections (in seconds)
Group | Name |
---|---|
amqp1 | idle_timeout |
ssl
¶Type: | boolean |
---|---|
Default: | false |
Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate.
ssl_ca_file
¶Type: | string |
---|---|
Default: | '' |
CA certificate PEM file used to verify the server's certificate
Group | Name |
---|---|
amqp1 | ssl_ca_file |
ssl_cert_file
¶Type: | string |
---|---|
Default: | '' |
Self-identifying certificate PEM file for client authentication
Group | Name |
---|---|
amqp1 | ssl_cert_file |
ssl_key_file
¶Type: | string |
---|---|
Default: | '' |
Private key PEM file used to sign ssl_cert_file certificate (optional)
Group | Name |
---|---|
amqp1 | ssl_key_file |
ssl_key_password
¶Type: | string |
---|---|
Default: | <None> |
Password for decrypting ssl_key_file (if encrypted)
Group | Name |
---|---|
amqp1 | ssl_key_password |
ssl_verify_vhost
¶Type: | boolean |
---|---|
Default: | false |
By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name.
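A hedged sketch of an SSL-enabled AMQP 1.0 client using the options above (assuming the [oslo_messaging_amqp] group; all file paths and the password are placeholders):
[oslo_messaging_amqp]
ssl = true
ssl_ca_file = /etc/pki/tls/certs/ca-bundle.crt
ssl_cert_file = /etc/pki/tls/certs/messaging-client.pem
ssl_key_file = /etc/pki/tls/private/messaging-client.key
ssl_key_password = KEY_PASSWORD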
allow_insecure_clients
¶Type: | boolean |
---|---|
Default: | false |
Accept clients using either SSL or plain TCP
Group | Name |
---|---|
amqp1 | allow_insecure_clients |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Not applicable - not a SSL server |
---|
sasl_mechanisms
¶Type: | string |
---|---|
Default: | '' |
Space separated list of acceptable SASL mechanisms
Group | Name |
---|---|
amqp1 | sasl_mechanisms |
sasl_config_dir
¶Type: | string |
---|---|
Default: | '' |
Path to directory that contains the SASL configuration
Group | Name |
---|---|
amqp1 | sasl_config_dir |
sasl_config_name
¶Type: | string |
---|---|
Default: | '' |
Name of configuration file (without .conf suffix)
Group | Name |
---|---|
amqp1 | sasl_config_name |
sasl_default_realm
¶Type: | string |
---|---|
Default: | '' |
SASL realm to use if no realm present in username
username
¶Type: | string |
---|---|
Default: | '' |
User name for message broker authentication
Group | Name |
---|---|
amqp1 | username |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Should use configuration option transport_url to provide the username. |
---|
password
¶Type: | string |
---|---|
Default: | '' |
Password for message broker authentication
Group | Name |
---|---|
amqp1 | password |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Should use configuration option transport_url to provide the password. |
---|
connection_retry_interval
¶Type: | integer |
---|---|
Default: | 1 |
Minimum Value: | 1 |
Seconds to pause before attempting to re-connect.
connection_retry_backoff
¶Type: | integer |
---|---|
Default: | 2 |
Minimum Value: | 0 |
Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt.
connection_retry_interval_max
¶Type: | integer |
---|---|
Default: | 30 |
Minimum Value: | 1 |
Maximum limit for connection_retry_interval + connection_retry_backoff
link_retry_delay
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 1 |
Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error.
default_reply_retry
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | -1 |
The maximum number of attempts to re-send a reply message which failed due to a recoverable error.
default_reply_timeout
¶Type: | integer |
---|---|
Default: | 30 |
Minimum Value: | 5 |
The deadline for an rpc reply message delivery.
default_send_timeout
¶Type: | integer |
---|---|
Default: | 30 |
Minimum Value: | 5 |
The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry.
default_notify_timeout
¶Type: | integer |
---|---|
Default: | 30 |
Minimum Value: | 5 |
The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry.
default_sender_link_timeout
¶Type: | integer |
---|---|
Default: | 600 |
Minimum Value: | 1 |
The duration to schedule a purge of idle sender links. Detach link after expiry.
addressing_mode
¶Type: | string |
---|---|
Default: | dynamic |
Indicates the addressing mode used by the driver. Permitted values:
'legacy' - use legacy non-routable addressing
'routable' - use routable addresses
'dynamic' - use legacy addresses if the message bus does not support routing, otherwise use routable addressing
pseudo_vhost
¶Type: | boolean |
---|---|
Default: | true |
Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private 'subnet' per virtual host. Set to False if the message bus supports virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative as the name of the virtual host.
server_request_prefix
¶Type: | string |
---|---|
Default: | exclusive |
address prefix used when sending to a specific server
Group | Name |
---|---|
amqp1 | server_request_prefix |
broadcast_prefix
¶Type: | string |
---|---|
Default: | broadcast |
address prefix used when broadcasting to all servers
Group | Name |
---|---|
amqp1 | broadcast_prefix |
group_request_prefix
¶Type: | string |
---|---|
Default: | unicast |
address prefix when sending to any server in group
Group | Name |
---|---|
amqp1 | group_request_prefix |
rpc_address_prefix
¶Type: | string |
---|---|
Default: | openstack.org/om/rpc |
Address prefix for all generated RPC addresses
notify_address_prefix
¶Type: | string |
---|---|
Default: | openstack.org/om/notify |
Address prefix for all generated Notification addresses
multicast_address
¶Type: | string |
---|---|
Default: | multicast |
Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages.
unicast_address
¶Type: | string |
---|---|
Default: | unicast |
Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination.
anycast_address
¶Type: | string |
---|---|
Default: | anycast |
Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers.
default_notification_exchange
¶Type: | string |
---|---|
Default: | <None> |
Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else 'notify'
default_rpc_exchange
¶Type: | string |
---|---|
Default: | <None> |
Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else 'rpc'
reply_link_credit
¶Type: | integer |
---|---|
Default: | 200 |
Minimum Value: | 1 |
Window size for incoming RPC Reply messages.
rpc_server_credit
¶Type: | integer |
---|---|
Default: | 100 |
Minimum Value: | 1 |
Window size for incoming RPC Request messages
notify_server_credit
¶Type: | integer |
---|---|
Default: | 100 |
Minimum Value: | 1 |
Window size for incoming Notification messages
pre_settled
¶Type: | multi-valued |
---|---|
Default: | rpc-cast |
Default: | rpc-reply |
Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values:
'rpc-call' - send RPC Calls pre-settled
'rpc-reply' - send RPC Replies pre-settled
'rpc-cast' - send RPC Casts pre-settled
'notify' - send Notifications pre-settled
kafka_default_host
¶Type: | string |
---|---|
Default: | localhost |
Default Kafka broker Host
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
kafka_default_port
¶Type: | port number |
---|---|
Default: | 9092 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Default Kafka broker Port
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
kafka_max_fetch_bytes
¶Type: | integer |
---|---|
Default: | 1048576 |
Max fetch bytes of Kafka consumer
kafka_consumer_timeout
¶Type: | floating point |
---|---|
Default: | 1.0 |
Default timeout(s) for Kafka consumers
pool_size
¶Type: | integer |
---|---|
Default: | 10 |
Pool Size for Kafka Consumers
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Driver no longer uses connection pool. |
---|
conn_pool_min_size
¶Type: | integer |
---|---|
Default: | 2 |
The pool size limit for connections expiration policy
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Driver no longer uses connection pool. |
---|
conn_pool_ttl
¶Type: | integer |
---|---|
Default: | 1200 |
The time-to-live in sec of idle connections in the pool
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Driver no longer uses connection pool. |
---|
consumer_group
¶Type: | string |
---|---|
Default: | oslo_messaging_consumer |
Group id for Kafka consumer. Consumers in one group will coordinate message consumption
producer_batch_timeout
¶Type: | floating point |
---|---|
Default: | 0.0 |
Upper bound on the delay for KafkaProducer batching in seconds
producer_batch_size
¶Type: | integer |
---|---|
Default: | 16384 |
Size of batch for the producer async send
driver
¶Type: | multi-valued |
---|---|
Default: | '' |
The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop
Group | Name |
---|---|
DEFAULT | notification_driver |
transport_url
¶Type: | string |
---|---|
Default: | <None> |
A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC.
Group | Name |
---|---|
DEFAULT | notification_transport_url |
topics
¶Type: | list |
---|---|
Default: | notifications |
AMQP topic used for OpenStack notifications.
Group | Name |
---|---|
rpc_notifier2 | topics |
DEFAULT | notification_topics |
retry
¶Type: | integer |
---|---|
Default: | -1 |
The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
amqp_durable_queues
¶Type: | boolean |
---|---|
Default: | false |
Use durable queues in AMQP.
Group | Name |
---|---|
DEFAULT | amqp_durable_queues |
DEFAULT | rabbit_durable_queues |
amqp_auto_delete
¶Type: | boolean |
---|---|
Default: | false |
Auto-delete queues in AMQP.
Group | Name |
---|---|
DEFAULT | amqp_auto_delete |
ssl
¶Type: | boolean |
---|---|
Default: | <None> |
Enable SSL
ssl_version
¶Type: | string |
---|---|
Default: | '' |
SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
Group | Name |
---|---|
oslo_messaging_rabbit | kombu_ssl_version |
ssl_key_file
¶Type: | string |
---|---|
Default: | '' |
SSL key file (valid only if SSL enabled).
Group | Name |
---|---|
oslo_messaging_rabbit | kombu_ssl_keyfile |
ssl_cert_file
¶Type: | string |
---|---|
Default: | '' |
SSL cert file (valid only if SSL enabled).
Group | Name |
---|---|
oslo_messaging_rabbit | kombu_ssl_certfile |
ssl_ca_file
¶Type: | string |
---|---|
Default: | '' |
SSL certification authority file (valid only if SSL enabled).
Group | Name |
---|---|
oslo_messaging_rabbit | kombu_ssl_ca_certs |
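A comparable sketch for TLS towards RabbitMQ using the options above (assuming the [oslo_messaging_rabbit] group; paths are placeholders):
[oslo_messaging_rabbit]
ssl = true
ssl_ca_file = /etc/pki/tls/certs/ca-bundle.crt
ssl_cert_file = /etc/pki/tls/certs/rabbit-client.pem
ssl_key_file = /etc/pki/tls/private/rabbit-client.key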
kombu_reconnect_delay
¶Type: | floating point |
---|---|
Default: | 1.0 |
How long to wait before reconnecting in response to an AMQP consumer cancel notification.
Group | Name |
---|---|
DEFAULT | kombu_reconnect_delay |
kombu_compression
¶Type: | string |
---|---|
Default: | <None> |
EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions.
kombu_missing_consumer_retry_timeout
¶Type: | integer |
---|---|
Default: | 60 |
How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout.
Group | Name |
---|---|
oslo_messaging_rabbit | kombu_reconnect_timeout |
kombu_failover_strategy
¶Type: | string |
---|---|
Default: | round-robin |
Valid Values: | round-robin, shuffle |
Determines how the next RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.
rabbit_host
¶Type: | string |
---|---|
Default: | localhost |
The RabbitMQ broker address where a single node is used.
Group | Name |
---|---|
DEFAULT | rabbit_host |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
rabbit_port
¶Type: | port number |
---|---|
Default: | 5672 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
The RabbitMQ broker port where a single node is used.
Group | Name |
---|---|
DEFAULT | rabbit_port |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
rabbit_hosts
¶Type: | list |
---|---|
Default: | $rabbit_host:$rabbit_port |
RabbitMQ HA cluster host:port pairs.
Group | Name |
---|---|
DEFAULT | rabbit_hosts |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
rabbit_userid
¶Type: | string |
---|---|
Default: | guest |
The RabbitMQ userid.
Group | Name |
---|---|
DEFAULT | rabbit_userid |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
rabbit_password
¶Type: | string |
---|---|
Default: | guest |
The RabbitMQ password.
Group | Name |
---|---|
DEFAULT | rabbit_password |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
rabbit_login_method
¶Type: | string |
---|---|
Default: | AMQPLAIN |
Valid Values: | PLAIN, AMQPLAIN, RABBIT-CR-DEMO |
The RabbitMQ login method.
Group | Name |
---|---|
DEFAULT | rabbit_login_method |
rabbit_virtual_host
¶Type: | string |
---|---|
Default: | / |
The RabbitMQ virtual host.
Group | Name |
---|---|
DEFAULT | rabbit_virtual_host |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
Reason: | Replaced by [DEFAULT]/transport_url |
---|
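The deprecation notes above all point to [DEFAULT]/transport_url, which carries the broker host, port, credentials and virtual host in a single URL; a hedged sketch (every value is a placeholder, and multiple host:port pairs may be comma-separated for a cluster):
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASSWORD@rabbit.example.com:5672/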
rabbit_retry_interval
¶Type: | integer |
---|---|
Default: | 1 |
How frequently to retry connecting with RabbitMQ.
rabbit_retry_backoff
¶Type: | integer |
---|---|
Default: | 2 |
How long to backoff for between retries when connecting to RabbitMQ.
Group | Name |
---|---|
DEFAULT | rabbit_retry_backoff |
rabbit_interval_max
¶Type: | integer |
---|---|
Default: | 30 |
Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
rabbit_max_retries
¶Type: | integer |
---|---|
Default: | 0 |
Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
Group | Name |
---|---|
DEFAULT | rabbit_max_retries |
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
rabbit_ha_queues
¶Type: | boolean |
---|---|
Default: | false |
Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA '^(?!amq.).*' '{"ha-mode": "all"}' "
Group | Name |
---|---|
DEFAULT | rabbit_ha_queues |
rabbit_transient_queues_ttl
¶Type: | integer |
---|---|
Default: | 1800 |
Minimum Value: | 1 |
Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues.
rabbit_qos_prefetch_count
¶Type: | integer |
---|---|
Default: | 0 |
Specifies the number of messages to prefetch. Setting to zero allows unlimited messages.
heartbeat_timeout_threshold
¶Type: | integer |
---|---|
Default: | 60 |
Number of seconds after which the Rabbit broker is considered down if the heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
heartbeat_rate
¶Type: | integer |
---|---|
Default: | 2 |
How many times during the heartbeat_timeout_threshold the heartbeat is checked.
fake_rabbit
¶Type: | boolean |
---|---|
Default: | false |
Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
Group | Name |
---|---|
DEFAULT | fake_rabbit |
channel_max
¶Type: | integer |
---|---|
Default: | <None> |
Maximum number of channels to allow
frame_max
¶Type: | integer |
---|---|
Default: | <None> |
The maximum byte size for an AMQP frame
heartbeat_interval
¶Type: | integer |
---|---|
Default: | 3 |
How often to send heartbeats for consumer's connections
ssl_options
¶Type: | dict |
---|---|
Default: | <None> |
Arguments passed to ssl.wrap_socket
socket_timeout
¶Type: | floating point |
---|---|
Default: | 0.25 |
Set socket timeout in seconds for connection's socket
tcp_user_timeout
¶Type: | floating point |
---|---|
Default: | 0.25 |
Set TCP_USER_TIMEOUT in seconds for connection's socket
host_connection_reconnect_delay
¶Type: | floating point |
---|---|
Default: | 0.25 |
Set the delay for reconnecting to a host that has a connection error.
connection_factory
¶Type: | string |
---|---|
Default: | single |
Valid Values: | new, single, read_write |
Connection factory implementation
pool_max_size
¶Type: | integer |
---|---|
Default: | 30 |
Maximum number of connections to keep queued.
pool_max_overflow
¶Type: | integer |
---|---|
Default: | 0 |
Maximum number of connections to create above pool_max_size.
pool_timeout
¶Type: | integer |
---|---|
Default: | 30 |
Default number of seconds to wait for a connection to become available.
pool_recycle
¶Type: | integer |
---|---|
Default: | 600 |
Lifetime of a connection (since creation) in seconds or None for no recycling. Expired connections are closed on acquire.
pool_stale
¶Type: | integer |
---|---|
Default: | 60 |
Threshold at which inactive (since release) connections are considered stale in seconds or None for no staleness. Stale connections are closed on acquire.
default_serializer_type
¶Type: | string |
---|---|
Default: | json |
Valid Values: | json, msgpack |
Default serialization mechanism for serializing/deserializing outgoing/incoming messages
notification_persistence
¶Type: | boolean |
---|---|
Default: | false |
Persist notification messages.
default_notification_exchange
¶Type: | string |
---|---|
Default: | ${control_exchange}_notification |
Exchange name for sending notifications
notification_listener_prefetch_count
¶Type: | integer |
---|---|
Default: | 100 |
Max number of not acknowledged messages which RabbitMQ can send to the notification listener.
default_notification_retry_attempts
¶Type: | integer |
---|---|
Default: | -1 |
Reconnecting retry count in case of connectivity problem during sending notification, -1 means infinite retry.
notification_retry_delay
¶Type: | floating point |
---|---|
Default: | 0.25 |
Reconnecting retry delay in case of connectivity problem during sending notification message
rpc_queue_expiration
¶Type: | integer |
---|---|
Default: | 60 |
Time to live for rpc queues without consumers in seconds.
default_rpc_exchange
¶Type: | string |
---|---|
Default: | ${control_exchange}_rpc |
Exchange name for sending RPC messages
rpc_reply_exchange
¶Type: | string |
---|---|
Default: | ${control_exchange}_rpc_reply |
Exchange name for receiving RPC replies
rpc_listener_prefetch_count
¶Type: | integer |
---|---|
Default: | 100 |
Max number of not acknowledged messages which RabbitMQ can send to the rpc listener.
rpc_reply_listener_prefetch_count
¶Type: | integer |
---|---|
Default: | 100 |
Max number of not acknowledged messages which RabbitMQ can send to the rpc reply listener.
rpc_reply_retry_attempts
¶Type: | integer |
---|---|
Default: | -1 |
Reconnecting retry count in case of connectivity problem during sending reply. -1 means infinite retry during rpc_timeout
rpc_reply_retry_delay
¶Type: | floating point |
---|---|
Default: | 0.25 |
Reconnecting retry delay in case of connectivity problem during sending reply.
default_rpc_retry_attempts
¶Type: | integer |
---|---|
Default: | -1 |
Reconnecting retry count in case of connectivity problem during sending RPC message, -1 means infinite retry. If the actual number of retry attempts is not 0, the RPC request could be processed more than once.
rpc_retry_delay
¶Type: | floating point |
---|---|
Default: | 0.25 |
Reconnecting retry delay in case of connectivity problem during sending RPC message
rpc_zmq_bind_address
¶Type: | string |
---|---|
Default: | * |
ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. The "host" option should point or resolve to this address.
Group | Name |
---|---|
DEFAULT | rpc_zmq_bind_address |
rpc_zmq_matchmaker
¶Type: | string |
---|---|
Default: | redis |
Valid Values: | redis, sentinel, dummy |
MatchMaker driver.
Group | Name |
---|---|
DEFAULT | rpc_zmq_matchmaker |
rpc_zmq_contexts
¶Type: | integer |
---|---|
Default: | 1 |
Number of ZeroMQ contexts, defaults to 1.
Group | Name |
---|---|
DEFAULT | rpc_zmq_contexts |
rpc_zmq_topic_backlog
¶Type: | integer |
---|---|
Default: | <None> |
Maximum number of ingress messages to locally buffer per topic. Default is unlimited.
Group | Name |
---|---|
DEFAULT | rpc_zmq_topic_backlog |
rpc_zmq_ipc_dir
¶Type: | string |
---|---|
Default: | /var/run/openstack |
Directory for holding IPC sockets.
Group | Name |
---|---|
DEFAULT | rpc_zmq_ipc_dir |
rpc_zmq_host
¶Type: | string |
---|---|
Default: | localhost |
Name of this node. Must be a valid hostname, FQDN, or IP address. Must match "host" option, if running Nova.
Group | Name |
---|---|
DEFAULT | rpc_zmq_host |
zmq_linger
¶Type: | integer |
---|---|
Default: | -1 |
Number of seconds to wait before all pending messages will be sent after closing a socket. The default value of -1 specifies an infinite linger period. The value of 0 specifies no linger period. Pending messages shall be discarded immediately when the socket is closed. Positive values specify an upper bound for the linger period.
Group | Name |
---|---|
DEFAULT | rpc_cast_timeout |
rpc_poll_timeout
¶Type: | integer |
---|---|
Default: | 1 |
The default number of seconds that poll should wait. Poll raises a timeout exception when the timeout expires.
Group | Name |
---|---|
DEFAULT | rpc_poll_timeout |
zmq_target_expire
¶Type: | integer |
---|---|
Default: | 300 |
Expiration timeout in seconds of a name service record about existing target ( < 0 means no timeout).
Group | Name |
---|---|
DEFAULT | zmq_target_expire |
zmq_target_update
¶Type: | integer |
---|---|
Default: | 180 |
Update period in seconds of a name service record about existing target.
Group | Name |
---|---|
DEFAULT | zmq_target_update |
use_pub_sub
¶Type: | boolean |
---|---|
Default: | false |
Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
Group | Name |
---|---|
DEFAULT | use_pub_sub |
use_router_proxy
¶Type: | boolean |
---|---|
Default: | false |
Use ROUTER remote proxy.
Group | Name |
---|---|
DEFAULT | use_router_proxy |
use_dynamic_connections
¶Type: | boolean |
---|---|
Default: | false |
This option makes direct connections dynamic or static. It makes sense only with use_router_proxy=False which means to use direct connections for direct message types (ignored otherwise).
zmq_failover_connections
¶Type: | integer |
---|---|
Default: | 2 |
How many additional connections to a host will be made for failover reasons. This option only applies in dynamic connections mode.
rpc_zmq_min_port
¶Type: | port number |
---|---|
Default: | 49153 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Minimal port number for random ports range.
Group | Name |
---|---|
DEFAULT | rpc_zmq_min_port |
rpc_zmq_max_port
¶Type: | integer |
---|---|
Default: | 65536 |
Minimum Value: | 1 |
Maximum Value: | 65536 |
Maximal port number for random ports range.
Group | Name |
---|---|
DEFAULT | rpc_zmq_max_port |
rpc_zmq_bind_port_retries
¶Type: | integer |
---|---|
Default: | 100 |
Number of retries to find a free port number before failing with ZMQBindError.
Group | Name |
---|---|
DEFAULT | rpc_zmq_bind_port_retries |
rpc_zmq_serialization
¶Type: | string |
---|---|
Default: | json |
Valid Values: | json, msgpack |
Default serialization mechanism for serializing/deserializing outgoing/incoming messages
Group | Name |
---|---|
DEFAULT | rpc_zmq_serialization |
zmq_immediate
¶Type: | boolean |
---|---|
Default: | true |
This option configures round-robin mode in the zmq socket. True means the queue is not kept when the server side disconnects. False means the queue and messages are kept even if the server is disconnected; when the server reappears, all accumulated messages are sent to it.
zmq_tcp_keepalive
¶Type: | integer |
---|---|
Default: | -1 |
Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any other negative value) means to skip any overrides and leave it to OS default; 0 and 1 (or any other positive value) mean to disable and enable the option respectively.
zmq_tcp_keepalive_idle
¶Type: | integer |
---|---|
Default: | -1 |
The duration between two keepalive transmissions in idle condition. The unit is platform dependent, for example, seconds in Linux, milliseconds in Windows etc. The default value of -1 (or any other negative value and 0) means to skip any overrides and leave it to OS default.
zmq_tcp_keepalive_cnt
¶Type: | integer |
---|---|
Default: | -1 |
The number of retransmissions to be carried out before declaring that remote end is not available. The default value of -1 (or any other negative value and 0) means to skip any overrides and leave it to OS default.
zmq_tcp_keepalive_intvl
¶Type: | integer |
---|---|
Default: | -1 |
The duration between two successive keepalive retransmissions, if acknowledgement to the previous keepalive transmission is not received. The unit is platform dependent, for example, seconds in Linux, milliseconds in Windows etc. The default value of -1 (or any other negative value and 0) means to skip any overrides and leave it to OS default.
rpc_thread_pool_size
¶Type: | integer |
---|---|
Default: | 100 |
Maximum number of (green) threads to work concurrently.
rpc_message_ttl
¶Type: | integer |
---|---|
Default: | 300 |
Expiration timeout in seconds of a sent/received message after which it is not tracked anymore by a client/server.
rpc_use_acks
¶Type: | boolean |
---|---|
Default: | false |
Wait for message acknowledgements from receivers. This mechanism works only via proxy without PUB/SUB.
rpc_ack_timeout_base
¶Type: | integer |
---|---|
Default: | 15 |
Number of seconds to wait for an ack from a cast/call. After each retry attempt this timeout is multiplied by some specified multiplier.
rpc_ack_timeout_multiplier
¶Type: | integer |
---|---|
Default: | 2 |
Number to multiply base ack timeout by after each retry attempt.
rpc_retry_attempts
¶Type: | integer |
---|---|
Default: | 3 |
Default number of message sending attempts in case any problems occur: a positive value N means at most N retries, 0 means no retries, None or -1 (or any other negative value) means retry forever. This option is used only if acknowledgments are enabled.
subscribe_on
¶Type: | list |
---|---|
Default: | '' |
List of publisher hosts SubConsumer can subscribe on. This option has higher priority than the default publishers list taken from the matchmaker.
max_request_body_size
¶Type: | integer |
---|---|
Default: | 114688 |
The maximum body size for each request, in bytes.
Group | Name |
---|---|
DEFAULT | osapi_max_request_body_size |
DEFAULT | max_request_body_size |
secure_proxy_ssl_header
¶Type: | string |
---|---|
Default: | X-Forwarded-Proto |
The HTTP header that will be used to determine what the original request protocol scheme was, even if it was hidden by an SSL termination proxy.
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
enable_proxy_headers_parsing
¶Type: | boolean |
---|---|
Default: | false |
Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
enforce_scope
¶Type: | boolean |
---|---|
Default: | false |
This option controls whether or not to enforce scope when evaluating policies. If True, the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False, a message will be logged informing operators that policies are being invoked with mismatching scope.
policy_file
¶Type: | string |
---|---|
Default: | policy.json |
The file that defines policies.
Group | Name |
---|---|
DEFAULT | policy_file |
policy_default_rule
¶Type: | string |
---|---|
Default: | default |
Default rule. Enforced when a requested rule is not found.
Group | Name |
---|---|
DEFAULT | policy_default_rule |
policy_dirs
¶Type: | multi-valued |
---|---|
Default: | policy.d |
Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
Group | Name |
---|---|
DEFAULT | policy_dirs |
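For illustration, a sketch of the policy lookup described above (assuming the [oslo_policy] group; file and directory names are placeholders relative to the configuration directory):
[oslo_policy]
policy_file = policy.json
# Additional override files can be dropped into this directory:
policy_dirs = policy.d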
remote_content_type
¶Type: | string |
---|---|
Default: | application/x-www-form-urlencoded |
Valid Values: | application/x-www-form-urlencoded, application/json |
Content Type to send and receive data for REST based policy check
remote_ssl_verify_server_crt
¶Type: | boolean |
---|---|
Default: | false |
server identity verification for REST based policy check
remote_ssl_ca_crt_file
¶Type: | string |
---|---|
Default: | <None> |
Absolute path to ca cert file for REST based policy check
remote_ssl_client_crt_file
¶Type: | string |
---|---|
Default: | <None> |
Absolute path to client cert for REST based policy check
remote_ssl_client_key_file
¶Type: | string |
---|---|
Default: | <None> |
Absolute path to client key file for REST based policy check
alias
¶Type: | multi-valued |
---|---|
Default: | '' |
An alias for a PCI passthrough device requirement.
This allows users to specify the alias in the extra specs for a flavor, without needing to repeat all the PCI property requirements.
Possible Values:
A list of JSON values which describe the aliases. For example:
alias = {
"name": "QuickAssist",
"product_id": "0443",
"vendor_id": "8086",
"device_type": "type-PCI",
"numa_policy": "required"
}
This defines an alias for the Intel QuickAssist card (multi valued). Valid key values are:
name - Name of the PCI alias.
product_id - Product ID of the device in hexadecimal.
vendor_id - Vendor ID of the device in hexadecimal.
device_type - Type of PCI device. Valid values are: type-PCI, type-PF and type-VF.
numa_policy - Required NUMA affinity of device. Valid values are: legacy, preferred and required.
Group | Name |
---|---|
DEFAULT | pci_alias |
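Once an alias like the QuickAssist example above is defined on the controller and compute nodes, it is typically referenced from a flavor extra spec; a hedged usage sketch (the flavor name and device count are placeholders):
openstack flavor set m1.large --property "pci_passthrough:alias"="QuickAssist:1"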
passthrough_whitelist
¶Type: | multi-valued |
---|---|
Default: | '' |
White list of PCI devices available to VMs.
Possible values:
A JSON dictionary which describe a whitelisted PCI device. It should take the following format:
["vendor_id": "<id>",] ["product_id": "<id>",] ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |
"devname": "<name>",]
{"<tag>": "<tag_value>",}
Where '[' indicates zero or one occurrences, '{' indicates zero or multiple occurrences, and '|' mutually exclusive options. Note that any missing fields are automatically wildcarded.
Valid key values are vendor_id, product_id, address, devname, and arbitrary tags (such as physical_network).
The address key supports traditional glob style and regular expression syntax. Valid examples are:
- passthrough_whitelist = {"devname":"eth0",
"physical_network":"physnet"}
passthrough_whitelist = {"address":":0a:00."} passthrough_whitelist = {"address":":0a:00.",
"physical_network":"physnet1"}
- passthrough_whitelist = {"vendor_id":"1137",
"product_id":"0071"}
- passthrough_whitelist = {"vendor_id":"1137",
"product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"}
- passthrough_whitelist = {"address":{"domain": ".*",
"bus": "02", "slot": "01", "function": "[2-7]"},
"physical_network":"physnet1"}
- passthrough_whitelist = {"address":{"domain": ".*",
"bus": "02", "slot": "0[1-2]", "function": ".*"},
"physical_network":"physnet1"}
The following are invalid, as they specify mutually exclusive options:
- passthrough_whitelist = {"devname":"eth0",
"physical_network":"physnet", "address":":0a:00."}
A JSON list of JSON dictionaries corresponding to the above format. For example:
- passthrough_whitelist = [{"product_id":"0001", "vendor_id":"8086"},
{"product_id":"0002", "vendor_id":"8086"}]
Group | Name |
---|---|
DEFAULT | pci_passthrough_whitelist |
os_region_name
¶Type: | string |
---|---|
Default: | <None> |
Region name of this node. This is used when picking the URL in the service catalog.
Possible values:
Warning
This option is deprecated for removal since 17.0.0. Its value may be silently ignored in the future.
Reason: | Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. Use the region_name option instead. |
---|
os_interface
¶Type: | string |
---|---|
Default: | <None> |
Endpoint interface for this node. This is used when picking the URL in the service catalog.
Warning
This option is deprecated for removal since 17.0.0. Its value may be silently ignored in the future.
Reason: | Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. Use the valid_interfaces option instead. |
---|
randomize_allocation_candidates
¶Type: | boolean |
---|---|
Default: | false |
If True, when limiting allocation candidate results, the results will be a random sampling of the full result set. If False, allocation candidates are returned in a deterministic but undefined order. That is, all things being equal, two requests for allocation candidates will return the same results in the same order; but no guarantees are made as to how that order is determined.
cafile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate cert file
keyfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate key file
insecure
¶Type: | boolean |
---|---|
Default: | false |
If set to true, HTTPS connections are not verified.
timeout
¶Type: | integer |
---|---|
Default: | <None> |
Timeout value for http requests
auth_type
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication type to load
Group | Name |
---|---|
placement | auth_plugin |
auth_section
¶Type: | unknown type |
---|---|
Default: | <None> |
Config Section from which to load plugin specific options
auth_url
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication URL
system_scope
¶Type: | unknown type |
---|---|
Default: | <None> |
Scope for system operations
domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID to scope to
domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name to scope to
project_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Project ID to scope to
project_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Project name to scope to
project_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID containing project
project_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name containing project
trust_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Trust ID
default_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
default_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
user_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User ID
username
¶Type: | unknown type |
---|---|
Default: | <None> |
Username
Group | Name |
---|---|
placement | user-name |
placement | user_name |
user_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain id
user_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain name
password
¶Type: | unknown type |
---|---|
Default: | <None> |
User's password
tenant_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Tenant ID
tenant_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Tenant Name
service_type
¶Type: | string |
---|---|
Default: | placement |
The default service_type for endpoint URL discovery.
service_name
¶Type: | string |
---|---|
Default: | <None> |
The default service_name for endpoint URL discovery.
valid_interfaces
¶Type: | list |
---|---|
Default: | internal,public |
List of interfaces, in order of preference, for endpoint URL.
Group | Name |
---|---|
placement | os_interface |
region_name
¶Type: | string |
---|---|
Default: | <None> |
The default region_name for endpoint URL discovery.
Group | Name |
---|---|
placement | os_region_name |
endpoint_override
¶Type: | string |
---|---|
Default: | <None> |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
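Analogous to the neutron example earlier, a typical [placement] section combining these authentication and discovery options might look like the following sketch (all values are placeholders):
[placement]
auth_type = password
auth_url = http://keystone.example.com/identity
project_name = service
project_domain_name = Default
username = placement
user_domain_name = Default
password = PLACEMENT_PASSWORD
region_name = RegionOne
valid_interfaces = internal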
Quota options allow managing quotas in an OpenStack deployment.
instances
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | -1 |
The number of instances allowed per project.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_instances |
cores
¶Type: | integer |
---|---|
Default: | 20 |
Minimum Value: | -1 |
The number of instance cores or vCPUs allowed per project.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_cores |
ram
¶Type: | integer |
---|---|
Default: | 51200 |
Minimum Value: | -1 |
The number of megabytes of instance RAM allowed per project.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_ram |
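For example, to raise the per-project compute quotas described above (assuming the [quota] group; the numbers are arbitrary illustrations, not recommendations):
[quota]
instances = 20
cores = 40
ram = 102400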
floating_ips
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | -1 |
The number of floating IPs allowed per project.
Floating IPs are not allocated to instances by default. Users need to select them from the pool configured by the OpenStack administrator to attach to their instances.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_floating_ips |
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
fixed_ips
¶Type: | integer |
---|---|
Default: | -1 |
Minimum Value: | -1 |
The number of fixed IPs allowed per project.
Unlike floating IPs, fixed IPs are allocated dynamically by the network component when instances boot up. This quota value should be at least the number of instances allowed.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_fixed_ips |
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
metadata_items
¶Type: | integer |
---|---|
Default: | 128 |
Minimum Value: | -1 |
The number of metadata items allowed per instance.
Users can associate metadata with an instance during instance creation. This metadata takes the form of key-value pairs.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_metadata_items |
injected_files
¶Type: | integer |
---|---|
Default: | 5 |
Minimum Value: | -1 |
The number of injected files allowed.
File injection allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted: binary or ZIP files are not accepted. During file injection, any existing files that match specified files are renamed to include a .bak extension appended with a timestamp.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_injected_files |
injected_file_content_bytes
¶Type: | integer |
---|---|
Default: | 10240 |
Minimum Value: | -1 |
The number of bytes allowed per injected file.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_injected_file_content_bytes |
injected_file_path_length
¶Type: | integer |
---|---|
Default: | 255 |
Minimum Value: | -1 |
The maximum allowed injected file path length.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_injected_file_path_length |
security_groups
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | -1 |
The number of security groups per project.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_security_groups |
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
security_group_rules
¶Type: | integer |
---|---|
Default: | 20 |
Minimum Value: | -1 |
The number of security rules per security group.
The associated rules in each security group control the traffic to instances in the group.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_security_group_rules |
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | nova-network is deprecated, as are any related configuration options. |
---|
key_pairs
¶Type: | integer |
---|---|
Default: | 100 |
Minimum Value: | -1 |
The maximum number of key pairs allowed per user.
Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_key_pairs |
server_groups
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | -1 |
The maximum number of server groups per project.
Server groups are used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_server_groups |
server_group_members
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | -1 |
The maximum number of servers per server group.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_server_group_members |
reservation_expire
¶Type: | integer |
---|---|
Default: | 86400 |
The number of seconds until a reservation expires.
This quota represents the time period for invalidating quota reservations.
Group | Name |
---|---|
DEFAULT | reservation_expire |
until_refresh
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
The count of reservations until usage is refreshed.
This defaults to 0 (off) to avoid additional load but it is useful to turn on to help keep quota usage up-to-date and reduce the impact of out of sync usage issues.
Group | Name |
---|---|
DEFAULT | until_refresh |
max_age
¶Type: | integer |
---|---|
Default: | 0 |
Minimum Value: | 0 |
The number of seconds between subsequent usage refreshes.
This defaults to 0 (off) to avoid additional load but it is useful to turn on to help keep quota usage up-to-date and reduce the impact of out of sync usage issues. Note that quotas are not updated on a periodic task, they will update on a new reservation if max_age has passed since the last reservation.
Group | Name |
---|---|
DEFAULT | max_age |
driver
¶Type: | string |
---|---|
Default: | nova.quota.DbQuotaDriver |
The quota enforcer driver.
Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks.
Possible values:
Group | Name |
---|---|
DEFAULT | quota_driver |
Warning
This option is deprecated for removal since 14.0.0. Its value may be silently ignored in the future.
recheck_quota
¶Type: | boolean |
---|---|
Default: | true |
Recheck quota after resource creation to prevent allowing quota to be exceeded.
This defaults to True (recheck quota after resource creation) but can be set to False to avoid additional load if allowing quota to be exceeded because of racing requests is considered acceptable. For example, when set to False, if a user makes highly parallel REST API requests to create servers, it will be possible for them to create more servers than their allowed quota during the race. If their quota is 10 servers, they might be able to create 50 during the burst. After the burst, they will not be able to create any more servers but they will be able to keep their 50 servers until they delete them.
The initial quota check is done before resources are created, so if multiple parallel requests arrive at the same time, all could pass the quota check and create resources, potentially exceeding quota. When recheck_quota is True, quota will be checked a second time after resources have been created and if the resource is over quota, it will be deleted and OverQuota will be raised, usually resulting in a 403 response to the REST API user. This makes it impossible for a user to exceed their quota with the caveat that it will, however, be possible for a REST API user to be rejected with a 403 response in the event of a collision close to reaching their quota limit, even if the user has enough quota available when they made the request.
Options under this group enable and configure Remote Desktop Protocol (RDP) related features. This group is only relevant to Hyper-V users.
enabled
¶Type: | boolean |
---|---|
Default: | false |
Enable Remote Desktop Protocol (RDP) related features.
Hyper-V, unlike the majority of the hypervisors employed on Nova compute nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to provide instance console access. This option enables RDP for graphical console access for virtual machines created by Hyper-V.
Note: RDP should only be enabled on compute nodes that support the Hyper-V virtualization platform.
Related options:
compute_driver: Must be hyperv.
html5_proxy_base_url
¶Type: | URI |
---|---|
Default: | http://127.0.0.1:6083/ |
The URL an end user would use to connect to the RDP HTML5 console proxy. The console proxy service is called with this token-embedded URL and establishes the connection to the proper instance.
An RDP HTML5 console proxy service will need to be configured to listen on the address configured here. Typically the console proxy service would be run on a controller node. The localhost address used as default would only work in a single node environment i.e. devstack.
An RDP HTML5 proxy allows a user to access the text or graphical console of any Windows server or workstation via the web using RDP. RDP HTML5 console proxy services include FreeRDP and wsgate. See https://github.com/FreeRDP/FreeRDP-WebConnect
Possible values:
<scheme>://<ip-address>:<port-number>/
The scheme must be identical to the scheme configured for the RDP HTML5 console proxy service. It is http or https.
The IP address must be identical to the address on which the RDP HTML5 console proxy service is listening.
The port must be identical to the port on which the RDP HTML5 console proxy service is listening.
Related options:
rdp.enabled: Must be set to True for html5_proxy_base_url to be effective.
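Combining the two related options, a sketch for a Hyper-V compute node (the proxy URL is a placeholder):
[DEFAULT]
compute_driver = hyperv.HyperVDriver
[rdp]
enabled = true
# Placeholder URL of the RDP HTML5 console proxy:
html5_proxy_base_url = https://controller.example.com:6083/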
host
¶Type: | host address |
---|---|
Default: | <None> |
Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host.
Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk.
Possible Values:
- IP address of a remote host as a command line parameter to a nova service. For Example:
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address where the debugger is running>
port
¶Type: | port number |
---|---|
Default: | <None> |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Debug port to connect to. This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on a different host.
Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk.
Possible Values:
- Port number you want to use as a command line parameter to a nova service. For Example:
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address where the debugger is running> --remote_debug-port <port the debugger is listening on>
host_manager
¶Type: | string |
---|---|
Default: | host_manager |
Valid Values: | host_manager, ironic_host_manager |
The scheduler host manager to use.
The host manager manages the in-memory picture of the hosts that the scheduler uses. The option value is chosen from the entry points under the namespace 'nova.scheduler.host_manager' in 'setup.cfg'.
NOTE: The "ironic_host_manager" option is deprecated as of the 17.0.0 Queens release.
Group | Name |
---|---|
DEFAULT | scheduler_host_manager |
driver
¶Type: | string |
---|---|
Default: | filter_scheduler |
The class of the driver used by the scheduler. This should be chosen from one of the entrypoints under the namespace 'nova.scheduler.driver' of file 'setup.cfg'. If nothing is specified in this option, the 'filter_scheduler' is used.
Other options are:
Possible values:
filter_scheduler
caching_scheduler
chance_scheduler
fake_scheduler
You may also set this to the entry point name of a custom scheduler driver, but you will be responsible for creating and maintaining it in your setup.cfg file.
Group | Name |
---|---|
DEFAULT | scheduler_driver |
periodic_task_interval
¶Type: | integer |
---|---|
Default: | 60 |
Periodic task interval.
This value controls how often (in seconds) to run periodic tasks in the scheduler. The specific tasks that are run for each period are determined by the particular scheduler being used.
If this is larger than the nova-service 'service_down_time' setting, Nova may report the scheduler service as down. This is because the scheduler driver is responsible for sending a heartbeat and it will only do that as often as this option allows. As each scheduler can work a little differently than the others, be sure to test this with your selected scheduler.
Possible values:
Related options:
nova-service service_down_time
Group | Name |
---|---|
DEFAULT | scheduler_driver_task_period |
max_attempts
¶Type: | integer |
---|---|
Default: | 3 |
Minimum Value: | 1 |
This is the maximum number of attempts that will be made for a given instance build/move operation. It limits the number of alternate hosts returned by the scheduler. When that list of hosts is exhausted, a MaxRetriesExceeded exception is raised and the instance is set to an error state.
Possible values:
Group | Name |
---|---|
DEFAULT | scheduler_max_attempts |
discover_hosts_in_cells_interval
¶Type: | integer |
---|---|
Default: | -1 |
Minimum Value: | -1 |
Periodic task interval.
This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. If negative (the default), no automatic discovery will occur.
Deployments where compute nodes come and go frequently may want this enabled, while others may prefer to manually discover hosts when one is added, to avoid any overhead from constantly checking. If enabled, each run will select any unmapped hosts from each cell database.
max_placement_results
¶Type: | integer |
---|---|
Default: | 1000 |
Minimum Value: | 1 |
This setting determines the maximum limit on results received from the placement service during a scheduling operation. It effectively limits the number of hosts that may be considered for scheduling requests that match a large number of candidates.
A value of 1 (the minimum) will effectively defer scheduling to the placement service strictly on "will it fit" grounds. A higher value will put an upper cap on the number of results the scheduler will consider during the filtering and weighing process. Large deployments may need to set this lower than the total number of hosts available to limit memory consumption, network traffic, etc. of the scheduler.
This option is only used by the FilterScheduler; if you use a different scheduler, this option has no effect.
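Taken together, a deployment tuning the scheduler options above might use a nova.conf sketch like the following (values are illustrative placeholders; the [scheduler] group is assumed as the home of these options):
[scheduler]
driver = filter_scheduler
# Allow up to five alternate hosts per build/move operation.
max_attempts = 5
# Discover newly added cell hosts every five minutes.
discover_hosts_in_cells_interval = 300
# Cap the number of candidate hosts returned by placement.
max_placement_results = 500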
The serial console feature allows you to connect to a guest in case a graphical console like VNC, RDP or SPICE is not available. This is currently only supported for the libvirt, Ironic and Hyper-V drivers.
enabled
¶Type: | boolean |
---|---|
Default: | false |
Enable the serial console feature.
In order to use this feature, the service nova-serialproxy
needs to run.
This service is typically executed on the controller node.
port_range
¶Type: | string |
---|---|
Default: | 10000:20000 |
A range of TCP ports a guest can use for its backend.
Each instance which gets created will use one port out of this range. If the range is not big enough to provide another port for a new instance, this instance won't get launched.
Possible values:
\d+:\d+
For example 10000:20000
.
Be sure that the first port number is lower than the second port number
and that both are in the range from 0 to 65535.
base_url
¶Type: | URI |
---|---|
Default: | ws://127.0.0.1:6083/ |
The URL an end user would use to connect to the nova-serialproxy
service.
The nova-serialproxy
service is called with this token-enriched URL
and establishes the connection to the proper instance.
Related options:
The IP address must be identical to the address to which the nova-serialproxy service is listening (see option serialproxy_host in this section).
The port must be the same as in the option serialproxy_port of this section.
If you choose to use a secured websocket connection, then start this option with wss:// instead of the unsecured ws://. The options cert and key in the [DEFAULT] section have to be set for that.
proxyclient_address
¶Type: | string |
---|---|
Default: | 127.0.0.1 |
The IP address to which proxy clients (like nova-serialproxy
) should
connect to get the serial console of an instance.
This is typically the IP address of the host of a nova-compute
service.
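Putting the options of this group together, a compute node that exposes serial consoles might carry a sketch like the following (addresses are placeholders; the [serial_console] group name is assumed, and serialproxy_host/serialproxy_port, described below, configure the proxy side):
[serial_console]
enabled = True
# TCP ports the guests on this host may use for their serial console backends.
port_range = 10000:20000
# URL end users connect to; the nova-serialproxy service must listen there.
base_url = ws://controller.example.com:6083/
# Address proxy clients use to reach this compute host.
proxyclient_address = 192.0.2.10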
serialproxy_host
¶Type: | string |
---|---|
Default: | 0.0.0.0 |
The IP address which is used by the nova-serialproxy
service to listen
for incoming requests.
The nova-serialproxy
service listens on this IP address for incoming
connection requests to instances which expose serial console.
Related options:
base_url of this section or use 0.0.0.0 to listen on all addresses.
serialproxy_port
¶Type: | port number |
---|---|
Default: | 6083 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
The port number which is used by the nova-serialproxy
service to listen
for incoming requests.
The nova-serialproxy
service listens on this port number for incoming
connection requests to instances which expose serial console.
Related options:
base_url of this section.
Configuration options for service to service authentication using a service token. These options allow sending a service token along with the user's token when contacting external REST APIs.
send_service_user_token
¶Type: | boolean |
---|---|
Default: | false |
When True, if sending a user token to a REST API, also send a service token.
Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user's behalf, we include a service token along with the user token. Should the user's token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware.
cafile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate cert file
keyfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate key file
insecure
¶Type: | boolean |
---|---|
Default: | false |
Whether to verify HTTPS connections. If set to true, server certificate verification is disabled.
timeout
¶Type: | integer |
---|---|
Default: | <None> |
Timeout value for http requests
auth_type
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication type to load
Group | Name |
---|---|
service_user | auth_plugin |
auth_section
¶Type: | unknown type |
---|---|
Default: | <None> |
Config Section from which to load plugin specific options
auth_url
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication URL
system_scope
¶Type: | unknown type |
---|---|
Default: | <None> |
Scope for system operations
domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID to scope to
domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name to scope to
project_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Project ID to scope to
project_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Project name to scope to
project_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID containing project
project_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name containing project
trust_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Trust ID
default_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
default_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
user_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User ID
username
¶Type: | unknown type |
---|---|
Default: | <None> |
Username
Group | Name |
---|---|
service_user | user-name |
service_user | user_name |
user_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain id
user_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain name
password
¶Type: | unknown type |
---|---|
Default: | <None> |
User's password
tenant_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Tenant ID
tenant_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Tenant Name
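A minimal sketch of the [service_user] group, assuming the keystone 'password' auth plugin and placeholder credentials:
[service_user]
send_service_user_token = True
auth_type = password
auth_url = https://keystone.example.com/v3
username = nova
password = secret
user_domain_name = Default
project_name = service
project_domain_name = Default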
The SPICE console feature allows you to connect to a guest virtual machine. SPICE is a replacement for the fairly limited VNC protocol. The following requirements must be met in order to use SPICE (see the sketch after this list):
Virtualization driver must be libvirt
spice.enabled set to True
vnc.enabled set to False
update html5proxy_base_url
update server_proxyclient_address
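A sketch of the corresponding nova.conf changes (host names and addresses are placeholders):
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[vnc]
enabled = False

[spice]
enabled = True
html5proxy_base_url = http://controller.example.com:6082/spice_auto.html
server_proxyclient_address = 192.0.2.10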
enabled
¶Type: | boolean |
---|---|
Default: | false |
Enable SPICE related features.
Related options:
agent_enabled
¶Type: | boolean |
---|---|
Default: | true |
Enable the SPICE guest agent support on the instances.
The Spice agent works with the Spice protocol to offer a better guest console experience. However, the Spice console can still be used without the Spice Agent. With the Spice agent installed the following features are enabled:
html5proxy_base_url
¶Type: | URI |
---|---|
Default: | http://127.0.0.1:6082/spice_auto.html |
Location of the SPICE HTML5 console proxy.
End users would use this URL to connect to the nova-spicehtml5proxy service. This service will forward the request to the console of an instance.
In order to use SPICE console, the service nova-spicehtml5proxy
should be
running. This service is typically launched on the controller node.
Possible values:
http://host:port/spice_auto.html
where host is the node running nova-spicehtml5proxy
and the port is
typically 6082. Consider not using the default value, as it is not well defined for any real deployment.
Related options:
html5proxy_host
and html5proxy_port
options.
The access URL returned by the compute node must have the host
and port where the nova-spicehtml5proxy
service is listening.server_listen
¶Type: | string |
---|---|
Default: | 127.0.0.1 |
The address where the SPICE server running on the instances should listen.
Typically, the nova-spicehtml5proxy
proxy client runs on the controller
node and connects over the private network to this address on the compute
node(s).
Possible values:
server_proxyclient_address
¶Type: | string |
---|---|
Default: | 127.0.0.1 |
The address used by nova-spicehtml5proxy
client to connect to instance
console.
Typically, the nova-spicehtml5proxy
proxy client runs on the
controller node and connects over the private network to this address on the
compute node(s).
Possible values:
Related options:
server_listen
option.
The proxy client must be able to access the address specified in
server_listen
using the value of this option.keymap
¶Type: | string |
---|---|
Default: | en-us |
A keyboard layout which is supported by the underlying hypervisor on this node.
Possible values:
This is usually an 'IETF language tag' (default is 'en-us'). If you use QEMU as the hypervisor, you can find the list of supported keyboard layouts at /usr/share/qemu/keymaps.
html5proxy_host
¶Type: | host address |
---|---|
Default: | 0.0.0.0 |
IP address or a hostname on which the nova-spicehtml5proxy
service
listens for incoming requests.
Related options:
html5proxy_base_url
option.
The nova-spicehtml5proxy
service must be listening on a host that is
accessible from the HTML5 client.html5proxy_port
¶Type: | port number |
---|---|
Default: | 6082 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Port on which the nova-spicehtml5proxy
service listens for incoming
requests.
Related options:
html5proxy_base_url
option.
The nova-spicehtml5proxy
service must be listening on a port that is
accessible from the HTML5 client.upgrade_levels options are used to set version cap for RPC messages sent between different nova services. By default all services send messages using the latest version they know about. The compute upgrade level is an important part of rolling upgrades where old and new nova-compute services run side by side. The other options can largely be ignored, and are only kept to help with a possible future backport issue.
compute
¶Type: | string |
---|---|
Default: | <None> |
Compute RPC API version cap.
By default, we always send messages using the most recent version the client knows about.
Where you have old and new compute services running, you should set this to the lowest deployed version. This is to guarantee that all services never send messages that one of the compute nodes can't understand. Note that we only support upgrading from release N to release N+1.
Set this option to "auto" if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment.
Possible values:
cells
¶Type: | string |
---|---|
Default: | <None> |
Cells RPC API version cap
intercell
¶Type: | string |
---|---|
Default: | <None> |
Intercell RPC API version cap
cert
¶Type: | string |
---|---|
Default: | <None> |
Cert RPC API version cap
scheduler
¶Type: | string |
---|---|
Default: | <None> |
Scheduler RPC API version cap
conductor
¶Type: | string |
---|---|
Default: | <None> |
Conductor RPC API version cap
console
¶Type: | string |
---|---|
Default: | <None> |
Console RPC API version cap
consoleauth
¶Type: | string |
---|---|
Default: | <None> |
Consoleauth RPC API version cap
network
¶Type: | string |
---|---|
Default: | <None> |
Network RPC API version cap
baseapi
¶Type: | string |
---|---|
Default: | <None> |
Base API RPC API version cap
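For example, during a rolling upgrade an operator might let the compute RPC version be determined automatically with a sketch such as:
[upgrade_levels]
# Let the compute RPC module pick a version based on deployed service versions.
compute = auto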
root_token_id
¶Type: | string |
---|---|
Default: | <None> |
Root token for Vault.
vault_url
¶Type: | string |
---|---|
Default: | http://127.0.0.1:8200 |
Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200"
ssl_ca_crt_file
¶Type: | string |
---|---|
Default: | <None> |
Absolute path to the CA certificate file.
use_ssl
¶Type: | boolean |
---|---|
Default: | false |
SSL Enabled/Disabled
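For illustration, connecting the key manager to a Vault server over TLS might look like the following (the [vault] group placement is an assumption; the token and paths are placeholders):
[vault]
root_token_id = s.xxxxxxxxxxxxxxxx
vault_url = https://vault.example.com:8200
use_ssl = True
ssl_ca_crt_file = /etc/nova/vault-ca.pem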
Options within this group control the authentication of the vendordata subsystem of the metadata API server (and config drive) with external systems.
cafile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate cert file
keyfile
¶Type: | string |
---|---|
Default: | <None> |
PEM encoded client certificate key file
insecure
¶Type: | boolean |
---|---|
Default: | false |
Whether to verify HTTPS connections. If set to true, server certificate verification is disabled.
timeout
¶Type: | integer |
---|---|
Default: | <None> |
Timeout value for http requests
auth_type
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication type to load
Group | Name |
---|---|
vendordata_dynamic_auth | auth_plugin |
auth_section
¶Type: | unknown type |
---|---|
Default: | <None> |
Config Section from which to load plugin specific options
auth_url
¶Type: | unknown type |
---|---|
Default: | <None> |
Authentication URL
system_scope
¶Type: | unknown type |
---|---|
Default: | <None> |
Scope for system operations
domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID to scope to
domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name to scope to
project_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Project ID to scope to
project_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Project name to scope to
project_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain ID containing project
project_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Domain name containing project
trust_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Trust ID
default_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
default_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
user_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User ID
username
¶Type: | unknown type |
---|---|
Default: | <None> |
Username
Group | Name |
---|---|
vendordata_dynamic_auth | user-name |
vendordata_dynamic_auth | user_name |
user_domain_id
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain id
user_domain_name
¶Type: | unknown type |
---|---|
Default: | <None> |
User's domain name
password
¶Type: | unknown type |
---|---|
Default: | <None> |
User's password
tenant_id
¶Type: | unknown type |
---|---|
Default: | <None> |
Tenant ID
tenant_name
¶Type: | unknown type |
---|---|
Default: | <None> |
Tenant Name
Related options: The following options must be set in order to launch VMware-based virtual machines (see the sketch below).
compute_driver: Must use vmwareapi.VMwareVCDriver.
vmware.host_username
vmware.host_password
vmware.cluster_name
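A minimal sketch with placeholder credentials and cluster name:
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = vcenter.example.com
host_username = administrator@vsphere.local
host_password = secret
cluster_name = NovaCluster
# Optional: restrict usable datastores by name.
datastore_regex = nas.*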
vlan_interface
¶Type: | string |
---|---|
Default: | vmnic0 |
This option specifies the physical ethernet adapter name for VLAN networking.
Set the vlan_interface configuration option to match the ESX host interface that handles VLAN-tagged VM traffic.
Possible values:
integration_bridge
¶Type: | string |
---|---|
Default: | <None> |
This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set.
Possible values:
console_delay_seconds
¶Type: | integer |
---|---|
Default: | <None> |
Minimum Value: | 0 |
Set this value if affected by an increased network latency causing repeated characters when typing in a remote console.
serial_port_service_uri
¶Type: | string |
---|---|
Default: | <None> |
Identifies the remote system where the serial port traffic will be sent.
This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be a virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs.
Possible values:
serial_port_proxy_uri
¶Type: | URI |
---|---|
Default: | <None> |
Identifies a proxy service that provides network access to the serial_port_service_uri.
Possible values:
Related options: This option is ignored if serial_port_service_uri is not specified.
serial_port_service_uri
serial_log_dir
¶Type: | string |
---|---|
Default: | /opt/vmware/vspc |
Specifies the directory where the Virtual Serial Port Concentrator is storing console log files. It should match the 'serial_log_dir' config value of VSPC.
host_ip
¶Type: | host address |
---|---|
Default: | <None> |
Hostname or IP address for connection to VMware vCenter host.
host_port
¶Type: | port number |
---|---|
Default: | 443 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Port for connection to VMware vCenter host.
host_username
¶Type: | string |
---|---|
Default: | <None> |
Username for connection to VMware vCenter host.
host_password
¶Type: | string |
---|---|
Default: | <None> |
Password for connection to VMware vCenter host.
ca_file
¶Type: | string |
---|---|
Default: | <None> |
Specifies the CA bundle file to be used in verifying the vCenter server certificate.
insecure
¶Type: | boolean |
---|---|
Default: | false |
If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification.
Related options:
ca_file: This option is ignored if "ca_file" is set.
cluster_name
¶Type: | string |
---|---|
Default: | <None> |
Name of a VMware Cluster ComputeResource.
datastore_regex
¶Type: | string |
---|---|
Default: | <None> |
Regular expression pattern to match the name of datastore.
The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas".
NOTE: If no regex is given, it just picks the datastore with the most free space.
Possible values:
task_poll_interval
¶Type: | floating point |
---|---|
Default: | 0.5 |
Time interval in seconds to poll remote tasks invoked on VMware VC server.
api_retry_count
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 0 |
Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc.
vnc_port
¶Type: | port number |
---|---|
Default: | 5900 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
This option specifies VNC starting port.
Every VM created on an ESX host can have a VNC client enabled for remote connections. The 'vnc_port' option sets the default starting port for these VNC clients.
Possible values:
Related options: The options below should be set to enable the VNC client.
vnc.enabled = True
vnc_port_total
vnc_port_total
¶Type: | integer |
---|---|
Default: | 10000 |
Minimum Value: | 0 |
Total number of VNC ports.
use_linked_clone
¶Type: | boolean |
---|---|
Default: | true |
This option enables/disables the use of linked clone.
The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy the file again from the OpenStack Image service.
If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates a copy of the virtual machine that shares virtual disks with its parent VM.
connection_pool_size
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 10 |
This option sets the HTTP connection pool size.
The connection pool size is the maximum number of connections from nova to vSphere. It should only be increased if there are warnings indicating that the connection pool is full, otherwise, the default should suffice.
pbm_enabled
¶Type: | boolean |
---|---|
Default: | false |
This option enables or disables storage policy based placement of instances.
Related options:
pbm_wsdl_location
¶Type: | string |
---|---|
Default: | <None> |
This option specifies the PBM service WSDL file location URL.
Setting this will disable storage policy based placement of instances.
Possible values:
pbm_default_policy
¶Type: | string |
---|---|
Default: | <None> |
This option specifies the default policy to be used.
If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used.
Possible values:
Related options:
maximum_objects
¶Type: | integer |
---|---|
Default: | 100 |
Minimum Value: | 0 |
This option specifies the limit on the maximum number of objects to return in a single result.
A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests.
cache_prefix
¶Type: | string |
---|---|
Default: | <None> |
This option adds a prefix to the folder where cached images are stored
This is not the full path - just a folder prefix. This should only be used when a datastore cache is shared between compute nodes.
Note: This should only be used when the compute nodes are running on the same host or have a shared file system.
Possible values:
Virtual Network Computer (VNC) can be used to provide remote desktop console access to instances for tenants and/or administrators.
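As an illustrative sketch, a libvirt compute node served by a controller-hosted noVNC proxy might use the following (addresses are placeholders; the options are described below):
[vnc]
enabled = True
# Address the instance VNC servers listen on (compute node side).
server_listen = 192.0.2.10
# Address the proxy uses to reach this compute node.
server_proxyclient_address = 192.0.2.10
# Public URL handed to clients such as Horizon.
novncproxy_base_url = http://controller.example.com:6080/vnc_auto.html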
enabled
¶Type: | boolean |
---|---|
Default: | true |
Enable VNC related features.
Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest.
Group | Name |
---|---|
DEFAULT | vnc_enabled |
keymap
¶Type: | string |
---|---|
Default: | en-us |
Keymap for VNC.
The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default.
Possible values:
/usr/share/qemu/keymaps
.Group | Name |
---|---|
DEFAULT | vnc_keymap |
server_listen
¶Type: | host address |
---|---|
Default: | 127.0.0.1 |
The IP address or hostname on which an instance should listen for incoming VNC connection requests on this node.
Group | Name |
---|---|
DEFAULT | vncserver_listen |
vnc | vncserver_listen |
server_proxyclient_address
¶Type: | host address |
---|---|
Default: | 127.0.0.1 |
Private, internal IP address or hostname of VNC console proxy.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients.
This option sets the private address to which proxy clients, such as
nova-xvpvncproxy
, should connect.
Group | Name |
---|---|
DEFAULT | vncserver_proxyclient_address |
vnc | vncserver_proxyclient_address |
novncproxy_base_url
¶Type: | URI |
---|---|
Default: | http://127.0.0.1:6080/vnc_auto.html |
Public address of noVNC VNC console proxy.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.
This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions.
Related options:
Group | Name |
---|---|
DEFAULT | novncproxy_base_url |
xvpvncproxy_host
¶Type: | host address |
---|---|
Default: | 0.0.0.0 |
IP address or hostname that the XVP VNC console proxy should bind to.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.
This option sets the private address to which the XVP VNC console proxy service should bind.
Related options:
Group | Name |
---|---|
DEFAULT | xvpvncproxy_host |
xvpvncproxy_port
¶Type: | port number |
---|---|
Default: | 6081 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Port that the XVP VNC console proxy should bind to.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.
This option sets the private port to which the XVP VNC console proxy service should bind.
Related options:
Group | Name |
---|---|
DEFAULT | xvpvncproxy_port |
xvpvncproxy_base_url
¶Type: | URI |
---|---|
Default: | http://127.0.0.1:6081/console |
Public URL address of XVP VNC console proxy.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.
This option sets the public base URL to which client systems will connect. XVP clients can use this address to connect to the XVP instance and, by extension, the VNC sessions.
Related options:
Group | Name |
---|---|
DEFAULT | xvpvncproxy_base_url |
novncproxy_host
¶Type: | string |
---|---|
Default: | 0.0.0.0 |
IP address that the noVNC console proxy should bind to.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.
This option sets the private address to which the noVNC console proxy service should bind.
Related options:
Group | Name |
---|---|
DEFAULT | novncproxy_host |
novncproxy_port
¶Type: | port number |
---|---|
Default: | 6080 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
Port that the noVNC console proxy should bind to.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.
This option sets the private port to which the noVNC console proxy service should bind.
Related options:
Group | Name |
---|---|
DEFAULT | novncproxy_port |
auth_schemes
¶Type: | list |
---|---|
Default: | none |
The authentication schemes to use with the compute node.
Control what RFB authentication schemes are permitted for connections between the proxy and the compute host. If multiple schemes are enabled, the first matching scheme will be used, thus the strongest schemes should be listed first.
Possible values:
none
: allow connection without authenticationvencrypt
: use VeNCrypt authentication schemeRelated options:
[vnc]vencrypt_client_key
, [vnc]vencrypt_client_cert
: must also be setvencrypt_client_key
¶Type: | string |
---|---|
Default: | <None> |
The path to the client key file (for x509)
The fully qualified path to a PEM file containing the private key which the VNC proxy server presents to the compute node during VNC authentication.
Related options:
vnc.auth_schemes
: must include vencrypt
vnc.vencrypt_client_cert
: must also be setvencrypt_client_cert
¶Type: | string |
---|---|
Default: | <None> |
The path to the client certificate PEM file (for x509)
The fully qualified path to a PEM file containing the x509 certificate which the VNC proxy server presents to the compute node during VNC authentication.
Related options:
vnc.auth_schemes
: must include vencrypt
vnc.vencrypt_client_key
: must also be setvencrypt_ca_certs
¶Type: | string |
---|---|
Default: | <None> |
The path to the CA certificate PEM file
The fully qualified path to a PEM file containing one or more x509 certificates for the certificate authorities used by the compute node VNC server.
Related options:
vnc.auth_schemes
: must include vencrypt
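Putting the VeNCrypt options together, a proxy host might be configured along these lines (paths are placeholders):
[vnc]
# List the strongest scheme first.
auth_schemes = vencrypt,none
vencrypt_client_key = /etc/pki/nova-novncproxy/client-key.pem
vencrypt_client_cert = /etc/pki/nova-novncproxy/client-cert.pem
vencrypt_ca_certs = /etc/pki/nova-novncproxy/ca-cert.pem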
A collection of workarounds used to mitigate bugs or issues found in system tools (e.g. Libvirt or QEMU) or Nova itself under certain conditions. These should only be enabled in exceptional circumstances. All options are linked against bug IDs, where more information on the issue can be found.
disable_rootwrap
¶Type: | boolean |
---|---|
Default: | false |
Use sudo instead of rootwrap.
Allow fallback to sudo for performance reasons.
For more information, refer to the bug report:
Possible values:
Interdependencies to other options:
disable_libvirt_livesnapshot
¶Type: | boolean |
---|---|
Default: | false |
Disable live snapshots when using the libvirt driver.
Live snapshots allow the snapshot of the disk to happen without an interruption to the guest, using coordination with a guest agent to quiesce the filesystem.
When using libvirt 1.2.2, live snapshots fail intermittently under load (likely related to concurrent libvirt/qemu operations). This config option provides a mechanism to disable live snapshot, in favor of cold snapshot, while this is resolved. Cold snapshot causes an instance outage while the guest is going through the snapshotting process.
For more information, refer to the bug report:
Possible values:
handle_virt_lifecycle_events
¶Type: | boolean |
---|---|
Default: | true |
Enable handling of events emitted from compute drivers.
Many compute drivers emit lifecycle events, which are events that occur when, for example, an instance is starting or stopping. If the instance is going through task state changes due to an API operation, like resize, the events are ignored.
This is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and that the instance can be shutdown automatically. Unfortunately, this can race in some conditions, for example in reboot operations or when the compute service or the host is rebooted (planned or due to an outage). If such races are common, then it is advisable to disable this feature.
Care should be taken when this feature is disabled and 'sync_power_state_interval' is set to a negative value. In this case, any instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.
For more information, refer to the bug report:
Interdependencies to other options:
sync_power_state_interval
is negative and this feature is disabled,
then instances that get out of sync between the hypervisor and the Nova
database will have to be synchronized manually.disable_group_policy_check_upcall
¶Type: | boolean |
---|---|
Default: | false |
Disable the server group policy check upcall in compute.
In order to detect races with server group affinity policy, the compute service attempts to validate that the policy was not violated by the scheduler. It does this by making an upcall to the API database to list the instances in the server group for one that it is booting, which violates our api/cell isolation goals. Eventually this will be solved by proper affinity guarantees in the scheduler and placement service, but until then, this late check is needed to ensure proper affinity policy.
Operators that desire api/cell isolation over this check should enable this flag, which will avoid making that upcall from compute.
Related options:
ensure_libvirt_rbd_instance_dir_cleanup
¶Type: | boolean |
---|---|
Default: | false |
Ensure the instance directory is removed during clean up when using rbd.
When enabled this workaround will ensure that the instance directory is always
removed during cleanup on hosts using [libvirt]/images_type=rbd
. This
avoids the following bugs with evacuation and revert resize clean up that lead
to the instance directory remaining on the host:
https://bugs.launchpad.net/nova/+bug/1414895
https://bugs.launchpad.net/nova/+bug/1761062
Both of these bugs can then result in DestinationDiskExists
errors being
raised if the instances ever attempt to return to the host.
Warning
Operators will need to ensure that the instance directory itself,
specified by [DEFAULT]/instances_path
, is not shared between computes
before enabling this workaround otherwise the console.log, kernels, ramdisks
and any additional files being used by the running instance will be lost.
Related options:
compute_driver
(libvirt)[libvirt]/images_type
(rbd)instances_path
enable_numa_live_migration
¶Type: | boolean |
---|---|
Default: | false |
Enable live migration of instances with NUMA topologies.
Live migration of instances with NUMA topologies is disabled by default when using the libvirt driver. This includes live migration of instances with CPU pinning or hugepages. CPU pinning and huge page information for such instances is not currently re-calculated, as noted in bug #1289064. This means that if instances were already present on the destination host, the migrated instance could be placed on the same dedicated cores as these instances or use hugepages allocated for another instance. Alternately, if the host platforms were not homogeneous, the instance could be assigned to non-existent cores or be inadvertently split across host NUMA nodes.
Despite these known issues, there may be cases where live migration is necessary. By enabling this option, operators that are aware of the issues and are willing to manually work around them can enable live migration support for these instances.
Related options:
compute_driver
: Only the libvirt driver is affected.
Options under this group are used to configure WSGI (Web Server Gateway Interface). WSGI is used to serve API requests.
api_paste_config
¶Type: | string |
---|---|
Default: | api-paste.ini |
This option represents a file name for the paste.deploy config for nova-api.
Possible values:
Group | Name |
---|---|
DEFAULT | api_paste_config |
wsgi_log_format
¶Type: | string |
---|---|
Default: | %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f |
It represents a python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.
This option is used for building custom request log lines when running nova-api under eventlet. If used under uwsgi or apache, this option has no effect.
Possible values:
Group | Name |
---|---|
DEFAULT | wsgi_log_format |
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
Reason: | This option only works when running nova-api under eventlet, and encodes very eventlet specific pieces of information. Starting in Pike the preferred model for running nova-api is under uwsgi or apache mod_wsgi. |
---|
secure_proxy_ssl_header
¶Type: | string |
---|---|
Default: | <None> |
This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by an SSL terminating proxy.
Possible values:
WARNING: Do not set this unless you know what you are doing.
Make sure ALL of the following are true before setting this (assuming the values from the example above):
Your API is behind a proxy.
Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it.
If any of those are not true, you should keep this setting set to None.
Group | Name |
---|---|
DEFAULT | secure_proxy_ssl_header |
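For example, behind a TLS-terminating proxy that sets the X-Forwarded-Proto header, the conventional value would be the following sketch (confirm first that your proxy strips the header from client requests):
[wsgi]
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO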
ssl_ca_file
¶Type: | string |
---|---|
Default: | <None> |
This option allows setting path to the CA certificate file that should be used to verify connecting clients.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | ssl_ca_file |
ssl_cert_file
¶Type: | string |
---|---|
Default: | <None> |
This option allows setting path to the SSL certificate of API server.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | ssl_cert_file |
ssl_key_file
¶Type: | string |
---|---|
Default: | <None> |
This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | ssl_key_file |
tcp_keepidle
¶Type: | integer |
---|---|
Default: | 600 |
Minimum Value: | 0 |
This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep the connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep the connection active. Not supported on OS X.
Related options:
Group | Name |
---|---|
DEFAULT | tcp_keepidle |
default_pool_size
¶Type: | integer |
---|---|
Default: | 1000 |
Minimum Value: | 0 |
This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option.
Group | Name |
---|---|
DEFAULT | wsgi_default_pool_size |
max_header_line
¶Type: | integer |
---|---|
Default: | 16384 |
Minimum Value: | 0 |
This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
Since TCP is a stream-based protocol, in order to reuse a connection, HTTP has to have a way to indicate the end of the previous response and the beginning of the next. Hence, in a keep_alive case, all messages must have a self-defined message length.
Group | Name |
---|---|
DEFAULT | max_header_line |
keep_alive
¶Type: | boolean |
---|---|
Default: | true |
This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse.
Possible values:
Related options:
Group | Name |
---|---|
DEFAULT | wsgi_keep_alive |
client_socket_timeout
¶Type: | integer |
---|---|
Default: | 900 |
Minimum Value: | 0 |
This option specifies the timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. It indicates timeout on individual read/writes on the socket connection. To wait forever set to 0.
Group | Name |
---|---|
DEFAULT | client_socket_timeout |
XenServer options are used when the compute_driver is set to use XenServer (compute_driver=xenapi.XenAPIDriver). Must specify connection_url, connection_password and ovs_integration_bridge to use compute_driver=xenapi.XenAPIDriver.
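A minimal sketch of the required settings, with placeholder values:
[DEFAULT]
compute_driver = xenapi.XenAPIDriver

[xenserver]
connection_url = http://xenserver.example.com
connection_username = root
connection_password = secret
# Environment specific; the integration bridge name below is only a placeholder.
ovs_integration_bridge = xapi1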
agent_timeout
¶Type: | integer |
---|---|
Default: | 30 |
Minimum Value: | 0 |
Number of seconds to wait for agent's reply to a request.
Nova configures/performs certain administrative actions on a server with the help of an agent that's installed on the server. The communication between Nova and the agent is achieved via sharing messages, called records, over xenstore, a shared storage across all the domains on a XenServer host. Operations performed by the agent on behalf of nova are: 'version', 'key_init', 'password', 'resetnetwork', 'inject_file', and 'agentupdate'.
To perform one of the above operations, the xapi 'agent' plugin writes the command and its associated parameters to a certain location known to the domain and awaits response. On being notified of the message, the agent performs appropriate actions on the server and writes the result back to xenstore. This result is then read by the xapi 'agent' plugin to determine the success/failure of the operation.
This config option determines how long the xapi 'agent' plugin shall wait to read the response off of xenstore for a given request/command. If the agent on the instance fails to write the result in this time period, the operation is considered to have timed out.
Related options:
agent_version_timeout
agent_resetnetwork_timeout
agent_version_timeout
¶Type: | integer |
---|---|
Default: | 300 |
Minimum Value: | 0 |
Number of seconds to wait for the agent's reply to a version request.
This indicates the amount of time xapi 'agent' plugin waits for the agent to
respond to the 'version' request specifically. The generic timeout for agent
communication agent_timeout
is ignored in this case.
During the build process the 'version' request is used to determine if the agent is available/operational to perform other requests such as 'resetnetwork', 'password', 'key_init' and 'inject_file'. If the 'version' call fails, the other configuration is skipped. So, this configuration option can also be interpreted as time in which agent is expected to be fully operational.
agent_resetnetwork_timeout
¶Type: | integer |
---|---|
Default: | 60 |
Minimum Value: | 0 |
Number of seconds to wait for agent's reply to resetnetwork request.
This indicates the amount of time xapi 'agent' plugin waits for the agent to
respond to the 'resetnetwork' request specifically. The generic timeout for
agent communication agent_timeout
is ignored in this case.
agent_path
¶Type: | string |
---|---|
Default: | usr/sbin/xe-update-networking |
Path to locate guest agent on the server.
Specifies the path in which the XenAPI guest agent should be located. If the agent is present, network configuration is not injected into the image.
Related options:
For this option to have an effect:
* flat_injected
should be set to True
* compute_driver
should be set to xenapi.XenAPIDriver
disable_agent
¶Type: | boolean |
---|---|
Default: | false |
Disables the use of XenAPI agent.
This configuration option determines whether the use of the agent should be enabled,
regardless of what image properties are present. Image properties have
an effect only when this is set to True
. Read description of config option
use_agent_default
for more information.
Related options:
use_agent_default
use_agent_default
¶Type: | boolean |
---|---|
Default: | false |
Whether or not to use the agent by default when its usage is enabled but not indicated by the image.
The use of XenAPI agent can be disabled altogether using the configuration
option disable_agent
. However, if it is not disabled, the use of an agent
can still be controlled by the image in use through one of its properties,
xenapi_use_agent
. If this property is either not present or specified
incorrectly on the image, the use of agent is determined by this configuration
option.
Note that if this configuration is set to True
when the agent is not
present, the boot times will increase significantly.
Related options:
disable_agent
login_timeout
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 0 |
Timeout in seconds for XenAPI login.
connection_concurrent
¶Type: | integer |
---|---|
Default: | 5 |
Minimum Value: | 1 |
Maximum number of concurrent XenAPI connections.
In nova, multiple XenAPI requests can happen at a time. Configuring this option will parallelize access to the XenAPI session, which allows you to make concurrent XenAPI connections.
cache_images
¶Type: | string |
---|---|
Default: | all |
Valid Values: | all, some, none |
Cache glance images locally.
The value for this option must be chosen from the choices listed here. Configuring a value other than these will default to 'all'.
Note: There is nothing that deletes these images.
Possible values:
image_compression_level
¶Type: | integer |
---|---|
Default: | <None> |
Minimum Value: | 1 |
Maximum Value: | 9 |
Compression level for images.
By setting this option we can configure the gzip compression level. This option sets GZIP environment variable before spawning tar -cz to force the compression level. It defaults to none, which means the GZIP environment variable is not set and the default (usually -6) is used.
Possible values:
default_os_type
¶Type: | string |
---|---|
Default: | linux |
Default OS type used when uploading an image to glance
block_device_creation_timeout
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 1 |
Time in secs to wait for a block device to be created
max_kernel_ramdisk_size
¶Type: | integer |
---|---|
Default: | 16777216 |
Maximum size in bytes of kernel or ramdisk images.
Specifying the maximum size of kernel or ramdisk will avoid copying large files to dom0 and filling up /boot/guest.
sr_matching_filter
¶Type: | string |
---|---|
Default: | default-sr:true |
Filter for finding the SR to be used to install guest instances on.
Possible values:
sparse_copy
¶Type: | boolean |
---|---|
Default: | true |
Whether to use sparse_copy for copying data on a resize down. (False will use standard dd). This speeds up resizes down considerably since large runs of zeros won't have to be rsynced.
num_vbd_unplug_retries
¶Type: | integer |
---|---|
Default: | 10 |
Minimum Value: | 0 |
Maximum number of retries to unplug VBD. If set to 0, it will try once with no retries.
ipxe_network_name
¶Type: | string |
---|---|
Default: | <None> |
Name of network to use for booting iPXE ISOs.
An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.
By default this option is not set. Enable this option to boot an iPXE ISO.
Related Options:
ipxe_boot_menu_url
¶Type: | string |
---|---|
Default: | <None> |
URL to the iPXE boot menu.
An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.
By default this option is not set. Enable this option to boot an iPXE ISO.
Related Options:
ipxe_mkisofs_cmd
¶Type: | string |
---|---|
Default: | mkisofs |
Name and optionally path of the tool used for ISO image creation.
An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.
Note: By default mkisofs is not present in the Dom0, so the package can either be manually added to Dom0, or the mkisofs binary can be included in the image itself.
Related Options:
connection_url
¶Type: | string |
---|---|
Default: | <None> |
URL for connection to XenServer/Xen Cloud Platform. A special value of unix://local can be used to connect to the local unix socket.
Possible values:
connection_username
¶Type: | string |
---|---|
Default: | root |
Username for connection to XenServer/Xen Cloud Platform
connection_password
¶Type: | string |
---|---|
Default: | <None> |
Password for connection to XenServer/Xen Cloud Platform
vhd_coalesce_poll_interval
¶Type: | floating point |
---|---|
Default: | 5.0 |
Minimum Value: | 0 |
The interval used for polling for coalescing VHDs.
This is the interval after which the task of coalesce VHD is performed, until it reaches the max attempts that is set by vhd_coalesce_max_attempts.
Related options:
check_host
¶Type: | boolean |
---|---|
Default: | true |
Ensure compute service is running on host XenAPI connects to. This option must be set to false if the 'independent_compute' option is set to true.
Possible values:
Related options:
vhd_coalesce_max_attempts
¶Type: | integer |
---|---|
Default: | 20 |
Minimum Value: | 0 |
Max number of times to poll for VHD to coalesce.
This option determines the maximum number of attempts that can be made for coalescing the VHD before giving up.
Related options:
sr_base_path
¶Type: | string |
---|---|
Default: | /var/run/sr-mount |
Base path to the storage repository on the XenServer host.
target_host
¶Type: | host address |
---|---|
Default: | <None> |
The iSCSI Target Host.
This option represents the hostname or ip of the iSCSI Target. If the target host is not present in the connection information from the volume provider then the value from this option is taken.
Possible values:
target_port
¶Type: | port number |
---|---|
Default: | 3260 |
Minimum Value: | 0 |
Maximum Value: | 65535 |
The iSCSI Target Port.
This option represents the port of the iSCSI Target. If the target port is not present in the connection information from the volume provider then the value from this option is taken.
independent_compute
¶Type: | boolean |
---|---|
Default: | false |
Used to prevent attempts to attach VBDs locally, so Nova can be run in a VM on a different host.
Related options:
CONF.flat_injected
(Must be False)CONF.xenserver.check_host
(Must be False)CONF.default_ephemeral_format
(Must be unset or 'ext3')running_timeout
¶Type: | integer |
---|---|
Default: | 60 |
Minimum Value: | 0 |
Wait time for instances to go to running state.
Provide an integer value representing time in seconds to set the wait time for an instance to go to running state.
When a request to create an instance is received by nova-api and communicated to nova-compute, the creation of the instance occurs through interaction with Xen via XenAPI in the compute node. Once the node on which the instance(s) are to be launched is decided by nova-scheduler and the launch is triggered, a certain amount of wait time is involved until the instance(s) can become available and 'running'. This wait time is defined by running_timeout. If the instances do not go to running state within this specified wait time, the launch expires and the instance(s) are set to 'error' state.
vif_driver
¶Type: | string |
---|---|
Default: | nova.virt.xenapi.vif.XenAPIOpenVswitchDriver |
The XenAPI VIF driver using XenServer Network APIs.
Provide a string value representing the VIF XenAPI vif driver to use for plugging virtual network interfaces.
Xen configuration uses bridging within the backend domain to allow all VMs to appear on the network as individual hosts. Bridge interfaces are used to create a XenServer VLAN network in which the VIFs for the VM instances are plugged. If no VIF bridge driver is plugged, the bridge is not made available. This configuration option takes in a value for the VIF driver.
Possible values:
Related options:
vlan_interface
ovs_integration_bridge
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
Reason: | There are only two in-tree vif drivers for XenServer. XenAPIBridgeDriver is for nova-network which is deprecated and XenAPIOpenVswitchDriver is for Neutron which is the default configuration for Nova since the 15.0.0 Ocata release. In the future the "use_neutron" configuration option will be used to determine which vif driver to use. |
---|
image_upload_handler
¶Type: | string |
---|---|
Default: | nova.virt.xenapi.image.glance.GlanceStore |
Dom0 plugin driver used to handle image uploads.
Provide a string value representing a plugin driver required to handle the image uploading to GlanceStore.
Images and snapshots from XenServer need to be uploaded to the data store for use. image_upload_handler takes in a value for the Dom0 plugin driver. This driver is then called to upload images to the GlanceStore.
introduce_vdi_retry_wait
¶Type: | integer |
---|---|
Default: | 20 |
Minimum Value: | 0 |
Number of seconds to wait for SR to settle if the VDI does not exist when first introduced.
Some SRs, particularly iSCSI connections, are slow to see the VDIs right after they got introduced. Setting this option to a time interval will make the SR wait for that time period before raising a VDI-not-found exception.
ovs_integration_bridge
¶Type: | string |
---|---|
Default: | <None> |
The name of the integration Bridge that is used with xenapi when connecting with Open vSwitch.
Note: The value of this config option is dependent on the environment, therefore this configuration value must be set accordingly if you are using XenAPI.
Possible values:
use_join_force
¶Type: | boolean |
---|---|
Default: | true |
When adding new host to a pool, this will append a --force flag to the command, forcing hosts to join a pool, even if they have different CPUs.
Since XenServer version 5.6 it is possible to create a pool of hosts that have different CPU capabilities. To accommodate CPU differences, XenServer limited the features it uses to determine CPU compatibility to only those exposed by the CPU, and support for CPU masking was added. Despite this effort to level differences between CPUs, it is still possible that adding a new host will fail, so the option to force the join was introduced.
console_public_hostname
¶Type: | string |
---|---|
Default: | <current_hostname> |
Publicly visible name for this console host.
Possible values:
Group | Name |
---|---|
DEFAULT | console_public_hostname |
Configuration options for XVP. xvp (Xen VNC Proxy) is a proxy server providing password-protected VNC-based access to the consoles of virtual machines hosted on Citrix XenServer.
console_xvp_conf_template
¶Type: | string |
---|---|
Default: | $pybasedir/nova/console/xvp.conf.template |
XVP conf template
Group | Name |
---|---|
DEFAULT | console_xvp_conf_template |
console_xvp_conf
¶Type: | string |
---|---|
Default: | /etc/xvp.conf |
Generated XVP conf file
Group | Name |
---|---|
DEFAULT | console_xvp_conf |
console_xvp_pid
¶Type: | string |
---|---|
Default: | /var/run/xvp.pid |
XVP master process pid file
Group | Name |
---|---|
DEFAULT | console_xvp_pid |