Dell EMC VxFlex OS (formerly named Dell EMC ScaleIO) is a software-only solution that uses existing servers' local disks and LAN to create a virtual SAN that has all of the benefits of external storage, but at a fraction of the cost and complexity. Using the driver, Block Storage hosts can connect to a VxFlex OS Storage cluster.
The Dell EMC VxFlex OS Cinder driver is designed and tested to work with both VxFlex OS and ScaleIO. The configuration options are identical for both VxFlex OS and ScaleIO.
For the complete VxFlex OS product documentation, refer to the official VxFlex OS documentation.
The Dell EMC VxFlex OS Block Storage driver has been tested against the following versions of ScaleIO and VxFlex OS and found to be compatible:
Please consult the Official VxFlex OS documentation to determine supported operating systems for each version of VxFlex OS or ScaleIO.
Note
Ubuntu users must follow the specific instructions in the VxFlex OS Deployment Guide for Ubuntu environments. See the Deploying on Ubuntu Servers section of the VxFlex OS Deployment Guide, available from the official VxFlex OS documentation.
This section explains how to configure and connect the block storage nodes to a VxFlex OS storage cluster.
Edit the cinder.conf file by adding the configuration below under a new section (for example, [scaleio]) and change the enabled_backends setting (in the [DEFAULT] section) to include this new back end. The configuration file is usually located at /etc/cinder/cinder.conf.
For a configuration example, refer to the cinder.conf example file below.
Configure the driver name by adding the following parameter:
volume_driver = cinder.volume.drivers.dell_emc.scaleio.driver.ScaleIODriver
The VxFlex OS Gateway provides a REST interface to VxFlex OS.
Configure the Gateway server IP address by adding the following parameter:
san_ip = <VxFlex OS GATEWAY IP>
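Before proceeding, you can optionally verify that the Gateway REST interface is reachable from the Block Storage node. A minimal check, assuming the Gateway uses the standard /api/login endpoint over HTTPS with a self-signed certificate (the endpoint and flags may vary between VxFlex OS versions):
$ curl -k -u SIO_USER https://GATEWAY_IP/api/login
A successful call returns an authentication token; an error here usually indicates a connectivity or credential problem rather than a driver issue.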
Multiple Storage Pools and Protection Domains can be listed for use by the virtual machines. The list should include every Protection Domain and Storage Pool pair that you would like Cinder to utilize.
To retrieve the available Storage Pools, use the command scli --query_all and search for available Storage Pools.
Configure the available Storage Pools by adding the following parameter:
sio_storage_pools = <Comma-separated list of protection domain:storage pool name>
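For example, assuming two Protection Domains named Domain1 and Domain2, each contributing one Storage Pool to Cinder (the names are illustrative):
sio_storage_pools = Domain1:Pool1,Domain2:Pool2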
Block Storage requires a VxFlex OS user with administrative privileges. Dell EMC recommends creating a dedicated OpenStack user account that has an administrative user role.
Refer to the VxFlex OS User Guide for details on user account management.
Configure the user credentials by adding the following parameters:
san_login = <SIO_USER>
san_password = <SIO_PASSWD>
Configure the oversubscription ratio by adding the following parameter under the separate section for VxFlex OS:
sio_max_over_subscription_ratio = <OVER_SUBSCRIPTION_RATIO>
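For example, to cap oversubscription at half of the supported maximum (5.0 is an illustrative value; any float up to 10.0 is accepted):
sio_max_over_subscription_ratio = 5.0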
Note
The default value for sio_max_over_subscription_ratio is 10.0.
Oversubscription is calculated correctly by the Block Storage service only if the extra specification provisioning:type appears in the volume type, regardless of the default provisioning type. The maximum oversubscription value supported for VxFlex OS is 10.0.
If provisioning type settings are not specified in the volume type, the default value is set according to the san_thin_provision option in the configuration file. The default provisioning type will be thin if the option is not specified in the configuration file. To set the default provisioning type to thick, set the san_thin_provision option to false in the configuration file, as follows:
san_thin_provision = false
The configuration file is usually located in /etc/cinder/cinder.conf.
For a configuration example, see the cinder.conf example file below.
cinder.conf example file
You can update the cinder.conf file by editing the necessary parameters as follows:
[DEFAULT]
enabled_backends = scaleio
[scaleio]
volume_driver = cinder.volume.drivers.dell_emc.scaleio.driver.ScaleIODriver
volume_backend_name = scaleio
san_ip = GATEWAY_IP
sio_storage_pools = Domain1:Pool1,Domain2:Pool2
san_login = SIO_USER
san_password = SIO_PASSWD
san_thin_provision = false
Before using attach/detach volume operations, the VxFlex OS connector must be properly configured. On each node where the VxFlex OS SDC is installed, do the following:
Create /opt/emc/scaleio/openstack/connector.conf if it does not exist.
$ mkdir -p /opt/emc/scaleio/openstack
$ touch /opt/emc/scaleio/openstack/connector.conf
For each VxFlex OS back-end section in cinder.conf, create the same section in /opt/emc/scaleio/openstack/connector.conf and populate it with passwords. Example:
[vxflexos]
san_password = SIO_PASSWD
[vxflexos-new]
san_password = SIO2_PASSWD
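For the single back end defined in the example cinder.conf above (section [scaleio]), a minimal connector.conf would therefore contain just that section and its password:
[scaleio]
san_password = SIO_PASSWD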
The VxFlex OS driver supports these configuration options:
Configuration option = Default value | Description
---|---
sio_allow_non_padded_volumes = False | (Boolean) Allow volumes to be created in Storage Pools when zero padding is disabled. This option should not be enabled if multiple tenants will utilize volumes from a shared Storage Pool.
sio_max_over_subscription_ratio = 10.0 | (Float) max_over_subscription_ratio setting for the driver. Maximum value allowed is 10.0.
sio_rest_server_port = 443 | (String) Gateway REST server port.
sio_round_volume_capacity = True | (Boolean) Round volume sizes up to 8 GB boundaries. VxFlex OS/ScaleIO requires volumes to be sized in multiples of 8 GB. If set to False, volume creation will fail for volumes not sized properly.
sio_server_api_version = None | (String) VxFlex OS/ScaleIO API version. This value should be left as the default unless otherwise instructed by technical support.
sio_server_certificate_path = None | (String) Server certificate path.
sio_storage_pools = None | (String) Storage Pools. Comma-separated list of storage pools used to provide volumes. Each pool should be specified as a protection_domain_name:storage_pool_name value.
sio_unmap_volume_before_deletion = False | (Boolean) Unmap volumes before deletion.
sio_verify_server_certificate = False | (Boolean) Verify server certificate.
sio_protection_domain_id = None | (String) DEPRECATED: Protection Domain ID.
sio_protection_domain_name = None | (String) DEPRECATED: Protection Domain name.
sio_storage_pool_id = None | (String) DEPRECATED: Storage Pool ID.
sio_storage_pool_name = None | (String) DEPRECATED: Storage Pool name.
Volume types can be used to specify characteristics of volumes allocated via the VxFlex OS driver. These characteristics are defined as Extra Specs within Volume Types.
When multiple storage pools are specified in the Cinder configuration,
users can specify which pool should be utilized by adding the pool
Extra Spec to the volume type extra-specs and setting the value to the
requested protection_domain:storage_pool.
$ openstack volume type create sio_type_1
$ openstack volume type set --property volume_backend_name=scaleio sio_type_1
$ openstack volume type set --property pool=Domain2:Pool2 sio_type_1
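Volumes created with this type are then placed in the requested pool, for example (the volume name and size are illustrative):
$ openstack volume create --type sio_type_1 --size 8 sio_vol_1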
The Block Storage driver supports creation of thin-provisioned and thick-provisioned volumes. The provisioning type settings can be added as an extra specification of the volume type, as follows:
$ openstack volume type create sio_type_thick
$ openstack volume type set --property provisioning:type=thick sio_type_thick
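A thin-provisioned type can be defined in the same way by setting provisioning:type to thin (the type name is illustrative):
$ openstack volume type create sio_type_thin
$ openstack volume type set --property provisioning:type=thin sio_type_thin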
QoS support for the VxFlex OS driver includes the ability to set the following capabilities:
maxIOPS
maxIOPSperGB
maxBWS
maxBWSperGB
The QoS keys above must be created and associated with a volume type. For example:
$ openstack volume qos create qos-limit-iops --consumer back-end --property maxIOPS=5000
$ openstack volume type create sio_limit_iops
$ openstack volume qos associate qos-limit-iops sio_limit_iops
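A per-gigabyte bandwidth limit can be configured in the same way using the maxBWSperGB key (the value shown is illustrative):
$ openstack volume qos create qos-limit-bws --consumer back-end --property maxBWSperGB=100
$ openstack volume type create sio_limit_bws
$ openstack volume qos associate qos-limit-bws sio_limit_bws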
The driver always chooses the minimum between the QoS key value and the relevant calculated value of maxIOPSperGB or maxBWSperGB.
Since the limits are per SDC, they will be applied after the volume is attached to an instance, and thus to a compute node/SDC.
When using a containerized overcloud, such as one deployed via TripleO or Red Hat OpenStack Platform version 12 and above, there is an additional step that must be performed.
After ensuring that the Storage Data Client (SDC) is installed on all nodes, and before deploying the overcloud, modify the TripleO Heat templates for the nova-compute and cinder-volume containers to add volume mappings for the directories containing the SDC components. These files can normally be found at /usr/share/openstack-tripleo-heat-templates/docker/services/nova-compute.yaml and /usr/share/openstack-tripleo-heat-templates/docker/services/cinder-volume.yaml.
Two lines need to be inserted into the list of mapped volumes in each container.
/opt/emc/scaleio:/opt/emc/scaleio
/bin/emc/scaleio:/bin/emc/scaleio
The changes to the two Heat templates are identical. As an example, the original nova-compute file should have a section that resembles the following:
...
docker_config:
  step_4:
    nova_compute:
      image: &nova_compute_image {get_param: DockerNovaComputeImage}
      ipc: host
      net: host
      privileged: true
      user: nova
      restart: always
      volumes:
        list_concat:
          - {get_attr: [ContainersCommon, volumes]}
          -
            - /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro
            - /var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro
            - /etc/ceph:/var/lib/kolla/config_files/src-ceph:ro
            - /dev:/dev
            - /lib/modules:/lib/modules:ro
            - /etc/iscsi:/etc/iscsi
            - /run:/run
            - /var/lib/nova:/var/lib/nova:shared
            - /var/lib/libvirt:/var/lib/libvirt
            - /var/log/containers/nova:/var/log/nova
            - /sys/class/net:/sys/class/net
            - /sys/bus/pci:/sys/bus/pci
      environment:
        - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS
...
After modifying the nova-compute file, the section should resemble:
...
docker_config:
  step_4:
    nova_compute:
      image: &nova_compute_image {get_param: DockerNovaComputeImage}
      ipc: host
      net: host
      privileged: true
      user: nova
      restart: always
      volumes:
        list_concat:
          - {get_attr: [ContainersCommon, volumes]}
          -
            - /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro
            - /var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro
            - /etc/ceph:/var/lib/kolla/config_files/src-ceph:ro
            - /dev:/dev
            - /lib/modules:/lib/modules:ro
            - /etc/iscsi:/etc/iscsi
            - /run:/run
            - /var/lib/nova:/var/lib/nova:shared
            - /var/lib/libvirt:/var/lib/libvirt
            - /var/log/containers/nova:/var/log/nova
            - /sys/class/net:/sys/class/net
            - /sys/bus/pci:/sys/bus/pci
            - /opt/emc/scaleio:/opt/emc/scaleio
            - /bin/emc/scaleio:/bin/emc/scaleio
      environment:
        - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS
...
Once the nova-compute file is modified, make an identical change to the cinder-volume file.
Once the above changes have been made, deploy the overcloud as usual.