The Unity driver has been integrated into the OpenStack Block Storage project since the Ocata release. The driver is built on top of the Block Storage framework and a Dell EMC distributed Python package, storops.
Software | Version |
---|---|
Unity OE | 4.1.X or newer |
storops | 0.5.7 or newer |
Note
The following instructions should all be performed on Block Storage nodes.
Install storops from pypi:
# pip install storops
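To confirm that the installed storops meets the minimum version listed in the table above, you can, for example, query pip:
# pip show storops | grep Version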
Add the following content into /etc/cinder/cinder.conf:
[DEFAULT]
enabled_backends = unity
[unity]
# Storage protocol
storage_protocol = iSCSI
# Unisphere IP
san_ip = <SAN IP>
# Unisphere username and password
san_login = <SAN LOGIN>
san_password = <SAN PASSWORD>
# Volume driver name
volume_driver = cinder.volume.drivers.dell_emc.unity.Driver
# backend's name
volume_backend_name = Storage_ISCSI_01
Note
These are the minimal options for the Unity driver. For more options, see Driver options.
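For example, a Fibre Channel backend uses the same options with a different protocol value; a minimal sketch (the backend name below is arbitrary):
[unity]
storage_protocol = FC
san_ip = <SAN IP>
san_login = <SAN LOGIN>
san_password = <SAN PASSWORD>
volume_driver = cinder.volume.drivers.dell_emc.unity.Driver
volume_backend_name = Storage_FC_01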
Note
(Optional) If you require multipath-based data access, perform the following steps on both Block Storage and Compute nodes.
Install sysfsutils, sg3-utils, and multipath-tools:
# apt-get install multipath-tools sg3-utils sysfsutils
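On RPM-based distributions, the equivalent packages can be installed with, for example (package names may vary by distribution):
# yum install device-mapper-multipath sg3_utils sysfsutils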
(Required for the FC driver when Auto-zoning Support is disabled) Zone the FC ports of the Compute nodes with the Unity FC target ports.
Enable Unity storage optimized multipath configuration:
Add the following content into /etc/multipath.conf:
blacklist {
# Skip the files under /dev that are definitely not FC/iSCSI devices
# Different systems may need different customizations
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z][0-9]*"
devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
# Skip LUNZ device from VNX/Unity
device {
vendor "DGC"
product "LUNZ"
}
}
defaults {
user_friendly_names no
flush_on_last_del yes
}
devices {
# Device attributes for EMC CLARiiON and VNX/Unity series ALUA
device {
vendor "DGC"
product ".*"
product_blacklist "LUNZ"
path_grouping_policy group_by_prio
path_selector "round-robin 0"
path_checker emc_clariion
features "0"
no_path_retry 12
hardware_handler "1 alua"
prio alua
failback immediate
}
}
Restart the multipath service:
# service multipath-tools restart
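To verify the configuration, you can, for example, list the multipath topology; Unity LUNs should appear with the DGC vendor string and ALUA-prioritized path groups:
# multipath -ll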
Enable multipath for image transfer in /etc/cinder/cinder.conf:
use_multipath_for_image_xfer = True
Restart the cinder-volume service to load the change.
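For example, on a systemd-based system (the service unit name may differ by distribution):
# systemctl restart cinder-volume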
Enable multipath for volume attach/detach in /etc/nova/nova.conf:
[libvirt]
...
volume_use_multipath = True
...
Restart the nova-compute service.
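For example, on a systemd-based system (the unit name may differ by distribution):
# systemctl restart nova-compute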
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
unity_io_ports = None | (List) A comma-separated list of iSCSI or FC ports to be used. Each port can be a Unix-style glob expression. |
unity_storage_pool_names = None | (List) A comma-separated list of storage pool names to be used. |
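For example, to restrict the driver to specific pools, add the option to the backend section (the pool names below are hypothetical):
[unity]
...
unity_storage_pool_names = pool_1, pool_2
...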
Specify the list of FC or iSCSI ports to be used to perform the IO. Wildcard characters are supported. For iSCSI ports, use the following format:
unity_io_ports = spa_eth2, spb_eth2, *_eth3
For FC ports, use the following format:
unity_io_ports = spa_iom_0_fc0, spb_iom_0_fc0, *_iom_0_fc1
List the port IDs with the uemcli command:
$ uemcli /net/port/eth show -output csv
...
"spa_eth2","SP A Ethernet Port 2","spa","file, net, iscsi", ...
"spb_eth2","SP B Ethernet Port 2","spb","file, net, iscsi", ...
...
$ uemcli /net/port/fc show -output csv
...
"spa_iom_0_fc0","SP A I/O Module 0 FC Port 0","spa", ...
"spb_iom_0_fc0","SP B I/O Module 0 FC Port 0","spb", ...
...
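For example, to use only the two dedicated Ethernet ports from the sample output above, the backend section could contain:
[unity]
...
unity_io_ports = spa_eth2, spb_eth2
...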
It is suggested to configure multipath on Compute nodes for robust data access in VM instance live migration scenarios. Once user_friendly_names no is set in the defaults section of /etc/multipath.conf, Compute nodes will use the WWID as the alias for the multipath devices.
To enable multipath in live migration:
Note
Make sure the Driver configuration steps are performed before the following steps.
Set multipath in /etc/nova/nova.conf
:
[libvirt]
...
volume_use_multipath = True
...
Restart the nova-compute service.
Set user_friendly_names no in /etc/multipath.conf:
...
defaults {
user_friendly_names no
}
...
Restart the multipath-tools service.
Only thin volume provisioning is supported in the Unity volume driver.
The Unity driver supports maxBWS and maxIOPS specs for the back-end consumer type. maxBWS represents the Maximum Bandwidth (KBPS) absolute limit, and maxIOPS represents the Maximum IO/S absolute limit on the Unity.
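As a sketch of how these specs are typically consumed, a QoS spec can be created with the cinder client and associated with a volume type (the spec name and values below are examples only):
$ cinder qos-create unity_qos consumer=back-end maxIOPS=1000 maxBWS=10240
$ cinder qos-associate <qos_spec_id> <volume_type_id>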
The Unity volume driver supports auto-zoning and shares the same configuration guide as other vendors. Refer to Fibre Channel Zone Manager for detailed configuration steps.
The EMC host team also found LUNZ on all of the hosts. The EMC best practice is to present a LUN with HLU 0 to clear any LUNZ devices, as they can cause issues on the host. See KB LUNZ Device.
To work around this issue, the Unity driver creates a Dummy LUN (if not present) and adds it to each host to occupy HLU 0 during volume attachment.
Note
This Dummy LUN is shared among all hosts connected to the Unity.
The default implementation in Block Storage for non-disruptive volume backup is not efficient, since a cloned volume will be created during the backup. A more efficient approach is to create a snapshot of the volume and connect this snapshot to the Block Storage host for the volume backup.
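For example, assuming a snapshot of the volume already exists, a backup can be created from it with the cinder client (the IDs below are placeholders):
$ cinder backup-create --snapshot-id <snapshot id> --name unity_vol1_backup <volume id>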
The administrator can enable SSL verification for any communication against the Unity REST API. By default, SSL verification is disabled. Enable it with the following steps:
[unity]
...
driver_ssl_cert_verify = True
driver_ssl_cert_path = <path to the CA>
...
If driver_ssl_cert_path is omitted, the system default CA will be used for CA verification.
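Before enabling verification, you can, for example, inspect the certificate chain presented by Unisphere to confirm it matches the configured CA:
$ openssl s_client -connect <Unisphere IP>:443 -showcerts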
This driver can support IPv6-based control path and data path.
For the control path, set san_ip in /etc/cinder/cinder.conf to the Unisphere IPv6 address. For example, san_ip = [fd99:f17b:37d0::100].
Note: IPv6 support on the control path depends on the fix for cpython bug 32185. Make sure your Python version includes this fix.
For the data path, make sure you can ping the Unity's iSCSI IPv6 address from the Cinder node.
The user can use the os-force_detach action to detach a volume from all its attached hosts. For more detail, refer to https://developer.openstack.org/api-ref/block-storage/v2/?expanded=force-detach-volume-detail#force-detach-volume
To troubleshoot a failure in an OpenStack deployment, the best way is to enable the verbose and debug logs and, at the same time, leverage the built-in Return request ID to caller facility to track the logs of a specific Block Storage command.
To enable the verbose log, set the following in /etc/cinder/cinder.conf and restart all Block Storage services:
[DEFAULT]
...
debug = True
verbose = True
...
If other projects (usually Compute) are also involved, set debug and verbose to True.
Use --debug to trigger any problematic Block Storage operation:
# cinder --debug create --name unity_vol1 100
You will see the request ID from the console, for example:
DEBUG:keystoneauth:REQ: curl -g -i -X POST
http://192.168.1.9:8776/v2/e50d22bdb5a34078a8bfe7be89324078/volumes -H
"User-Agent: python-cinderclient" -H "Content-Type: application/json" -H
"Accept: application/json" -H "X-Auth-Token:
{SHA1}bf4a85ad64302b67a39ad7c6f695a9630f39ab0e" -d '{"volume": {"status":
"creating", "user_id": null, "name": "unity_vol1", "imageRef": null,
"availability_zone": null, "description": null, "multiattach": false,
"attach_status": "detached", "volume_type": null, "metadata": {},
"consistencygroup_id": null, "source_volid": null, "snapshot_id": null,
"project_id": null, "source_replica": null, "size": 10}}'
DEBUG:keystoneauth:RESP: [202] X-Compute-Request-Id:
req-3a459e0e-871a-49f9-9796-b63cc48b5015 Content-Type: application/json
Content-Length: 804 X-Openstack-Request-Id:
req-3a459e0e-871a-49f9-9796-b63cc48b5015 Date: Mon, 12 Dec 2016 09:31:44 GMT
Connection: keep-alive
Use commands like grep and awk to find the errors related to the Block Storage operations:
# grep "req-3a459e0e-871a-49f9-9796-b63cc48b5015" cinder-volume.log