CoprHD is an open source software-defined storage controller and API platform. It enables policy-based management and cloud automation of storage resources for block, object, and file storage providers. For more details, see the CoprHD website.
EMC ViPR Controller is the commercial offering of CoprHD. These same volume drivers can therefore also be used as EMC ViPR Controller Block Storage drivers.
CoprHD version 3.0 is required. Refer to the CoprHD documentation for installation and configuration instructions.
If you are using these drivers to integrate with EMC ViPR Controller, use EMC ViPR Controller 3.0.
The following operations are supported:
- Create, delete, attach, detach, retype, clone, and extend volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a volume snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Create, delete, and update consistency groups.
- Create and delete consistency group snapshots.
The following table contains the configuration options specific to the CoprHD volume driver.
Configuration option = Default value | Description
---|---
coprhd_emulate_snapshot = False | (Boolean) True/False to indicate if the storage array in CoprHD is VMAX or VPLEX
coprhd_hostname = None | (String) Hostname for the CoprHD Instance
coprhd_password = None | (String) Password for accessing the CoprHD Instance
coprhd_port = 4443 | (Port(min=0, max=65535)) Port for the CoprHD Instance
coprhd_project = None | (String) Project to utilize within the CoprHD Instance
coprhd_scaleio_rest_gateway_host = None | (String) REST Gateway IP or FQDN for ScaleIO
coprhd_scaleio_rest_gateway_port = 4984 | (Port(min=0, max=65535)) REST Gateway Port for ScaleIO
coprhd_scaleio_rest_server_password = None | (String) REST Gateway Password
coprhd_scaleio_rest_server_username = None | (String) Username for the REST Gateway
coprhd_tenant = None | (String) Tenant to utilize within the CoprHD Instance
coprhd_username = None | (String) Username for accessing the CoprHD Instance
coprhd_varray = None | (String) Virtual Array to utilize within the CoprHD Instance
scaleio_server_certificate_path = None | (String) Server certificate path
scaleio_verify_server_certificate = False | (Boolean) Verify server certificate
Configuring the driver involves setting up the CoprHD environment first and then configuring the CoprHD Block Storage driver.
The CoprHD environment must meet specific configuration requirements to support the OpenStack Block Storage driver.
Note
Use each back end to manage one virtual array and one virtual storage pool. However, you can run multiple instances of the CoprHD Block Storage driver that share the same virtual array and virtual storage pool, as sketched below.
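A minimal illustration of that layout, assuming two hypothetical back-end names (coprhd-iscsi-1 and coprhd-iscsi-2) that point at the same virtual array; all values are placeholders, and each stanza would also carry the remaining coprhd_* options shown in the sections below:
[coprhd-iscsi-1]
volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
volume_backend_name = coprhd-iscsi-1
coprhd_varray = <Shared-Virtual-Array-Name>
[coprhd-iscsi-2]
volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
volume_backend_name = coprhd-iscsi-2
coprhd_varray = <Shared-Virtual-Array-Name>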
Single back end
cinder.conf
Modify /etc/cinder/cinder.conf by adding the following lines, substituting values for your environment:
[coprhd-iscsi]
volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
volume_backend_name = coprhd-iscsi
coprhd_hostname = <CoprHD-Host-Name>
coprhd_port = 4443
coprhd_username = <username>
coprhd_password = <password>
coprhd_tenant = <CoprHD-Tenant-Name>
coprhd_project = <CoprHD-Project-Name>
coprhd_varray = <CoprHD-Virtual-Array-Name>
coprhd_emulate_snapshot = True or False, True if the CoprHD vpool has VMAX or VPLEX as the backing storage
If you use the ScaleIO back end, add the following lines:
coprhd_scaleio_rest_gateway_host = <IP or FQDN>
coprhd_scaleio_rest_gateway_port = 443
coprhd_scaleio_rest_server_username = <username>
coprhd_scaleio_rest_server_password = <password>
scaleio_verify_server_certificate = True or False
scaleio_server_certificate_path = <path-of-certificate-for-validation>
Specify the driver using the enabled_backends parameter:
enabled_backends = coprhd-iscsi
Note
To utilize the Fibre Channel driver, replace the volume_driver line above with:
volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver
Note
To utilize the ScaleIO driver, replace the volume_driver line above with:
volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver
Note
Set coprhd_emulate_snapshot to True if the CoprHD vpool has VMAX or VPLEX as the back-end storage. For these types of back-end storage, when a user tries to create a snapshot, an actual volume is created in the back end.
Modify the rpc_response_timeout value in /etc/cinder/cinder.conf to at least 5 minutes (300 seconds). If this entry does not already exist within the cinder.conf file, add it in the [DEFAULT] section:
[DEFAULT]
# ...
rpc_response_timeout = 300
Now, restart the cinder-volume service.
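How you restart the service depends on your deployment; the service name below is an assumption and may differ on your distribution (it is often openstack-cinder-volume on RPM-based systems and cinder-volume on Debian-based systems). For example, on a systemd-based host:
$ sudo systemctl restart openstack-cinder-volume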
Volume type creation and extra specs
Create OpenStack volume types:
$ openstack volume type create <typename>
Map the OpenStack volume type to the CoprHD virtual pool:
$ openstack volume type set <typename> --property CoprHD:VPOOL=<CoprHD-PoolName>
Map the created volume type to the appropriate back-end driver:
$ openstack volume type set <typename> --property volume_backend_name=<VOLUME_BACKEND_DRIVER>
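To confirm that both properties were applied, you can inspect the volume type; the type name here is a placeholder:
$ openstack volume type show <typename>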
Multiple back ends
cinder.conf
Add or modify the following entries if you plan to use multiple back-end drivers:
enabled_backends = coprhddriver-iscsi,coprhddriver-fc,coprhddriver-scaleio
Add the following at the end of the file:
[coprhddriver-iscsi]
volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
volume_backend_name = EMCCoprHDISCSIDriver
coprhd_hostname = <CoprHD Host Name>
coprhd_port = 4443
coprhd_username = <username>
coprhd_password = <password>
coprhd_tenant = <CoprHD-Tenant-Name>
coprhd_project = <CoprHD-Project-Name>
coprhd_varray = <CoprHD-Virtual-Array-Name>
[coprhddriver-fc]
volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver
volume_backend_name = EMCCoprHDFCDriver
coprhd_hostname = <CoprHD Host Name>
coprhd_port = 4443
coprhd_username = <username>
coprhd_password = <password>
coprhd_tenant = <CoprHD-Tenant-Name>
coprhd_project = <CoprHD-Project-Name>
coprhd_varray = <CoprHD-Virtual-Array-Name>
[coprhddriver-scaleio]
volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver
volume_backend_name = EMCCoprHDScaleIODriver
coprhd_hostname = <CoprHD Host Name>
coprhd_port = 4443
coprhd_username = <username>
coprhd_password = <password>
coprhd_tenant = <CoprHD-Tenant-Name>
coprhd_project = <CoprHD-Project-Name>
coprhd_varray = <CoprHD-Virtual-Array-Name>
coprhd_scaleio_rest_gateway_host = <ScaleIO Rest Gateway>
coprhd_scaleio_rest_gateway_port = 443
coprhd_scaleio_rest_server_username = <rest gateway username>
coprhd_scaleio_rest_server_password = <rest gateway password>
scaleio_verify_server_certificate = True or False
scaleio_server_certificate_path = <certificate path>
Restart the cinder-volume service.
Volume type creation and extra specs
Set up the volume types and the volume-type to volume-backend association:
$ openstack volume type create "CoprHD High Performance ISCSI"
$ openstack volume type set "CoprHD High Performance ISCSI" --property CoprHD:VPOOL="High Performance ISCSI"
$ openstack volume type set "CoprHD High Performance ISCSI" --property volume_backend_name= EMCCoprHDISCSIDriver
$ openstack volume type create "CoprHD High Performance FC"
$ openstack volume type set "CoprHD High Performance FC" --property CoprHD:VPOOL="High Performance FC"
$ openstack volume type set "CoprHD High Performance FC" --property volume_backend_name= EMCCoprHDFCDriver
$ openstack volume type create "CoprHD performance SIO"
$ openstack volume type set "CoprHD performance SIO" --property CoprHD:VPOOL="Scaled Perf"
$ openstack volume type set "CoprHD performance SIO" --property volume_backend_name= EMCCoprHDScaleIODriver
Install the ScaleIO SDC on the compute host.
The compute host must be added as an SDC to the ScaleIO MDM using the following command:
/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip <list of MDM IPs, starting with the primary MDM and separated by commas>
Example:
/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.247.78.45,10.247.78.46,10.247.78.47
This step has to be repeated whenever the SDC (the compute host in this case) is rebooted; one way to automate it is sketched below.
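A minimal sketch of automating the re-registration with a systemd unit, assuming a systemd-based host; the unit name, file location, and MDM IPs are illustrative assumptions to adjust for your environment:
# /etc/systemd/system/scaleio-sdc-mdm.service (hypothetical unit file)
[Unit]
Description=Re-register the ScaleIO SDC with the MDMs after boot
After=network-online.target

[Service]
Type=oneshot
ExecStart=/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.247.78.45,10.247.78.46,10.247.78.47

[Install]
WantedBy=multi-user.target
Enable the unit with systemctl enable scaleio-sdc-mdm.service so it runs on every boot.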
To enable support for consistency group and consistency group snapshot operations, use a text editor to edit the file /etc/cinder/policy.json and change the values of the fields below as specified. After editing the file, restart the c-api service:
"consistencygroup:create" : "",
"consistencygroup:delete": "",
"consistencygroup:get": "",
"consistencygroup:get_all": "",
"consistencygroup:update": "",
"consistencygroup:create_cgsnapshot" : "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",
All resources, such as volumes, consistency groups, snapshots, and consistency group snapshots, use their OpenStack display names for naming in the back-end storage.