The NetApp unified driver is a Block Storage driver that supports multiple storage families and protocols. A storage family corresponds to storage systems built on either clustered Data ONTAP or E-Series. The storage protocol refers to the protocol, such as iSCSI or NFS, used to initiate data storage and access operations on those storage systems. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family using a specified storage protocol.
The NetApp unified driver also supports oversubscription (overprovisioning) when thin-provisioned Block Storage volumes are in use. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and protocols.
Note
With the Juno release of OpenStack, Block Storage has introduced the concept of storage pools, in which a single Block Storage back end may present one or more logical storage resource pools from which Block Storage will select a storage location when provisioning volumes.
In releases prior to Juno, the NetApp unified driver contained some scheduling logic that determined which NetApp storage container (namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for E-Series) a new Block Storage volume would be placed into.
With the introduction of pools, all scheduling logic is performed completely within the Block Storage scheduler, as each NetApp storage container is directly exposed to the Block Storage scheduler as a storage pool. Previously, the NetApp unified driver presented an aggregated view to the scheduler and made a final placement decision as to which NetApp storage container the Block Storage volume would be provisioned into.
The NetApp clustered Data ONTAP storage family represents a configuration group which provides Compute instances access to clustered Data ONTAP storage systems. At present it can be configured in Block Storage to work with iSCSI and NFS storage protocols.
The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems. It provisions and manages the SAN block storage entity, which is a NetApp LUN that can be accessed using the iSCSI protocol.
The iSCSI configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
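The options above may be set in the [DEFAULT] stanza, but deployments commonly define named back ends instead. A minimal sketch of a named back-end stanza, assuming the back-end name ontap-iscsi (the stanza and back-end names are illustrative):
[DEFAULT]
enabled_backends = ontap-iscsi

[ontap-iscsi]
volume_backend_name = ontap-iscsi
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_login = username
netapp_password = password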
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.
Configuration option = Default value | Description
---|---
[DEFAULT] |
netapp_login = None | (String) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None | (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_lun_space_reservation = enabled | (String) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand.
netapp_password = None | (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) | (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None | (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,…
netapp_server_hostname = None | (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None | (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 | (Floating point) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release.
netapp_snapmirror_quiesce_timeout = 3600 | (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster | (String) The storage family type used on the storage system; valid values are ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None | (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http | (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None | (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
Note
If you specify an account in the netapp_login option that has only virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.
Note
The driver supports uni-directional iSCSI CHAP authentication. To enable it, set the use_chap_auth option to True, as shown in the example below.
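A minimal sketch of enabling CHAP in the driver's back-end stanza (the stanza name ontap-iscsi is illustrative):
[ontap-iscsi]
use_chap_auth = True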
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack website.
The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system for provisioning and managing OpenStack volumes on NFS exports provided by the clustered Data ONTAP system that are accessed using the NFS protocol.
The NFS configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
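The file referenced by nfs_shares_config lists the NFS exports to be used as Block Storage pools, one per line in host:/export form. A minimal sketch of /etc/cinder/nfs_shares, with illustrative addresses and export paths:
192.168.1.100:/vol_cinder_1
192.168.1.100:/vol_cinder_2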
Configuration option = Default value | Description
---|---
[DEFAULT] |
expiry_thres_minutes = 720 | (Integer) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_copyoffload_tool_path = None | (String) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file.
netapp_host_type = None | (String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None | (String) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None | (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_password = None | (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) | (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None | (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,…
netapp_server_hostname = None | (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None | (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_snapmirror_quiesce_timeout = 3600 | (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster | (String) The storage family type used on the storage system; valid values are ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None | (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http | (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None | (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
thres_avl_size_perc_start = 20 | (Integer) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 | (Integer) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: Description of NFS storage configuration options.
Note
If you specify an account in the netapp_login option that has only virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.
A feature was added in the Icehouse release of the NetApp unified driver that enables Image service images to be efficiently copied to a destination Block Storage volume. When the Block Storage and Image service are configured to use the NetApp NFS Copy Offload client, a controller-side copy will be attempted before reverting to downloading the image from the Image service. This improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the host(s) running the Image and Block Storage services. This is due to the copy operation being performed completely within the storage cluster.
The NetApp NFS Copy Offload client can be used in either of the following scenarios:
- The Image service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume, and the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image service. Both FlexVol volumes must be located within the same cluster.
- The source image from the Image service has already been cached in an NFS image cache within a Block Storage back end. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVol volumes must be located within the same cluster.
To use this feature, you must configure the Image service, as follows (see the sketch after this list):
- Set the default_store configuration option to file.
- Set the filesystem_store_datadir configuration option to the path to the Image service NFS export.
- Set the show_image_direct_url configuration option to True.
- Set the show_multiple_locations configuration option to True.
- Set the filesystem_store_metadata_file configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image service.
To use this feature, you must also configure the Block Storage service: set the netapp_copyoffload_tool_path configuration option to the path to the NetApp copy offload tool binary, as in the sketch below.
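A minimal sketch, assuming the binary has been placed at /usr/local/bin/na_copyoffload_64 (the path and file name are illustrative) within an NFS back-end stanza named ontap-nfs:
[ontap-nfs]
netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64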
Important
This feature requires that:
- The storage system has Data ONTAP v8.2 or greater installed.
- The vStorage feature is enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client.
- NFS v4.0 or greater is enabled and exported from the SVM.
Tip
To download the NetApp copy offload binary to be utilized in conjunction with the netapp_copyoffload_tool_path configuration option, visit the Utility Toolchest page at the NetApp Support portal (login is required).
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack website.
Extra specs enable vendors to specify extra filter criteria. The Block Storage scheduler uses the specs when it determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with a clustered Data ONTAP storage system, you can leverage extra specs with Block Storage volume types to ensure that Block Storage volumes are created on storage back ends that have certain properties, for example when you configure QoS, mirroring, or compression for a storage back end.
Extra specs are associated with Block Storage volume types. When users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements, for example back ends that have sufficient available space or matching extra specs. Use the specs in the following table to configure volumes. Define Block Storage volume types by using the openstack volume type set command; see the example after the table.
Extra spec | Type | Description
---|---|---
netapp_raid_type | String | Limit the candidate volume list based on one of the following raid types: raid4, raid_dp.
netapp_disk_type | String | Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD.
netapp:qos_policy_group [1] | String | Specify the name of a QoS policy group, which defines measurable Service Level Objectives, that should be applied to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object within Data ONTAP is defined before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume.
netapp_mirrored | Boolean | Limit the candidate volume list to only the ones that are mirrored on the storage controller.
netapp_unmirrored [2] | Boolean | Limit the candidate volume list to only the ones that are not mirrored on the storage controller.
netapp_dedup | Boolean | Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller.
netapp_nodedup | Boolean | Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller.
netapp_compression | Boolean | Limit the candidate volume list to only the ones that have compression enabled on the storage controller.
netapp_nocompression | Boolean | Limit the candidate volume list to only the ones that have compression disabled on the storage controller.
netapp_thin_provisioned | Boolean | Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
netapp_thick_provisioned | Boolean | Limit the candidate volume list to only the ones that support thick provisioning on the storage controller.
[1] This extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.
[2] In the Juno release, these negative-assertion extra specs were formally deprecated by the NetApp unified driver. Instead of using a deprecated negative-assertion extra spec (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false.
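For example, a sketch of defining a volume type that requires mirrored back ends and applies a QoS policy group (the type name netapp-gold and policy group name gold-qos are illustrative):
$ openstack volume type create netapp-gold
$ openstack volume type set netapp-gold \
    --property netapp_mirrored=true \
    --property netapp:qos_policy_group=gold-qos
$ openstack volume create --type netapp-gold --size 10 gold-volume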
The NetApp E-Series storage family represents a configuration group which provides OpenStack Compute instances access to E-Series storage systems. At present it can be configured in Block Storage to work with the iSCSI storage protocol.
The NetApp iSCSI configuration for E-Series is an interface from OpenStack to E-Series storage systems. It provisions and manages the SAN block storage entity, which is a NetApp LUN that can be accessed using the iSCSI protocol.
The iSCSI configuration for E-Series is an interface from Block Storage to the E-Series proxy instance and as such requires the deployment of the proxy instance in order to achieve the desired functionality. The driver uses REST APIs to interact with the E-Series proxy instance, which in turn interacts directly with the E-Series controllers.
The use of multipath and DM-MP is required when using the Block Storage driver for E-Series. In order for Block Storage and OpenStack Compute to take advantage of multiple paths, the following configuration options must be correctly configured (see the sketch after this list):
- The use_multipath_for_image_xfer option should be set to True in the cinder.conf file within the driver-specific stanza (for example, [myDriver]).
- The volume_use_multipath option should be set to True in the nova.conf file within the [libvirt] stanza. In versions prior to Newton, the option was called iscsi_use_multipath.
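A minimal sketch of these settings, reusing the illustrative stanza name [myDriver] from above:
# cinder.conf
[myDriver]
use_multipath_for_image_xfer = True

# nova.conf
[libvirt]
volume_use_multipath = True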
.Configuration options
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, E-Series, and iSCSI respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
netapp_controller_ips = 1.2.3.4,5.6.7.8
netapp_sa_password = arrayPassword
netapp_storage_pools = pool1,pool2
use_multipath_for_image_xfer = True
Note
To use the E-Series driver, you must override the default value of netapp_storage_family with eseries. To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.
Configuration option = Default value | Description
---|---
[DEFAULT] |
netapp_controller_ips = None | (String) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning.
netapp_enable_multiattach = False | (Boolean) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host.
netapp_host_type = None | (String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None | (String) Administrative user account name used to access the storage system or proxy server.
netapp_password = None | (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) | (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None | (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,…
netapp_sa_password = None | (String) Password for the NetApp E-Series storage array.
netapp_server_hostname = None | (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None | (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_snapmirror_quiesce_timeout = 3600 | (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster | (String) The storage family type used on the storage system; valid values are ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_transport_type = http | (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_webservice_path = /devmgr/v2 | (String) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack website.
Extra specs enable vendors to specify extra filter criteria. The Block Storage scheduler uses the specs when it determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with an E-Series storage system, you can leverage extra specs with Block Storage volume types to ensure that Block Storage volumes are created on storage back ends that have certain properties, for example when you configure thin provisioning for a storage back end.
Extra specs are associated with Block Storage volume types. When users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements, for example back ends that have sufficient available space or matching extra specs. Use the specs in the following table to configure volumes. Define Block Storage volume types by using the openstack volume type set command; see the example after the table.
Extra spec | Type | Description
---|---|---
netapp_thin_provisioned | Boolean | Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
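For example, a sketch of a volume type that restricts provisioning to thin-provisioned E-Series pools (the type name eseries-thin is illustrative):
$ openstack volume type create eseries-thin
$ openstack volume type set eseries-thin --property netapp_thin_provisioned=true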