Configure an External NetApp Deployment as the Storage Backend¶
Configure an external NetApp deployment as the storage backend after system installation, using a StarlingX-provided Ansible playbook.
Note
It is not currently possible to set up NetApp in subclouds via orchestration. The Ansible playbook install_netapp_backend.yml must be executed manually on each host.
Prerequisites
StarlingX must be installed and fully deployed before performing this procedure.
Procedure
Configure the storage network.
Follow these steps to configure the storage network.
If you have not done so already, create an address pool for the storage network. This can be done at any time.
system addrpool-add --ranges <start_address>-<end_address> <name_of_address_pool> <network_address> <network_prefix>
For example:
(keystone_admin)$ system addrpool-add --ranges 10.10.20.1-10.10.20.100 storage-pool 10.10.20.0 24
If you have not done so already, create the storage network using the address pool.
For example:
(keystone_admin)$ system addrpool-list | grep storage-pool | awk '{print $2}' | xargs system network-add storage-net storage true

For each host in the system, do the following:
Lock the host.
(keystone_admin)$ system host-lock <hostname>
Configure a platform interface using the address pool.
For example:
(keystone_admin)$ system host-if-modify -n storage0 -c platform --ipv4-mode static --ipv4-pool storage-pool controller-0 enp0s9
Assign the interface to the network.
For example:
(keystone_admin)$ system interface-network-assign controller-0 storage0 storage-net
Unlock the host.
(keystone_admin)$ system host-unlock <hostname>
Configure NetApp's configurable parameters and run the provided install_netapp_backend.yml Ansible playbook to enable connectivity to NetApp as a storage backend for StarlingX.
Provide NetApp backend configurable parameters in an overrides yaml file.
You can make changes in place to your existing localhost.yml file or create another file in an alternative location. In either case, you also have the option of using an Ansible vault to secure/encrypt the localhost.yml file containing sensitive data, i.e., using the ansible-vault create $HOME/localhost.yml or ansible-vault edit $HOME/localhost.yml commands.
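The vault commands mentioned above can be run as follows; both are interactive and prompt for a vault password:

```shell
# Optionally encrypt the overrides file with Ansible Vault.
# Remember the vault password; the playbook is later run with --ask-vault-pass.
ansible-vault create $HOME/localhost.yml   # create a new encrypted overrides file
ansible-vault edit $HOME/localhost.yml     # edit an existing encrypted file
```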
The NetApp backend supports NetApp ONTAP NAS (NFS) and NetApp ONTAP SAN (iSCSI and Fibre Channel) configurations.
The following examples show minimal configuration options for ONTAP NAS and SAN in localhost.yml:
Note
This file is sectioned into netapp_k8s_storageclasses, netapp_k8s_snapshotstorageclasses, netapp_backends, and tbc_secret. You can add multiple backends and/or storage classes.

NetApp ONTAP NAS Configuration (NFS):
ansible_become_pass: <sysadmin password>
netapp_k8s_storageclasses:
  - metadata:
      name: netapp-nas
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: ontap-nas
netapp_k8s_snapshotstorageclasses:
  - metadata:
      name: netapp-snapshot
    driver: csi.trident.netapp.io
    deletionPolicy: Delete
netapp_backends:
  - metadata:
      name: nas-backend
    spec:
      version: 1
      storageDriverName: ontap-nas
      backendName: nas-backend
      managementLIF: "<management IP>"
      dataLIF: "<data IP>"
      svm: "<svm>"
      credentials:
        name: backend-tbc-secret
tbc_secret:
  - metadata:
      name: backend-tbc-secret
    type: Opaque
    stringData:
      username: "<netapp/svm user>"
      password: "<netapp/svm password>"

For more details about the options, see the documentation: https://docs.netapp.com/us-en/trident/trident-use/ontap-nas-examples.html
NetApp ONTAP SAN Configuration (iSCSI / FC):
Note
If an iSCSI backend is configured, the find_multipaths setting in /etc/multipath.conf will be automatically changed to no.

ansible_become_pass: <sysadmin password>
netapp_k8s_storageclasses:
  - metadata:
      name: netapp-san
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: ontap-san
netapp_k8s_snapshotstorageclasses:
  - metadata:
      name: netapp-snapshot
    driver: csi.trident.netapp.io
    deletionPolicy: Delete
netapp_backends:
  - metadata:
      name: san-backend
    spec:
      version: 1
      storageDriverName: ontap-san
      sanType: "<iscsi or fcp>"
      backendName: san-backend
      managementLIF: "<management IP>"
      dataLIF: "<data IP>"
      svm: "<svm>"
      credentials:
        name: backend-tbc-secret
tbc_secret:
  - metadata:
      name: backend-tbc-secret
    type: Opaque
    stringData:
      username: "<netapp/svm user>"
      password: "<netapp/svm password>"

If sanType is not provided, the iSCSI protocol will be used by default.

For more details about the options, see the documentation: https://docs.netapp.com/us-en/trident/trident-use/ontap-san-examples.html
The following parameters are optional:
trident_force_reinstall: Force a new installation if Trident is already installed. The default is false.
trident_setup_dir: Set a staging directory for generated configuration files. The default is /tmp/trident.
trident_clean_folder: Clear the staging directory of the generated configuration files. The default is true.
trident_namespace: Set this option to use an alternate Kubernetes namespace. The default is trident.
trident_install_extra_params: Add extra space-separated parameters when installing Trident.
If an option is not provided, the default value defined in the following file is used: https://opendev.org/starlingx/ansible-playbooks/src/branch/master/playbookconfig/src/playbooks/host_vars/netapp/default.yml
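Taken together, the optional parameters could be set in localhost.yml as follows; the values shown here are illustrative and simply restate the documented defaults:

```yaml
# Optional Trident installation parameters (all optional; values shown are the defaults).
trident_force_reinstall: false      # force a new installation if Trident is already installed
trident_setup_dir: /tmp/trident     # staging directory for generated configuration files
trident_clean_folder: true          # clear the staging directory of generated files
trident_namespace: trident          # alternate Kubernetes namespace for Trident
trident_install_extra_params: ""    # extra space-separated installer parameters
```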
Note
To use IPv6 addressing, you must add the following to your configuration:
trident_install_extra_params: "--use-ipv6"
Note
By default, NetApp is configured with 777 as unixPermissions. StarlingX recommends changing this setting to something more secure, for example "unixPermissions": "755". Ensure that the right permissions are used and that there is no conflict with container security. Do NOT use 777 as unixPermissions when configuring an external NetApp deployment as the storage backend. For more information, contact NetApp at https://www.netapp.com/.

Run the playbook.
The following example uses the -e "override_files_dir=<directory>" option to specify a customized location for the localhost.yml file.

ansible-playbook --ask-vault-pass /usr/share/ansible/stx-ansible/playbooks/install_netapp_backend.yml -e "override_files_dir=</home/sysadmin/trident>"
Upon successful launch, there will be one Trident pod running on each node, plus an extra pod for the REST API running on one of the controller nodes.
Confirm that the pods launched successfully.
In an all-in-one simplex environment you will see pods similar to the following:
(keystone_admin)$ kubectl -n trident get pods
NAME                                  READY   STATUS    RESTARTS   AGE
trident-controller-7ffbfcfd8f-q76nz   5/5     Running   0          0h1m
trident-node-linux-dp84f              2/2     Running   0          0h1m
Check the configured TBCs.
To view the configured TBCs, run the following command:
(keystone_admin)$ kubectl -n trident get tbc
This will list the TBCs in the trident namespace, allowing you to check the status and configuration of storage volume provisioning.
Postrequisites
To configure a persistent volume claim for the NetApp backend, add the
appropriate netapp_backends name you set up (for example, nas-backend or
san-backend) to the persistent volume claim's yaml configuration
file. For more information about this file, see
StarlingX User Tasks: Create ReadWriteOnce Persistent Volume Claims.
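As a sketch, a persistent volume claim that provisions from the NAS backend might look like the following; the storage class name netapp-nas matches the example overrides earlier in this procedure, and the claim name and size are assumptions to adjust for your deployment:

```yaml
# Hypothetical PVC using the netapp-nas storage class from the NAS example.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: netapp-pvc          # assumed claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # assumed size
  storageClassName: netapp-nas
```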
Configure NetApp Using a Private Docker Registry¶
Use the docker_registries parameter to pull from the local registry rather
than public ones.
You must first push the required images to the local registry.
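As a sketch, the docker_registries override in localhost.yml might look like the following; registry.local:9001 is assumed here as the local registry address, so adjust it (and any credentials) to match your deployment:

```yaml
# Hypothetical override: pull Trident images from the local registry
# instead of public ones. The registry address is an assumption.
docker_registries:
  docker.io:
    url: registry.local:9001
```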