This section documents setting up a virtual machine for Swift development. The virtual machine will emulate running a four-node Swift cluster. To begin:
Much of the configuration described in this guide requires escalated administrator (root) privileges; however, we assume that the administrator logs in as an unprivileged user and can use sudo to run privileged commands.
Swift processes also run under a separate user and group, set by configuration option, and referenced as <your-user-name>:<your-group-name>. The default user is swift, which may not exist on your system. These instructions are intended to allow a developer to use their own username for <your-user-name>:<your-group-name>.
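If you are unsure what values to substitute, the following commands print the user and primary group of the account you are logged in as (a quick sanity check; the group name matters for the chown commands later in this guide):
id -un    # prints <your-user-name>
id -gn    # prints <your-group-name> (on OpenSuse this is typically "users")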
On apt based systems:
sudo apt-get update
sudo apt-get install curl gcc memcached rsync sqlite3 xfsprogs \
git-core libffi-dev python-setuptools \
liberasurecode-dev libssl-dev
sudo apt-get install python-coverage python-dev python-nose \
python-xattr python-eventlet \
python-greenlet python-pastedeploy \
python-netifaces python-pip python-dnspython \
python-mock
On yum based systems:
sudo yum update
sudo yum install curl gcc memcached rsync sqlite xfsprogs git-core \
libffi-devel xinetd liberasurecode-devel \
openssl-devel python-setuptools \
python-coverage python-devel python-nose \
pyxattr python-eventlet \
python-greenlet python-paste-deploy \
python-netifaces python-pip python-dns \
python-mock
On OpenSuse:
sudo zypper install curl gcc memcached rsync sqlite3 xfsprogs git-core \
libffi-devel liberasurecode-devel python2-setuptools \
libopenssl-devel
sudo zypper install python2-coverage python-devel python2-nose \
python-xattr python-eventlet python2-greenlet \
python2-netifaces python2-pip python2-dnspython \
python2-mock
Note: This installs necessary system dependencies and most of the python dependencies. Later in the process setuptools/distribute or pip will install and/or upgrade packages.
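As a quick, optional sanity check (not part of the official steps), you can confirm that a few of the key tools landed on your PATH before continuing; note that the package lists above target a Python 2 environment:
git --version
command -v memcached rsync
mkfs.xfs -V
python --version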
Next, choose either Using a partition for storage or Using a loopback device for storage.
If you are going to use a separate partition for Swift data, be sure to add another device when creating the VM, and follow these instructions:
Set up a single partition:
sudo fdisk /dev/sdb
sudo mkfs.xfs /dev/sdb1
Edit /etc/fstab and add:
/dev/sdb1 /mnt/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0
Create the mount point and the individualized links:
sudo mkdir /mnt/sdb1
sudo mount /mnt/sdb1
sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4
sudo chown ${USER}:${USER} /mnt/sdb1/*
sudo mkdir /srv
for x in {1..4}; do sudo ln -s /mnt/sdb1/$x /srv/$x; done
sudo mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \
              /srv/2/node/sdb2 /srv/2/node/sdb6 \
              /srv/3/node/sdb3 /srv/3/node/sdb7 \
              /srv/4/node/sdb4 /srv/4/node/sdb8 \
              /var/run/swift
sudo chown -R ${USER}:${USER} /var/run/swift
# **Make sure to include the trailing slash after /srv/$x/**
for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done
Note: For OpenSuse users, a user's primary group is users, so you have 2 options:
Change ${USER}:${USER} to ${USER}:users in all references of this guide; or
Create a group for your username and add yourself to it:
sudo groupadd ${USER} && sudo gpasswd -a ${USER} ${USER}
Note: We create the mount points and mount the storage disk under /mnt/sdb1. This disk will contain one directory per simulated swift node, each owned by the current swift user.
We then create symlinks to these directories under /srv. If the disk sdb is unmounted, files will not be written under /srv/*, because the symbolic link destination /mnt/sdb1/* will not exist. This prevents disk sync operations from writing to the root partition in the event a drive is unmounted.
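Before moving on, it can be worth a quick optional check that the disk really is mounted and that each /srv/<x> symlink resolves onto it:
df -h /mnt/sdb1              # should show /dev/sdb1 mounted on /mnt/sdb1
ls -l /srv/                  # 1..4 should be symlinks into /mnt/sdb1
ls -ld /srv/*/node/sdb*      # the per-node device directories created above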
Next, skip to Common Post-Device Setup.
If you want to use a loopback device instead of another partition, follow these instructions:
Create the file for the loopback device:
sudo mkdir /srv
sudo truncate -s 1GB /srv/swift-disk
sudo mkfs.xfs /srv/swift-disk
Modify the size specified in the truncate command to make a larger or smaller partition as needed.
Edit /etc/fstab and add:
/srv/swift-disk /mnt/sdb1 xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0
Create the mount point and the individualized links:
sudo mkdir /mnt/sdb1
sudo mount /mnt/sdb1
sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4
sudo chown ${USER}:${USER} /mnt/sdb1/*
for x in {1..4}; do sudo ln -s /mnt/sdb1/$x /srv/$x; done
sudo mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \
              /srv/2/node/sdb2 /srv/2/node/sdb6 \
              /srv/3/node/sdb3 /srv/3/node/sdb7 \
              /srv/4/node/sdb4 /srv/4/node/sdb8 \
              /var/run/swift
sudo chown -R ${USER}:${USER} /var/run/swift
# **Make sure to include the trailing slash after /srv/$x/**
for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done
Note: For OpenSuse users, a user's primary group is users, so you have 2 options:
Change ${USER}:${USER} to ${USER}:users in all references of this guide; or
Create a group for your username and add yourself to it:
sudo groupadd ${USER} && sudo gpasswd -a ${USER} ${USER}
Note: We create the mount points and mount the loopback file under /mnt/sdb1. This file will contain one directory per simulated swift node, each owned by the current swift user.
We then create symlinks to these directories under /srv. If the loopback file is unmounted, files will not be written under /srv/*, because the symbolic link destination /mnt/sdb1/* will not exist. This prevents disk sync operations from writing to the root partition in the event a drive is unmounted.
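As with the partition-based setup, a quick optional check that the loopback file is mounted (and therefore that writes under /srv/* land on the XFS filesystem rather than the root partition):
mount | grep /mnt/sdb1       # should show /srv/swift-disk attached via a loop device
ls -l /srv/                  # the 1..4 symlinks should point into /mnt/sdb1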
Add the following lines to /etc/rc.local (before the exit 0):
mkdir -p /var/cache/swift /var/cache/swift2 /var/cache/swift3 /var/cache/swift4
chown <your-user-name>:<your-group-name> /var/cache/swift*
mkdir -p /var/run/swift
chown <your-user-name>:<your-group-name> /var/run/swift
Note that on some systems you might have to create /etc/rc.local.
On Fedora 19 or later, you need to place these in /etc/rc.d/rc.local.
On OpenSuse you need to place these in /etc/init.d/boot.local.
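On many systemd-based distributions the rc.local file is only run at boot if it is executable, so if you had to create the file, a hedged suggestion is to mark it executable (adjust the path for the Fedora or OpenSuse locations noted above). You can also run the four commands above once by hand so the directories exist without a reboot:
sudo chmod +x /etc/rc.local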
Tests require having an XFS directory available in /tmp or in the TMPDIR environment variable. To set up /tmp with an XFS filesystem, do the following:
cd ~
truncate -s 1GB xfs_file  # create a 1GB file for XFS in your home directory
mkfs.xfs xfs_file
sudo mount -o loop,noatime,nodiratime xfs_file /tmp
sudo chmod -R 1777 /tmp
To persist this, edit and add the following to /etc/fstab:
/home/<your-user-name>/xfs_file /tmp xfs rw,noatime,nodiratime,attr2,inode64,noquota 0 0
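You can confirm that /tmp is now backed by XFS before running any tests:
df -T /tmp    # the Type column should read xfs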
Check out the python-swiftclient repo:
cd $HOME; git clone https://github.com/openstack/python-swiftclient.git
Build a development installation of python-swiftclient:
cd $HOME/python-swiftclient; sudo python setup.py develop; cd -
Ubuntu 12.04 users need to install python-swiftclient's dependencies before the installation of python-swiftclient. This is due to a bug in an older version of setuptools:
cd $HOME/python-swiftclient; sudo pip install -r requirements.txt; sudo python setup.py develop; cd -
Check out the swift repo:
git clone https://github.com/openstack/swift.git
Build a development installation of swift:
cd $HOME/swift; sudo pip install --no-binary cryptography -r requirements.txt; sudo python setup.py develop; cd -
Note: Due to a difference in libssl.so naming between OpenSuse and other Linux distros, the cryptography wheel/binary won't work, so cryptography must be built from source; hence the --no-binary cryptography.
Fedora 19 or later users might have to perform the following if the development installation of swift fails:
sudo pip install -U xattr
Install swift's test dependencies:
cd $HOME/swift; sudo pip install -r test-requirements.txt
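As a quick, optional check that both development installs succeeded, confirm the client and server entry points are now available on your PATH:
swift --version                            # installed by python-swiftclient
command -v swift-init swift-ring-builder   # installed by the swift development install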
Create /etc/rsyncd.conf:
sudo cp $HOME/swift/doc/saio/rsyncd.conf /etc/
sudo sed -i "s/<your-user-name>/${USER}/" /etc/rsyncd.conf
Here is the default
rsyncd.conf
file contents maintained in the repo that is copied and fixed up above:uid = <your-user-name> gid = <your-user-name> log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = 0.0.0.0 [account6012] max connections = 25 path = /srv/1/node/ read only = false lock file = /var/lock/account6012.lock [account6022] max connections = 25 path = /srv/2/node/ read only = false lock file = /var/lock/account6022.lock [account6032] max connections = 25 path = /srv/3/node/ read only = false lock file = /var/lock/account6032.lock [account6042] max connections = 25 path = /srv/4/node/ read only = false lock file = /var/lock/account6042.lock [container6011] max connections = 25 path = /srv/1/node/ read only = false lock file = /var/lock/container6011.lock [container6021] max connections = 25 path = /srv/2/node/ read only = false lock file = /var/lock/container6021.lock [container6031] max connections = 25 path = /srv/3/node/ read only = false lock file = /var/lock/container6031.lock [container6041] max connections = 25 path = /srv/4/node/ read only = false lock file = /var/lock/container6041.lock [object6010] max connections = 25 path = /srv/1/node/ read only = false lock file = /var/lock/object6010.lock [object6020] max connections = 25 path = /srv/2/node/ read only = false lock file = /var/lock/object6020.lock [object6030] max connections = 25 path = /srv/3/node/ read only = false lock file = /var/lock/object6030.lock [object6040] max connections = 25 path = /srv/4/node/ read only = false lock file = /var/lock/object6040.lockOn Ubuntu, edit the following line in
/etc/default/rsync:
RSYNC_ENABLE=true
On Fedora, edit the following line in /etc/xinetd.d/rsync:
disable = no
One might have to create the above files to perform the edits.
On OpenSuse, nothing needs to happen here.
On platforms with SELinux in Enforcing mode, either set it to Permissive:
sudo setenforce Permissive
Or just allow rsync full access:
sudo setsebool -P rsync_full_access 1
Start the rsync daemon.
On Ubuntu 14.04, run:
sudo service rsync restart
On Ubuntu 16.04, run:
sudo systemctl enable rsync
sudo systemctl start rsync
On Fedora, run:
sudo systemctl restart xinetd.service
sudo systemctl enable rsyncd.service
sudo systemctl start rsyncd.service
On OpenSuse, run:
sudo systemctl enable rsyncd.service
sudo systemctl start rsyncd.service
On other xinetd based systems simply run:
sudo service xinetd restart
Verify rsync is accepting connections for all servers:
rsync rsync://pub@localhost/
You should see the following output from the above command:
account6012
account6022
account6032
account6042
container6011
container6021
container6031
container6041
object6010
object6020
object6030
object6040
On non-Ubuntu distros you need to ensure memcached is running:
sudo service memcached start
sudo chkconfig memcached on
or:
sudo systemctl enable memcached.service
sudo systemctl start memcached.service
The tempauth middleware stores tokens in memcached. If memcached is not running, tokens cannot be validated, and accessing Swift becomes impossible.
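If you want to confirm that memcached is actually answering on its default port (11211), a quick probe like the one below works on most systems; the exact flags vary between netcat implementations, so treat this as a sketch:
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head -n 5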
Install the swift rsyslogd configuration:
sudo cp $HOME/swift/doc/saio/rsyslog.d/10-swift.conf /etc/rsyslog.d/
Note: OpenSuse may have the systemd logger installed, so if you want this to work, you need to install rsyslog:
sudo zypper install rsyslog
sudo systemctl start rsyslog.service
sudo systemctl enable rsyslog.service
Be sure to review that conf file to determine if you want all the logs in one file vs. all the logs separated out, and if you want hourly logs for stats processing. For convenience, we provide its default contents below:
# Uncomment the following to have a log containing all logs together
#local1,local2,local3,local4,local5.*   /var/log/swift/all.log

# Uncomment the following to have hourly proxy logs for stats processing
#$template HourlyProxyLog,"/var/log/swift/hourly/%$YEAR%%$MONTH%%$DAY%%$HOUR%"
#local1.*;local1.!notice ?HourlyProxyLog

local1.*;local1.!notice /var/log/swift/proxy.log
local1.notice           /var/log/swift/proxy.error
local1.*                ~

local2.*;local2.!notice /var/log/swift/storage1.log
local2.notice           /var/log/swift/storage1.error
local2.*                ~

local3.*;local3.!notice /var/log/swift/storage2.log
local3.notice           /var/log/swift/storage2.error
local3.*                ~

local4.*;local4.!notice /var/log/swift/storage3.log
local4.notice           /var/log/swift/storage3.error
local4.*                ~

local5.*;local5.!notice /var/log/swift/storage4.log
local5.notice           /var/log/swift/storage4.error
local5.*                ~

local6.*;local6.!notice /var/log/swift/expirer.log
local6.notice           /var/log/swift/expirer.error
local6.*                ~

Edit /etc/rsyslog.conf and make the following change (usually in the "GLOBAL DIRECTIVES" section):
$PrivDropToGroup adm
If using hourly logs (see above) perform:
sudo mkdir -p /var/log/swift/hourly
Otherwise perform:
sudo mkdir -p /var/log/swift
Setup the logging directory and start syslog:
On Ubuntu:
sudo chown -R syslog.adm /var/log/swift
sudo chmod -R g+w /var/log/swift
sudo service rsyslog restart
On Fedora and OpenSuse:
sudo chown -R root:adm /var/log/swift
sudo chmod -R g+w /var/log/swift
sudo systemctl restart rsyslog.service
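To check the rsyslog routing before any Swift daemons are running, you can emit a test message on one of the facilities used above (local1 is the proxy facility in this setup) and confirm it lands in the expected file:
logger -p local1.info "SAIO rsyslog test"
tail -n 1 /var/log/swift/proxy.log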
After performing the following steps, be sure to verify that Swift has access to the resulting configuration files (sample configuration files are provided with all defaults in line-by-line comments).
Optionally remove an existing swift directory:
sudo rm -rf /etc/swift
Populate the /etc/swift directory itself:
cd $HOME/swift/doc; sudo cp -r saio/swift /etc/swift; cd -
sudo chown -R ${USER}:${USER} /etc/swift
Update <your-user-name> references in the Swift config files:
find /etc/swift/ -name \*.conf | xargs sudo sed -i "s/<your-user-name>/${USER}/"
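You can confirm that the substitution worked and no placeholders remain:
grep -r "<your-user-name>" /etc/swift/ || echo "all placeholders replaced"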
The contents of the configuration files provided by executing the above commands are as follows:
/etc/swift/swift.conf
[swift-hash]
# random unique strings that can never change (DO NOT LOSE)
# Use only printable chars (python -c "import string; print(string.printable)")
swift_hash_path_prefix = changeme
swift_hash_path_suffix = changeme

[storage-policy:0]
name = gold
policy_type = replication
default = yes

[storage-policy:1]
name = silver
policy_type = replication

[storage-policy:2]
name = ec42
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 4
ec_num_parity_fragments = 2
/etc/swift/proxy-server.conf
[DEFAULT] bind_ip = 127.0.0.1 bind_port = 8080 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL1 eventlet_debug = true [pipeline:main] # Yes, proxy-logging appears twice. This is so that # middleware-originated requests get logged too. pipeline = catch_errors gatekeeper healthcheck proxy-logging cache listing_formats bulk tempurl ratelimit crossdomain container_sync tempauth staticweb copy container-quotas account-quotas slo dlo versioned_writes symlink proxy-logging proxy-server [filter:catch_errors] use = egg:swift#catch_errors [filter:healthcheck] use = egg:swift#healthcheck [filter:proxy-logging] use = egg:swift#proxy_logging [filter:bulk] use = egg:swift#bulk [filter:ratelimit] use = egg:swift#ratelimit [filter:crossdomain] use = egg:swift#crossdomain [filter:dlo] use = egg:swift#dlo [filter:slo] use = egg:swift#slo [filter:container_sync] use = egg:swift#container_sync current = //saio/saio_endpoint [filter:tempurl] use = egg:swift#tempurl [filter:tempauth] use = egg:swift#tempauth user_admin_admin = admin .admin .reseller_admin user_test_tester = testing .admin user_test2_tester2 = testing2 .admin user_test_tester3 = testing3 [filter:staticweb] use = egg:swift#staticweb [filter:account-quotas] use = egg:swift#account_quotas [filter:container-quotas] use = egg:swift#container_quotas [filter:cache] use = egg:swift#memcache [filter:gatekeeper] use = egg:swift#gatekeeper [filter:versioned_writes] use = egg:swift#versioned_writes allow_versioned_writes = true [filter:copy] use = egg:swift#copy [filter:listing_formats] use = egg:swift#listing_formats [filter:symlink] use = egg:swift#symlink # To enable, add the s3api middleware to the pipeline before tempauth [filter:s3api] use = egg:swift#s3api # Example to create root secret: `openssl rand -base64 32` [filter:keymaster] use = egg:swift#keymaster encryption_root_secret = changeme/changeme/changeme/changeme/change/= # To enable use of encryption add both middlewares to pipeline, example: # <other middleware> keymaster encryption proxy-logging proxy-server [filter:encryption] use = egg:swift#encryption [app:proxy-server] use = egg:swift#proxy allow_account_management = true account_autocreate = true
/etc/swift/object-expirer.conf
[DEFAULT] # swift_dir = /etc/swift user = <your-user-name> # You can specify default log routing here if you want: log_name = object-expirer log_facility = LOG_LOCAL6 log_level = INFO #log_address = /dev/log # # comma separated list of functions to call to setup custom log handlers. # functions get passed: conf, name, log_to_console, log_route, fmt, logger, # adapted_logger # log_custom_handlers = # # If set, log_udp_host will override log_address # log_udp_host = # log_udp_port = 514 # # You can enable StatsD logging here: # log_statsd_host = # log_statsd_port = 8125 # log_statsd_default_sample_rate = 1.0 # log_statsd_sample_rate_factor = 1.0 # log_statsd_metric_prefix = [object-expirer] interval = 300 # auto_create_account_prefix = . # report_interval = 300 # concurrency is the level of concurrency o use to do the work, this value # must be set to at least 1 # concurrency = 1 # processes is how many parts to divide the work into, one part per process # that will be doing the work # processes set 0 means that a single process will be doing all the work # processes can also be specified on the command line and will override the # config value # processes = 0 # process is which of the parts a particular process will work on # process can also be specified on the command line and will override the config # value # process is "zero based", if you want to use 3 processes, you should run # processes with process set to 0, 1, and 2 # process = 0 [pipeline:main] pipeline = catch_errors cache proxy-server [app:proxy-server] use = egg:swift#proxy # See proxy-server.conf-sample for options [filter:cache] use = egg:swift#memcache # See proxy-server.conf-sample for options [filter:catch_errors] use = egg:swift#catch_errors # See proxy-server.conf-sample for options
/etc/swift/container-reconciler.conf
[DEFAULT] # swift_dir = /etc/swift user = <your-user-name> # You can specify default log routing here if you want: # log_name = swift # log_facility = LOG_LOCAL0 # log_level = INFO # log_address = /dev/log # # comma separated list of functions to call to setup custom log handlers. # functions get passed: conf, name, log_to_console, log_route, fmt, logger, # adapted_logger # log_custom_handlers = # # If set, log_udp_host will override log_address # log_udp_host = # log_udp_port = 514 # # You can enable StatsD logging here: # log_statsd_host = # log_statsd_port = 8125 # log_statsd_default_sample_rate = 1.0 # log_statsd_sample_rate_factor = 1.0 # log_statsd_metric_prefix = [container-reconciler] # reclaim_age = 604800 # interval = 300 # request_tries = 3 [pipeline:main] pipeline = catch_errors proxy-logging cache proxy-server [app:proxy-server] use = egg:swift#proxy # See proxy-server.conf-sample for options [filter:cache] use = egg:swift#memcache # See proxy-server.conf-sample for options [filter:proxy-logging] use = egg:swift#proxy_logging [filter:catch_errors] use = egg:swift#catch_errors # See proxy-server.conf-sample for options
/etc/swift/container-sync-realms.conf
[saio]
key = changeme
key2 = changeme
cluster_saio_endpoint = http://127.0.0.1:8080/v1/
/etc/swift/account-server/1.conf
[DEFAULT] devices = /srv/1/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.1 bind_port = 6012 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL2 recon_cache_path = /var/cache/swift eventlet_debug = true [pipeline:main] pipeline = healthcheck recon account-server [app:account-server] use = egg:swift#account [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [account-replicator] rsync_module = {replication_ip}::account{replication_port} [account-auditor] [account-reaper]
/etc/swift/container-server/1.conf
[DEFAULT] devices = /srv/1/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.1 bind_port = 6011 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL2 recon_cache_path = /var/cache/swift eventlet_debug = true [pipeline:main] pipeline = healthcheck recon container-server [app:container-server] use = egg:swift#container [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [container-replicator] rsync_module = {replication_ip}::container{replication_port} [container-updater] [container-auditor] [container-sync] [container-sharder] auto_shard = true rsync_module = {replication_ip}::container{replication_port} # This is intentionally much smaller than the default of 1,000,000 so tests # can run in a reasonable amount of time shard_container_threshold = 100 # The probe tests make explicit assumptions about the batch sizes shard_scanner_batch_size = 10 cleave_batch_size = 2
/etc/swift/object-server/1.conf
[DEFAULT] devices = /srv/1/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.1 bind_port = 6010 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL2 recon_cache_path = /var/cache/swift eventlet_debug = true [pipeline:main] pipeline = healthcheck recon object-server [app:object-server] use = egg:swift#object [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [object-replicator] rsync_module = {replication_ip}::object{replication_port} [object-reconstructor] [object-updater] [object-auditor]
/etc/swift/account-server/2.conf
[DEFAULT] devices = /srv/2/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.2 bind_port = 6022 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL3 recon_cache_path = /var/cache/swift2 eventlet_debug = true [pipeline:main] pipeline = healthcheck recon account-server [app:account-server] use = egg:swift#account [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [account-replicator] rsync_module = {replication_ip}::account{replication_port} [account-auditor] [account-reaper]
/etc/swift/container-server/2.conf
[DEFAULT] devices = /srv/2/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.2 bind_port = 6021 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL3 recon_cache_path = /var/cache/swift2 eventlet_debug = true [pipeline:main] pipeline = healthcheck recon container-server [app:container-server] use = egg:swift#container [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [container-replicator] rsync_module = {replication_ip}::container{replication_port} [container-updater] [container-auditor] [container-sync] [container-sharder] auto_shard = true rsync_module = {replication_ip}::container{replication_port} # This is intentionally much smaller than the default of 1,000,000 so tests # can run in a reasonable amount of time shard_container_threshold = 100 # The probe tests make explicit assumptions about the batch sizes shard_scanner_batch_size = 10 cleave_batch_size = 2
/etc/swift/object-server/2.conf
[DEFAULT] devices = /srv/2/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.2 bind_port = 6020 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL3 recon_cache_path = /var/cache/swift2 eventlet_debug = true [pipeline:main] pipeline = healthcheck recon object-server [app:object-server] use = egg:swift#object [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [object-replicator] rsync_module = {replication_ip}::object{replication_port} [object-reconstructor] [object-updater] [object-auditor]
/etc/swift/account-server/3.conf
[DEFAULT] devices = /srv/3/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.3 bind_port = 6032 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL4 recon_cache_path = /var/cache/swift3 eventlet_debug = true [pipeline:main] pipeline = healthcheck recon account-server [app:account-server] use = egg:swift#account [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [account-replicator] rsync_module = {replication_ip}::account{replication_port} [account-auditor] [account-reaper]
/etc/swift/container-server/3.conf
[DEFAULT] devices = /srv/3/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.3 bind_port = 6031 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL4 recon_cache_path = /var/cache/swift3 eventlet_debug = true [pipeline:main] pipeline = healthcheck recon container-server [app:container-server] use = egg:swift#container [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [container-replicator] rsync_module = {replication_ip}::container{replication_port} [container-updater] [container-auditor] [container-sync] [container-sharder] auto_shard = true rsync_module = {replication_ip}::container{replication_port} # This is intentionally much smaller than the default of 1,000,000 so tests # can run in a reasonable amount of time shard_container_threshold = 100 # The probe tests make explicit assumptions about the batch sizes shard_scanner_batch_size = 10 cleave_batch_size = 2
/etc/swift/object-server/3.conf
[DEFAULT] devices = /srv/3/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.3 bind_port = 6030 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL4 recon_cache_path = /var/cache/swift3 eventlet_debug = true [pipeline:main] pipeline = healthcheck recon object-server [app:object-server] use = egg:swift#object [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [object-replicator] rsync_module = {replication_ip}::object{replication_port} [object-reconstructor] [object-updater] [object-auditor]
/etc/swift/account-server/4.conf
[DEFAULT] devices = /srv/4/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.4 bind_port = 6042 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL5 recon_cache_path = /var/cache/swift4 eventlet_debug = true [pipeline:main] pipeline = healthcheck recon account-server [app:account-server] use = egg:swift#account [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [account-replicator] rsync_module = {replication_ip}::account{replication_port} [account-auditor] [account-reaper]
/etc/swift/container-server/4.conf
[DEFAULT] devices = /srv/4/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.4 bind_port = 6041 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL5 recon_cache_path = /var/cache/swift4 eventlet_debug = true [pipeline:main] pipeline = healthcheck recon container-server [app:container-server] use = egg:swift#container [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [container-replicator] rsync_module = {replication_ip}::container{replication_port} [container-updater] [container-auditor] [container-sync] [container-sharder] auto_shard = true rsync_module = {replication_ip}::container{replication_port} # This is intentionally much smaller than the default of 1,000,000 so tests # can run in a reasonable amount of time shard_container_threshold = 100 # The probe tests make explicit assumptions about the batch sizes shard_scanner_batch_size = 10 cleave_batch_size = 2
/etc/swift/object-server/4.conf
[DEFAULT] devices = /srv/4/node mount_check = false disable_fallocate = true bind_ip = 127.0.0.4 bind_port = 6040 workers = 1 user = <your-user-name> log_facility = LOG_LOCAL5 recon_cache_path = /var/cache/swift4 eventlet_debug = true [pipeline:main] pipeline = healthcheck recon object-server [app:object-server] use = egg:swift#object [filter:recon] use = egg:swift#recon [filter:healthcheck] use = egg:swift#healthcheck [object-replicator] rsync_module = {replication_ip}::object{replication_port} [object-reconstructor] [object-updater] [object-auditor]
Copy the SAIO scripts for resetting the environment:
mkdir -p $HOME/bin
cd $HOME/swift/doc; cp saio/bin/* $HOME/bin; cd -
chmod +x $HOME/bin/*
Edit the $HOME/bin/resetswift script.
The template resetswift script looks like the following:
#!/bin/bash
set -e
swift-init all kill
# Remove the following line if you did not set up rsyslog for individual logging:
sudo find /var/log/swift -type f -exec rm -f {} \;
if cut -d' ' -f2 /proc/mounts | grep -q /mnt/sdb1 ; then
    sudo umount /mnt/sdb1
fi
# If you are using a loopback device set SAIO_BLOCK_DEVICE to "/srv/swift-disk"
sudo mkfs.xfs -f ${SAIO_BLOCK_DEVICE:-/dev/sdb1}
sudo mount /mnt/sdb1
sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4
sudo chown ${USER}:${USER} /mnt/sdb1/*
mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \
         /srv/2/node/sdb2 /srv/2/node/sdb6 \
         /srv/3/node/sdb3 /srv/3/node/sdb7 \
         /srv/4/node/sdb4 /srv/4/node/sdb8
sudo rm -f /var/log/debug /var/log/messages /var/log/rsyncd.log /var/log/syslog
find /var/cache/swift* -type f -name *.recon -exec rm -f {} \;
if [ "`type -t systemctl`" == "file" ]; then
    sudo systemctl restart rsyslog
    sudo systemctl restart memcached
else
    sudo service rsyslog restart
    sudo service memcached restart
fi
If you are using a loopback device, add an environment var to substitute /dev/sdb1 with /srv/swift-disk:
echo "export SAIO_BLOCK_DEVICE=/srv/swift-disk" >> $HOME/.bashrc
If you did not set up rsyslog for individual logging, remove the find /var/log/swift... line:
sed -i "/find \/var\/log\/swift/d" $HOME/bin/resetswift
Install the sample configuration file for running tests:
cp $HOME/swift/test/sample.conf /etc/swift/test.conf
The template
test.conf
looks like the following:[func_test] # Sample config for Swift with tempauth auth_host = 127.0.0.1 auth_port = 8080 auth_ssl = no auth_prefix = /auth/ # Sample config for Swift with Keystone v2 API. # For keystone v2 change auth_version to 2 and auth_prefix to /v2.0/. # And "allow_account_management" should not be set "true". #auth_version = 3 #auth_host = localhost #auth_port = 5000 #auth_ssl = no #auth_prefix = /v3/ # Primary functional test account (needs admin access to the account) account = test username = tester password = testing s3_access_key = test:tester s3_secret_key = testing # User on a second account (needs admin access to the account) account2 = test2 username2 = tester2 password2 = testing2 # User on same account as first, but without admin access username3 = tester3 password3 = testing3 # s3api requires the same account with the primary one and different users s3_access_key2 = test:tester3 s3_secret_key2 = testing3 # Fourth user is required for keystone v3 specific tests. # Account must be in a non-default domain. #account4 = test4 #username4 = tester4 #password4 = testing4 #domain4 = test-domain # Fifth user is required for service token-specific tests. # The account must be different from the primary test account. # The user must not have a group (tempauth) or role (keystoneauth) on # the primary test account. The user must have a group/role that is unique # and not given to the primary tester and is specified in the options # <prefix>_require_group (tempauth) or <prefix>_service_roles (keystoneauth). #account5 = test5 #username5 = tester5 #password5 = testing5 # The service_prefix option is used for service token-specific tests. # If service_prefix or username5 above is not supplied, the tests are skipped. # To set the value and enable the service token tests, look at the # reseller_prefix option in /etc/swift/proxy-server.conf. There must be at # least two prefixes. If not, add a prefix as follows (where we add SERVICE): # reseller_prefix = AUTH, SERVICE # The service_prefix must match the <prefix> used in <prefix>_require_group # (tempauth) or <prefix>_service_roles (keystoneauth); for example: # SERVICE_require_group = service # SERVICE_service_roles = service # Note: Do not enable service token tests if the first prefix in # reseller_prefix is the empty prefix AND the primary functional test # account contains an underscore. #service_prefix = SERVICE # Sixth user is required for access control tests. # Account must have a role for reseller_admin_role(keystoneauth). #account6 = test #username6 = tester6 #password6 = testing6 collate = C # Only necessary if a pre-existing server uses self-signed certificate insecure = no # Tests that are dependent on domain_remap middleware being installed also # require one of the domain_remap storage_domain values to be specified here, # otherwise those tests will be skipped. storage_domain = [unit_test] fake_syslog = False [probe_test] # check_server_timeout = 30 # validate_rsync = false [swift-constraints] # The functional test runner will try to use the constraint values provided in # the swift-constraints section of test.conf. # # If a constraint value does not exist in that section, or because the # swift-constraints section does not exist, the constraints values found in # the /info API call (if successful) will be used. 
# # If a constraint value cannot be found in the /info results, either because # the /info API call failed, or a value is not present, the constraint value # used will fall back to those loaded by the constraints module at time of # import (which will attempt to load /etc/swift/swift.conf, see the # swift.common.constraints module for more information). # # Note that the cluster must have "sane" values for the test suite to pass # (for some definition of sane). # #max_file_size = 5368709122 #max_meta_name_length = 128 #max_meta_value_length = 256 #max_meta_count = 90 #max_meta_overall_size = 4096 #max_header_size = 8192 #extra_header_count = 0 #max_object_name_length = 1024 #container_listing_limit = 10000 #account_listing_limit = 10000 #max_account_name_length = 256 #max_container_name_length = 256 # Newer swift versions default to strict cors mode, but older ones were the # opposite. #strict_cors_mode = trueAdd an environment variable for running tests below:
echo "export SWIFT_TEST_CONFIG_FILE=/etc/swift/test.conf" >> $HOME/.bashrcBe sure that your
PATH
includes thebin
directory:echo "export PATH=${PATH}:$HOME/bin" >> $HOME/.bashrcSource the above environment variables into your current environment:
. $HOME/.bashrcConstruct the initial rings using the provided script:
remakerings
The remakerings
script looks like the following:#!/bin/bash set -e cd /etc/swift rm -f *.builder *.ring.gz backups/*.builder backups/*.ring.gz swift-ring-builder object.builder create 10 3 1 swift-ring-builder object.builder add r1z1-127.0.0.1:6010/sdb1 1 swift-ring-builder object.builder add r1z2-127.0.0.2:6020/sdb2 1 swift-ring-builder object.builder add r1z3-127.0.0.3:6030/sdb3 1 swift-ring-builder object.builder add r1z4-127.0.0.4:6040/sdb4 1 swift-ring-builder object.builder rebalance swift-ring-builder object-1.builder create 10 2 1 swift-ring-builder object-1.builder add r1z1-127.0.0.1:6010/sdb1 1 swift-ring-builder object-1.builder add r1z2-127.0.0.2:6020/sdb2 1 swift-ring-builder object-1.builder add r1z3-127.0.0.3:6030/sdb3 1 swift-ring-builder object-1.builder add r1z4-127.0.0.4:6040/sdb4 1 swift-ring-builder object-1.builder rebalance swift-ring-builder object-2.builder create 10 6 1 swift-ring-builder object-2.builder add r1z1-127.0.0.1:6010/sdb1 1 swift-ring-builder object-2.builder add r1z1-127.0.0.1:6010/sdb5 1 swift-ring-builder object-2.builder add r1z2-127.0.0.2:6020/sdb2 1 swift-ring-builder object-2.builder add r1z2-127.0.0.2:6020/sdb6 1 swift-ring-builder object-2.builder add r1z3-127.0.0.3:6030/sdb3 1 swift-ring-builder object-2.builder add r1z3-127.0.0.3:6030/sdb7 1 swift-ring-builder object-2.builder add r1z4-127.0.0.4:6040/sdb4 1 swift-ring-builder object-2.builder add r1z4-127.0.0.4:6040/sdb8 1 swift-ring-builder object-2.builder rebalance swift-ring-builder container.builder create 10 3 1 swift-ring-builder container.builder add r1z1-127.0.0.1:6011/sdb1 1 swift-ring-builder container.builder add r1z2-127.0.0.2:6021/sdb2 1 swift-ring-builder container.builder add r1z3-127.0.0.3:6031/sdb3 1 swift-ring-builder container.builder add r1z4-127.0.0.4:6041/sdb4 1 swift-ring-builder container.builder rebalance swift-ring-builder account.builder create 10 3 1 swift-ring-builder account.builder add r1z1-127.0.0.1:6012/sdb1 1 swift-ring-builder account.builder add r1z2-127.0.0.2:6022/sdb2 1 swift-ring-builder account.builder add r1z3-127.0.0.3:6032/sdb3 1 swift-ring-builder account.builder add r1z4-127.0.0.4:6042/sdb4 1 swift-ring-builder account.builder rebalanceYou can expect the output from this command to produce the following. Note that 3 object rings are created in order to test storage policies and EC in the SAIO environment. The EC ring is the only one with all 8 devices. There are also two replication rings, one for 3x replication and another for 2x replication, but those rings only use 4 devices:
Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0 Device d1r1z2-127.0.0.2:6020R127.0.0.2:6020/sdb2_"" with 1.0 weight got id 1 Device d2r1z3-127.0.0.3:6030R127.0.0.3:6030/sdb3_"" with 1.0 weight got id 2 Device d3r1z4-127.0.0.4:6040R127.0.0.4:6040/sdb4_"" with 1.0 weight got id 3 Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00 Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0 Device d1r1z2-127.0.0.2:6020R127.0.0.2:6020/sdb2_"" with 1.0 weight got id 1 Device d2r1z3-127.0.0.3:6030R127.0.0.3:6030/sdb3_"" with 1.0 weight got id 2 Device d3r1z4-127.0.0.4:6040R127.0.0.4:6040/sdb4_"" with 1.0 weight got id 3 Reassigned 2048 (200.00%) partitions. Balance is now 0.00. Dispersion is now 0.00 Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0 Device d1r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb5_"" with 1.0 weight got id 1 Device d2r1z2-127.0.0.2:6020R127.0.0.2:6020/sdb2_"" with 1.0 weight got id 2 Device d3r1z2-127.0.0.2:6020R127.0.0.2:6020/sdb6_"" with 1.0 weight got id 3 Device d4r1z3-127.0.0.3:6030R127.0.0.3:6030/sdb3_"" with 1.0 weight got id 4 Device d5r1z3-127.0.0.3:6030R127.0.0.3:6030/sdb7_"" with 1.0 weight got id 5 Device d6r1z4-127.0.0.4:6040R127.0.0.4:6040/sdb4_"" with 1.0 weight got id 6 Device d7r1z4-127.0.0.4:6040R127.0.0.4:6040/sdb8_"" with 1.0 weight got id 7 Reassigned 6144 (600.00%) partitions. Balance is now 0.00. Dispersion is now 0.00 Device d0r1z1-127.0.0.1:6011R127.0.0.1:6011/sdb1_"" with 1.0 weight got id 0 Device d1r1z2-127.0.0.2:6021R127.0.0.2:6021/sdb2_"" with 1.0 weight got id 1 Device d2r1z3-127.0.0.3:6031R127.0.0.3:6031/sdb3_"" with 1.0 weight got id 2 Device d3r1z4-127.0.0.4:6041R127.0.0.4:6041/sdb4_"" with 1.0 weight got id 3 Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00 Device d0r1z1-127.0.0.1:6012R127.0.0.1:6012/sdb1_"" with 1.0 weight got id 0 Device d1r1z2-127.0.0.2:6022R127.0.0.2:6022/sdb2_"" with 1.0 weight got id 1 Device d2r1z3-127.0.0.3:6032R127.0.0.3:6032/sdb3_"" with 1.0 weight got id 2 Device d3r1z4-127.0.0.4:6042R127.0.0.4:6042/sdb4_"" with 1.0 weight got id 3 Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00Read more about Storage Policies and your SAIO Adding Storage Policies to an Existing SAIO
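Once remakerings completes, you can inspect any of the builder files to confirm the layout described above; for example, the EC ring (object-2) should list all eight devices:
swift-ring-builder /etc/swift/object-2.builder   # prints partitions, replicas and devices
ls /etc/swift/*.ring.gz                          # the ring files the servers will load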
Verify the unit tests run:
$HOME/swift/.unittests
Note that the unit tests do not require any swift daemons running.
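The .unittests wrapper runs the full suite. While iterating on a change it is often convenient to run a single test module directly with nose (installed earlier); a sketch, assuming this module path exists in your checkout:
cd $HOME/swift
nosetests -v test/unit/common/test_utils.py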
Start the “main” Swift daemon processes (proxy, account, container, and object):
startmain
(The "Unable to increase file descriptor limit. Running as non-root?" warnings are expected and ok.)
The startmain script looks like the following:
#!/bin/bash
set -e
swift-init main start
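Before requesting a token you can confirm the daemons came up; the healthcheck middleware is in the proxy pipeline shown earlier, and swift-init can report process status (a quick optional check):
swift-init main status
curl http://127.0.0.1:8080/healthcheck   # should return OK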
Get an X-Storage-Url and X-Auth-Token:
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
Check that you can GET account:
curl -v -H 'X-Auth-Token: <token-from-x-auth-token-above>' <url-from-x-storage-url-above>
Check that the swift command provided by the python-swiftclient package works:
swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
Verify the functional tests run:
$HOME/swift/.functests
(Note: functional tests will first delete everything in the configured accounts.)
Verify the probe tests run:
$HOME/swift/.probetests
(Note: probe tests will reset your environment as they call resetswift for each test.)
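At this point you can also exercise the storage policies defined in /etc/swift/swift.conf; for example, a container is created on a non-default policy by passing the X-Storage-Policy header on the PUT (the container name here is just an illustration):
curl -v -X PUT -H 'X-Auth-Token: <token-from-x-auth-token-above>' \
     -H 'X-Storage-Policy: silver' <url-from-x-storage-url-above>/policy-test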
If all doesn’t go as planned, and tests fail, or you can’t auth, or something doesn’t work, here are some good starting places to look for issues:
Everything is logged to /var/log/syslog (but possibly in /var/log/messages on e.g. Fedora), so that is a good first place to look for errors (most likely Python tracebacks).
Running a server by hand, for example swift-object-server /etc/swift/object-server/1.conf, will start the object server; if there are problems not showing up in syslog, then you will likely see the traceback on startup.
If you need to, you can turn off syslog for unit tests. This can be useful for environments where /dev/log is unavailable, or which cannot rate limit (unit tests generate a lot of logs very quickly). Open the file SWIFT_TEST_CONFIG_FILE points to, and change the value of fake_syslog to True.
If you get a 401 Unauthorized when following Step 12, where you check that you can GET account, use sudo service memcached status and check if memcached is running. If memcached is not running, start it using sudo service memcached start. Once memcached is running, rerun the GET account request.
Listed here are some "gotcha's" that you may run into when using or testing your SAIO: