Ansible Roles¶
Documentation for roles included in system-config
There are two types of roles. Top-level roles, kept in the roles/ directory, are available to be used as roles in Zuul jobs. This places some constraints on the roles, such as not being able to use plugins. Add

roles:
  - zuul: openstack-infra/system-config

to your job definition to source these roles.
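For illustration, a minimal sketch of a job definition that sources these roles (the job name and playbook path here are hypothetical):

- job:
    name: system-config-example
    run: playbooks/example.yaml
    roles:
      - zuul: openstack-infra/system-config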
Roles in playbooks/roles are designed to be run on the Infrastructure control-plane (i.e. from bridge.openstack.org). These roles are not available to be shared with Zuul jobs.
Role documentation¶
afs-release¶
Install the script and related bits and pieces for periodic release of various AFS volumes. This role is really only intended to be run on the mirror-update host, as it uses the ssh-key installed by that host to run vos release under -localauth on the remote AFS servers.
Role Variables
afsmon¶
Install the afsmon tool and related bits and pieces for periodic monitoring of AFS volumes. This role is really only intended to be run on the mirror-update host as we only need one instance of it running.
Role Variables
backup¶
Configure a host to be backed up
This role sets up a host to use bup for backup to any hosts in the backup-server group. A separate ssh key will be generated for root to connect to the backup server(s) and the host key for the backup servers will be accepted to the host.
The bup tool is installed and a cron job is set up to run the backup periodically.
Note the backup-server role must run after this to create the user correctly on the backup server. This role sets a tuple bup_user with the username and public key; the backup-server role uses this variable for each host in the backup group to initialise users.
Role Variables
bup_username¶
The username to connect to the backup server. If this is left undefined, it will be automatically set to bup-$(hostname).
backup-server¶
Setup backup server
This role configures backup server(s) in the backup-server group to accept backups from remote hosts.
Note that the backup role must have run on each host in the backup group before this role. That role will create a bup_user tuple in the hostvars for each host consisting of the required username and public key.
Each required user gets a separate home directory in /opt/backups. Their authorized_keys file is configured with the public key to allow the remote host to log in and only run bup.
Role Variables
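A sketch of the required ordering of the two roles in a hypothetical playbook (group names as described above):

- hosts: backup
  roles:
    - backup

- hosts: backup-server
  roles:
    - backup-server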
base-repos¶
Set basic repository sources
Role Variables
None
base-server¶
Basic common server configuration
Role Variables
bastion_key_exclusive¶
Default: True
Whether the bastion ssh key is the only key allowed to ssh in as root.
bazelisk-build¶
Run bazelisk build
Runs bazelisk build with the specified targets.
Role Variables
bazelisk_targets¶
Default: ""
The bazelisk targets to build.
bazelisk_test_targets¶
Default: ""
The bazelisk targets to test. bazelisk test will only be run if this value is not the empty string.
bazelisk_executable¶
Default: bazelisk
The path to the bazelisk executable.
zuul_work_dir¶
Default: {{ ansible_user_dir }}/{{ zuul.project.src_dir }}
The working directory in which to run bazelisk.
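A usage sketch with hypothetical targets; bazelisk test runs only because bazelisk_test_targets is non-empty:

- hosts: all
  roles:
    - role: bazelisk-build
      vars:
        bazelisk_targets: "//java/com/example:release"
        bazelisk_test_targets: "//javatests/com/example/..."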
cloud-launcher-cron¶
Setup periodic runs of run_cloud_launcher.sh, which runs the cloud setup playbook against our clouds.
Note that this runs in an independent cron because we don't need to run it as frequently as our normal ansible runs, and this ansible process needs access to the all-clouds.yaml file, which the normal ansible runs are not run with.
Role Variables
cloud_launcher_cron_interval¶
cloud_launcher_cron_interval.minute¶
Default: 0
cloud_launcher_cron_interval.hour¶
Default: */1
cloud_launcher_cron_interval.day¶
Default: *
cloud_launcher_cron_interval.month¶
Default: *
cloud_launcher_cron_interval.weekday¶
Default: *
cloud_launcher_disable_job¶
Default: false
Prevent installation of the cron job. This is only useful for CI jobs testing bridge.o.o, so that the test host does not randomly run the script during CI tests that fall during the interval.
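As a sketch, overriding the interval to run every two hours (values are standard cron fields):

cloud_launcher_cron_interval:
  minute: '0'
  hour: '*/2'
  day: '*'
  month: '*'
  weekday: '*'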
configure-kubectl¶
Configure kube config files
Configure kubernetes files needed by kubectl.
Role Variables
kube_config_dir¶
Default: /root/.kube
kube_config_owner¶
Default: root
kube_config_group¶
Default: root
kube_config_file¶
Default: {{ kube_config_dir }}/config
kube_config_template¶
configure-openstacksdk¶
Configure openstacksdk files
Configure openstacksdk files needed by nodepool and ansible.
Role Variables
openstacksdk_config_dir¶
Default: /etc/openstack
openstacksdk_config_owner¶
Default: root
openstacksdk_config_group¶
Default: root
openstacksdk_config_file¶
Default: {{ openstacksdk_config_dir }}/clouds.yaml
openstacksdk_config_template¶
disable-puppet-agent¶
Disable the puppet-agent service on a host
Role Variables
None
edit-secrets-script¶
This role installs a script called edit-secrets to /usr/local/bin that allows you to safely edit the secrets file without needing to manage gpg-agent yourself.
etherpad¶
Run an Etherpad server.
exim¶
Installs and configures the exim mail server
Role Variables
exim_aliases¶
Default: {}
A dictionary with keys being the email alias and the value being the address or comma separated list of addresses. See the sketch after this list.
exim_routers¶
Default: []
A list of additional exim routers to define.
exim_transports¶
Default: []
A list of additional exim transports to define.
exim_local_domains¶
Default: "@"
Colon separated list of local domains.
exim_queue_interval¶
Default: 30m
How often to run the queue.
exim_queue_run_max¶
Default: 5
Number of simultaneous queue runners.
exim_smtp_accept_max¶
Default: null
The maximum number of simultaneous incoming SMTP calls that Exim will accept. If the value is set to zero, no limit is applied. However, it is required to be non-zero if exim_smtp_accept_max_per_host is set.
exim_smtp_accept_max_per_host¶
Default: null
Restrict the number of simultaneous IP connections from a single host (strictly, from a single IP address) to the Exim daemon. The option is expanded, to enable different limits to be applied to different hosts by reference to $sender_host_address. Once the limit is reached, additional connection attempts from the same host are rejected with error code 421. The option's default value imposes no limit. If this option is set greater than zero, it is required that exim_smtp_accept_max be non-zero.
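The alias sketch referenced above, with hypothetical addresses:

exim_aliases:
  root: admin@example.com
  postmaster: admin@example.com, backup@example.com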
gerrit¶
Run Gerrit.
gitea¶
Install, configure, and run Gitea.
gitea-git-repos¶
Create git repos on a gitea server
haproxy¶
Install, configure, and run a haproxy server.
install-ansible¶
Install and configure Ansible on a host via pip
Role Variables
install_ansible_name¶
Default: ansible
The name of the ansible package to install. To install from alternative sources, this can be a URL for a remote package; e.g. to install from the upstream devel branch: git+https://github.com/ansible/ansible.git@devel (see the sketch after this list).
install_ansible_version¶
Default: latest
The version of the library from install_ansible_name. Set this to empty (YAML null) if specifying versions via URL in install_ansible_name. The special value "latest" will ensure state: latest is set for the package and thus the latest version is always installed.
install_ansible_openstacksdk_name¶
Default: openstacksdk
The name of the openstacksdk package to install. To install from alternative sources, this can be a URL for a remote package; e.g. to install from a gerrit change: git+https://git.openstack.org/openstack/openstacksdk@refs/changes/12/3456/1#egg=openstacksdk
install_ansible_openstacksdk_version¶
Default: latest
The version of the library from install_ansible_openstacksdk_name. Set this to empty (YAML null) if specifying versions via install_ansible_openstacksdk_name. The special value "latest" will ensure state: latest is set for the package and thus the latest version is always installed.
install_ansible_ara_enable¶
Default: false
Whether or not to install the ARA Records Ansible callback plugin
install_ansible_ara_name¶
Default: ara
The name of the ARA package to install. To install from alternative sources, this can be a URL for a remote package.
install_ansible_ara_version¶
Default: latest
Version of ARA to install. Set this to empty (YAML null) if specifying versions via URL in install_ansible_ara_name. The special value "latest" will ensure state: latest is set for the package and hence the latest version is always installed.
install_ansible_ara_config¶
Default: {"database": "sqlite:////var/cache/ansible/ara.sqlite"}
A dictionary of key-value pairs to be added to the ARA configuration file.
database: Connection string for the database (e.g. mysql+pymysql://ara:password@localhost/ara)
For a list of available configuration options, see the ARA documentation.
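The sketch referenced above, combining the two settings to install Ansible from the upstream devel branch:

install_ansible_name: git+https://github.com/ansible/ansible.git@devel
install_ansible_version: null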
install-docker¶
An ansible role to install docker in the OpenStack infra production environment
Role Variables
use_upstream_docker¶
Default: True
By default this role adds repositories to install docker from upstream docker. Set this to False to use the docker that comes with the distro.
docker_update_channel¶
Default: stable
Which update channel to use for upstream docker. The two choices are stable, which is the default and updates quarterly, and edge, which updates monthly.
install-kubectl¶
Install kubectl
Role Variables
None
install-podman¶
An ansible role to install podman in the OpenDev production environment
install-zookeeper¶
An ansible role to install Zookeeper
Role Variables
iptables¶
Install and configure iptables
Role Variables
iptables_allowed_hosts¶
Default: []
A list of dictionaries; each item in the list is a rule to add for a host/port combination. The format of each dictionary is:
iptables_allowed_hosts.hostname¶
The hostname to allow. It will automatically be resolved, and all IP addresses will be added to the firewall.
iptables_allowed_hosts.protocol¶
One of "tcp" or "udp".
iptables_allowed_hosts.port¶
The port number.
iptables_public_tcp_ports¶
Default: []
A list of public TCP ports to open.
iptables_public_udp_ports¶
Default: []
A list of public UDP ports to open.
iptables_rules_v4¶
Default: []
A list of iptables v4 rules. Each item is a string containing the iptables command line options for the rule.
iptables_rules_v6¶
Default: []
A list of iptables v6 rules. Each item is a string containing the iptables command line options for the rule.
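A sketch combining these variables (the hostname, ports and rule string are hypothetical):

iptables_allowed_hosts:
  - hostname: graphite.example.com
    protocol: udp
    port: 8125
iptables_public_tcp_ports: [22, 80, 443]
iptables_rules_v4:
  - '-m state --state RELATED,ESTABLISHED -j ACCEPT'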
jitsi-meet¶
Install, configure, and run jitsi-meet.
kerberos-client¶
An ansible role to configure a kerberos client
Note
k5start is installed on Debian/Ubuntu distributions, but is not part of Red Hat distributions.
Role Variables
kerberos_realm¶
The realm for Kerberos authentication. You must set the realm, e.g. MY.COMPANY.COM. This will be the default realm.
kerberos_admin_server¶
Default: {{ ansible_fqdn }}
The host where the administration server is running. Typically this is the master Kerberos server.
kerberos_kdcs¶
Default: [ {{ ansible_fqdn }} ]
A list of key distribution center (KDC) hostnames for the realm.
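A configuration sketch with a hypothetical realm and KDC hosts:

kerberos_realm: MY.COMPANY.COM
kerberos_admin_server: kdc01.my.company.com
kerberos_kdcs:
  - kdc01.my.company.com
  - kdc02.my.company.com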
letsencrypt-acme-sh-install¶
Install acme.sh client
This makes the acme.sh client available on the host.
Additionally a driver.sh script is installed to run the authentication procedure and parse output.
Role Variables
letsencrypt_gid¶
Default: unset
Unix group gid for the letsencrypt group, which has permissions on the /etc/letsencrypt-certificates directory. If unset, uses the system default. Useful if this conflicts with another role that assumes a gid value.
letsencrypt_account_email¶
Default: undefined
The email address to register with accounts. Renewal mail and other info may be sent here. Must be defined.
letsencrypt-create-certs¶
Generate letsencrypt certificates
This must run after the letsencrypt-acme-sh-install, letsencrypt-request-certs and letsencrypt-install-txt-record roles. It will run the acme.sh process to create the certificates on the host.
Role Variables
letsencrypt_self_sign_only¶
If set to True, will locally generate self-signed certificates in the same locations the real script would, instead of contacting letsencrypt. This is set during gate testing as the authentication tokens are not available.
letsencrypt_use_staging¶
If set to True, will use the letsencrypt staging environment rather than make production requests. Useful during initial provisioning of hosts to avoid affecting production quotas.
letsencrypt_certs¶
The same variable as described in letsencrypt-request-certs.
letsencrypt-install-txt-record¶
Install authentication records for letsencrypt
Install TXT records to the acme.opendev.org domain. This role runs only on the adns server, and assumes ownership of the /var/lib/bind/zones/acme.opendev.org/zone.db file. After installation the nameserver is refreshed.
After this, letsencrypt-create-certs can run on each host to provision the certificates.
Role Variables
acme_txt_required¶
A global dictionary of TXT records to be installed. This is generated in a prior step on each host by the letsencrypt-request-certs role.
letsencrypt-request-certs¶
Request certificates from letsencrypt
The role requests certificates (or renews expiring certificates, which is fundamentally the same thing) from letsencrypt for a host. This requires the acme.sh tool and driver, which should have been installed by the letsencrypt-acme-sh-install role.
This role does not create the certificates. It will request the certificates from letsencrypt and populate the authentication data into the acme_txt_required variable. These values need to be installed and activated on the DNS server by the letsencrypt-install-txt-record role; the letsencrypt-create-certs role will then finish the certificate provisioning process.
Role Variables
letsencrypt_use_staging¶
If set to True, will use the letsencrypt staging environment rather than make production requests. Useful during initial provisioning of hosts to avoid affecting production quotas.
letsencrypt_certs¶
A host wanting a certificate should define a dictionary variable letsencrypt_certs. Each key in this dictionary is a separate certificate to create (i.e. a host can create multiple separate certificates). Each key should have a list of hostnames valid for that certificate. The certificate will be named for the first entry. For example:

letsencrypt_certs:
  hostname-main-cert:
    - hostname01.opendev.org
    - hostname.opendev.org
  hostname-secondary-cert:
    - foo.opendev.org

will ultimately result in two certificates being provisioned on the host, in /etc/letsencrypt-certs/hostname01.opendev.org and /etc/letsencrypt-certs/foo.opendev.org.
Note the creation role letsencrypt-create-certs will call a handler letsencrypt updated {{ key }} (for example, letsencrypt updated hostname-main-cert) when that certificate is created or updated. Because Ansible errors if a handler is called with no listeners, you must define a listener for the event. letsencrypt-create-certs has handlers/main.yaml where handlers can be defined. Since handlers reside in a global namespace, you should choose an appropriately unique name.
Note that each entry will require a CNAME pointing the ACME challenge domain to the TXT record that will be created in the signing domain. For the example above, the following records would need to be pre-created:

_acme-challenge.hostname01.opendev.org.  IN  CNAME  acme.opendev.org.
_acme-challenge.hostname.opendev.org.    IN  CNAME  acme.opendev.org.
_acme-challenge.foo.opendev.org.         IN  CNAME  acme.opendev.org.
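A sketch of such a listener in a handlers/main.yaml, reacting to the certificate from the example above (the restarted service is hypothetical):

- name: Restart webserver on certificate update
  listen: letsencrypt updated hostname-main-cert
  service:
    name: apache2
    state: restarted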
logrotate¶
Add log rotation file
Note
This role does not manage the logrotate package or configuration directory; it is assumed to be installed and available.
This role installs a log rotation file in /etc/logrotate.d/ for a given file.
For information on the directives see logrotate.conf(5). This is not an exhaustive list of directives (contributions are welcome).
Role Variables
logrotate_file_name¶
The log file on disk to rotate
logrotate_config_file_name¶
Default: Unique name based on logrotate_file_name
The name of the configuration file in /etc/logrotate.d
logrotate_compress¶
Default: yes
logrotate_copytruncate¶
Default: yes
logrotate_delaycompress¶
Default: yes
logrotate_missingok¶
Default: yes
logrotate_rotate¶
Default: 7
logrotate_frequency¶
Default: daily
One of hourly, daily, weekly, monthly, yearly or size. If choosing size, logrotate_size must be specified.
logrotate_size¶
Default: None
Size; e.g. 100K, 10M, 1G. Only used when logrotate_frequency is size.
logrotate_notifempty¶
Default: yes
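A usage sketch rotating a hypothetical application log weekly:

- hosts: all
  roles:
    - role: logrotate
      vars:
        logrotate_file_name: /var/log/myapp/app.log
        logrotate_frequency: weekly
        logrotate_rotate: 4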
master-nameserver¶
Configure a hidden master nameserver
This role installs and configures bind9 to be a hidden master nameserver.
Role Variables
tsig_key¶
Type: dict
The TSIG key used to control named.
tsig_key{}.algorithm¶
The algorithm used by the key.
tsig_key{}.secret¶
The secret portion of the key.
dnssec_keys¶
Type: dict
This is a dictionary of DNSSEC keys. Each entry is a dnssec key, where the dictionary key is the dnssec key id and the value is a dictionary with the following contents:
dnssec_keys{}.zone¶
The name of the zone for this key.
dnssec_keys{}.public¶
The public portion of this key.
dnssec_keys{}.private¶
The private portion of this key.
dns_repos¶
Type: list
A list of zone file repos to check out on the server. Each item in the list is a dictionary with the following keys:
dns_repos[].name¶
The name of the repo.
dns_repos[].url¶
The URL of the git repository.
dns_repos[].refspec¶
An additional refspec passed to the git checkout
dns_repos[].version¶
An additional version passed to the git checkout
dns_zones¶
Type: list
A list of zones that should be served by named. Each item in the list is a dictionary with the following keys:
dns_zones[].name¶
The name of the zone.
dns_zones[].source¶
The repo name and path of the directory containing the zone file. For example, if a repo was provided to dns_repos.name with the name example.com, and within that repo the zone.db file was located at zones/example_com/zone.db, then the value here should be example.com/zones/example_com.
dns_zones[].unmanaged¶
Default: False
Type: bool
If True the zone is considered unmanaged. The source file will be put in place if it does not exist, but will otherwise be left alone.
dns_notify¶
Type: list
A list of IP addresses of nameservers which named should notify on updates.
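A configuration sketch tying these variables together (the repo URL, zone and address are hypothetical):

dns_repos:
  - name: example.com
    url: https://opendev.org/example/dns-example.com
dns_zones:
  - name: example.com
    source: example.com/zones/example_com
dns_notify:
  - 203.0.113.10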
mirror¶
Configure an opendev mirror
This role installs and configures a mirror node
Role Variables
mirror-update¶
This role sets up the mirror-update host, which does the periodic sync of upstream mirrors to the AFS volumes.
It is not intended to be a particularly generic or flexible role, as there is usually only one instance of the mirror-update host (to avoid conflicting updates).
At this stage, it handles only the mirrors that are updated by rsync. It is expected that it will grow to cover mirroring other volumes that are currently done by the legacy openstack.org host and managed by puppet.
Role Variables
nameserver¶
Configure an authoritative nameserver
This role installs and configures nsd to be an authoritative nameserver.
Role Variables
tsig_key¶
Type: dict
The TSIG key used to authenticate connections between nameservers.
tsig_key{}.algorithm¶
The algorithm used by the key.
tsig_key{}.secret¶
The secret portion of the key.
dns_zones¶
Type: list
A list of zones that should be served by named. Each item in the list is a dictionary with the following keys:
dns_zones[].name¶
The name of the zone.
dns_zones[].source¶
The repo name and path of the directory containing the zone file. For example, if a repo was provided to master-nameserver.dns_repos.name with the name example.com, and within that repo the zone.db file was located at zones/example_com/zone.db, then the value here should be example.com/zones/example_com.
dns_master¶
The IP addresses of the master nameserver.
nodepool-base¶
nodepool base setup
Role Variables
nodepool_base_install_zookeeper¶
Install zookeeper on the node. This is not expected to be used in production, where the nodes would connect to an externally configured zookeeper instance. It can be useful for basic loopback tests in the gate, however.
nodepool-base-legacy¶
Minimal nodepool requirements for mixed puppet/ansible deployment.
Create minimal nodepool requirements so that we can manage nodepool servers with ansible and puppet while we transition.
Role Variables
None
nodepool-builder¶
Deploy nodepool-builder container
Role Variables
nodepool_builder_container_tag¶
Default: unset
Override tag for container deployment
openafs-client¶
An ansible role to configure an OpenAFS client
Note
This role uses system packages where available, but for platforms or architectures where they are not available it requires external builds. Defaults will pick external packages from OpenStack Infra builds, but you should evaluate if this is suitable for your environment.
This role configures the host to be an OpenAFS client. Because OpenAFS is very reliant on distribution internals, kernel versions and host architecture, this role has limited platform support. Currently supported are:
- Debian family with system packages available
- Ubuntu Xenial with ARM64, with external 1.8 series packages
- CentOS 7 with external packages
Role Variables
openafs_client_cell¶
Default: openstack.org
The default cell.
openafs_client_cache_size¶
Default: 500000
The OpenAFS client cache size, in kilobytes.
openafs_client_cache_directory¶
Default: /var/cache/openafs
The directory to store the OpenAFS cache files.
openafs_client_yum_repo_url¶
Default: https://tarballs.openstack.org/project-config/package-afs-centos7
The URL to a yum/dnf repository with the OpenAFS client RPMs. These are assumed to be created from the .spec file included in the OpenAFS distribution.
openafs_client_yum_repo_gpg_check¶
Default: no
Enable or disable gpg checking for openafs_client_yum_repo_url
openafs_client_apt_repo¶
Default: ppa:openstack-ci-core/openafs-arm64
Source string for the APT repository for Debian family hosts requiring external packages (currently ARM64)
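A sketch overriding the client defaults (values are illustrative only):

openafs_client_cell: openstack.org
openafs_client_cache_size: 1000000  # in kilobytes, i.e. ~1GB
openafs_client_cache_directory: /var/cache/openafs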
pip3¶
Install system packages for python3 pip and virtualenv
Role Variables
None
puppet-install¶
Install puppet on a host
Note
This role uses puppetlabs versions where available, in preference to system packages.
This role installs puppet on a host.
Role Variables
puppet_install_version¶
Default: 3
The puppet version to install. Platform support for various versions varies.
puppet_install_system_config_modules¶
Default: yes
Whether to clone and run install_modules.sh from the OpenDev system-config repository to populate required puppet modules on the host.
registry¶
Install, configure, and run a Docker registry.
root-keys¶
Write out root SSH private key
Role Variables
root_rsa_key¶
The root key to place in /root/.ssh/id_rsa
set-hostname¶
Set hostname
Remove cloud-init and statically set the hostname, hosts and mailname
Role Variables
None
snmpd¶
Installs and configures the net-snmp daemon
static¶
Configure a static webserver
This role installs and configures a static webserver to serve content published in AFS
Role Variables
timezone¶
Configures timezone to Etc/UTC and restarts crond when changed.
Role Variables
None
unbound¶
Installs and configures the unbound DNS resolver
users¶
Configure users on a server
Configure users on a server. Users are given sudo access.
Role Variables
all_users¶
Default: {}
Dictionary of all users. Each user needs a uid, gid and key.
base_users¶
Default: []
Users to install on all hosts
extra_users¶
Default: []
Extra users to install on a specific host or group
disabled_users¶
Default: []
Users who should be removed from all hosts
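A sketch of the user dictionaries (usernames, ids and key material are hypothetical):

all_users:
  alice:
    uid: 2001
    gid: 2001
    key: ssh-ed25519 AAAA... alice@example.com
base_users:
  - alice
disabled_users:
  - mallory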
vos-release¶
vos release with localauth
Install a user and script to do a remote vos release with localauth authentication. This can avoid kerberos or AFS timeouts.
This relies on vos_release_keypair, which is expected to be a single keypair set previously by hosts in the "mirror-update" group. It will allow that keypair to run /usr/local/bin/vos_release.sh, which filters the incoming command. Releases are expected to be triggered on the update host with:

ssh -i /root/.ssh/id_vos_release afs01.dfw.openstack.org vos release <mirror>.<volume>

Future work, if required:
- Allow multiple hosts to call the release script (i.e. handle multiple keys).
- Implement locking within the vos_release.sh script to prevent too many simultaneous releases.
Role Variables
vos_release_keypair¶
The authorized key allowed to run the /usr/local/bin/vos_release.sh script
zuul-preview¶
Install, configure, and run zuul-preview.