Story
As an operator, I would like to build a small cloud with both virtual and bare
metal instances, or add bare metal provisioning to my existing small or medium
scale single-site OpenStack cloud. The expected number of bare metal machines
is less than 100, and the rate of provisioning and unprovisioning is expected
to be low. All users of my cloud are trusted by me not to conduct malicious
actions towards each other or towards the cloud infrastructure itself.
As a user, I would like to occasionally provision bare metal instances through
the Compute API by selecting an appropriate Compute flavor. I would like
to be able to boot them from images provided by the Image service or from
volumes provided by the Volume service.
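For illustration, here is a minimal sketch of such a request using the
openstacksdk Python library. The cloud, flavor, image, and network names are
placeholders for values specific to your deployment:

    import openstack

    # Credentials are read from clouds.yaml or OS_* environment variables.
    conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

    # Selecting a flavor that corresponds to bare metal nodes causes the
    # Compute service to schedule the instance onto an available node.
    server = conn.create_server(
        name='bm-instance-0',            # placeholder instance name
        flavor='my-baremetal-flavor',    # placeholder bare metal flavor
        image='my-image',                # an image from the Image service
        network='my-network',            # placeholder network name
        wait=True,
    )
    print(server.status)  # ACTIVE once provisioning finishes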
Node roles
An OpenStack installation in this guide has at least these three types of
nodes:
A controller node hosts the control plane services.
A compute node runs the virtual machines and hosts a subset of Compute
and Networking components.
A block storage node provides persistent storage space for both virtual
and bare metal nodes.
The compute and block storage nodes are configured as described in the
installation guides of the Compute service and the Volume service,
respectively. The controller nodes host the Bare Metal service components.
Networking
The networking architecture depends heavily on the exact operating
requirements. This guide expects the following networks to already exist:
control plane, storage, and public. Additionally, two more networks are
needed specifically for bare metal provisioning: bare metal and management.
Control plane network
The control plane network is the network where OpenStack control plane
services provide their public API.
The Bare Metal API is served to operators and to the Compute service
through this network.
Public network
The public network is used in a typical OpenStack deployment to create
floating IPs for outside access to instances. Its role is the same for a bare
metal deployment.
Note
Since bare metal nodes are put on a flat provider network (as explained
below), it is also possible to organize direct access to them without using
floating IPs, bypassing the Networking service completely.
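As a sketch of that direct-access variant, a flat provider network could be
created along these lines with openstacksdk. This assumes admin credentials
and that 'physnet1' is the physical network label configured in the
Networking service; all names and the CIDR are placeholders:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

    # A flat provider network mapped onto the physical bare metal network.
    network = conn.network.create_network(
        name='baremetal',
        provider_network_type='flat',
        provider_physical_network='physnet1',  # assumed physical network label
    )

    # Instances get addresses directly from this subnet, so they are
    # reachable without floating IPs.
    conn.network.create_subnet(
        name='baremetal-subnet',
        network_id=network.id,
        ip_version=4,
        cidr='192.0.2.0/24',  # placeholder CIDR
    )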
Management network
The management network is an independent network on which the BMCs of the
bare metal nodes are located. The ironic-conductor process needs access to
this network. The tenants of the bare metal nodes must not have access to it.
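To make this concrete, here is a hedged sketch of enrolling a node whose BMC
lives on the management network, using openstacksdk and the ipmi hardware
type. The node name, BMC address, and credentials are placeholders:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

    # The BMC address below is on the management network; only the
    # ironic-conductor process needs to be able to reach it.
    node = conn.baremetal.create_node(
        name='node-0',                     # placeholder node name
        driver='ipmi',
        driver_info={
            'ipmi_address': '192.0.2.10',  # placeholder BMC address
            'ipmi_username': 'admin',      # placeholder credentials
            'ipmi_password': 'secret',
        },
    )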
Controllers
A controller hosts the OpenStack control plane services as described in the
control plane design guide. While this architecture allows using
controllers in a non-HA configuration, it is recommended to have at least
three of them for HA. See HA and Scalability for more details.
Shared services
A controller also hosts two services required for the normal operation
of OpenStack:
Database service (MySQL/MariaDB is typically used, but other
enterprise-grade database solutions can be used as well).
All Bare Metal service components need access to the database service.
Message queue service (RabbitMQ is typically used, but other
enterprise-grade message queue brokers can be used as well).
Both the Bare Metal API (a WSGI application or the ironic-api process) and
the ironic-conductor processes need access to the message queue service.
The Bare Metal Introspection service does not need it.
Note
These services are required by all OpenStack services. If you're adding
the Bare Metal service to an existing cloud, you may reuse its existing
database and message queue services.
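For example, in ironic.conf the shared services are referenced roughly as
follows (the controller hostname and the passwords are placeholders; see the
installation guide for the exact settings for your environment):

    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller

    [database]
    connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic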
Storage
If your hardware and its bare metal driver support booting from remote
volumes, please check the driver documentation for information on how to
enable it. Doing so may involve routing the management and/or bare metal
networks to the storage network.
In the case of standard PXE boot, booting from remote volumes is done via
iPXE. In that case, the Volume service storage back end must support the
iSCSI protocol, and the bare metal network must have a route to the storage
network. See Boot From Volume for more details.
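As a rough sketch of the user-facing side, booting a bare metal instance from
a volume with openstacksdk could look like this. The flavor, image, network,
and volume size are placeholders, and the deployment must meet the
boot-from-volume requirements described above:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

    # Create a bootable volume from an image in the Image service.
    volume = conn.create_volume(
        size=50,             # placeholder size in GiB
        image='my-image',    # placeholder image name
        bootable=True,
        wait=True,
    )

    # Boot the bare metal instance from the volume instead of an image.
    server = conn.create_server(
        name='bm-from-volume',           # placeholder instance name
        flavor='my-baremetal-flavor',    # placeholder bare metal flavor
        boot_volume=volume.id,
        network='my-network',            # placeholder network name
        wait=True,
    )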