This architecture example provides layer-2 connectivity between instances and the physical network infrastructure using VLAN (802.1q) tagging. It supports one untagged (flat) network and up to 4094 tagged (VLAN) networks (802.1q VLAN IDs 1-4094). The actual quantity of VLAN networks depends on the physical network infrastructure. For more information on provider networks, see Provider networks.
Warning
Linux distributions often package older releases of Open vSwitch that can introduce issues during operation with the Networking service. We recommend using at least the latest long-term stable (LTS) release of Open vSwitch for the best experience and support from Open vSwitch. See http://www.openvswitch.org for available releases and installation instructions for building newer releases from source on various distributions.
One controller node with the following components:
- Two network interfaces: management and provider.
- OpenStack Networking server service and ML2 plug-in.
Two compute nodes with the following components:
- Two network interfaces: management and provider.
- OpenStack Networking Open vSwitch (OVS) layer-2 agent, DHCP agent, metadata agent, and any dependencies including OVS.
Note
Larger deployments typically deploy the DHCP and metadata agents on a subset of compute nodes to increase performance and redundancy. However, too many agents can overwhelm the message bus. Also, to further simplify any deployment, you can omit the metadata agent and use a configuration drive to provide metadata to instances.
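If you take the configuration drive approach, the Compute service can be told to always attach one. A minimal sketch, assuming the option lives in the [DEFAULT] section of the nova.conf file on the compute nodes (verify against the configuration reference for your release):
[DEFAULT]
# Always attach a config drive so instances can obtain metadata without the metadata agent
force_config_drive = true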
The following figure shows components and connectivity for one untagged (flat) network. In this particular case, the instance resides on the same compute node as the DHCP agent for the network. If the DHCP agent resides on another compute node, the latter only contains a DHCP namespace with a port on the OVS integration bridge.
The following figure describes virtual connectivity among components for two tagged (VLAN) networks. Essentially, all networks use a single OVS integration bridge with different internal VLAN tags. The internal VLAN tags almost always differ from the network VLAN assignment in the Networking service. Similar to the untagged network case, the DHCP agent may reside on a different compute node.
Note
These figures omit the controller node because it does not handle instance network traffic.
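Once the example configuration below is in place, you can observe this tag translation on a compute node. The following is only a verification sketch (bridge and port names assume the example configuration in this guide); look for mod_vlan_vid actions in the flow output:
# ovs-vsctl --columns=name,tag list Port
# ovs-ofctl dump-flows br-provider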
Use the following example configuration as a template to deploy provider networks in your environment.
Install the Networking service components that provide the neutron-server service and ML2 plug-in.
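Package names vary by distribution. For example, on Ubuntu a plausible sketch is:
# apt install neutron-server neutron-plugin-ml2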
In the neutron.conf file:
Configure common options:
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your OpenStack release to obtain the appropriate additional configuration for the [DEFAULT], [database], [keystone_authtoken], [nova], and [agent] sections.
Disable service plug-ins because provider networks do not require any. However, this breaks portions of the dashboard that manage the Networking service. See the Queens Install Tutorials and Guides for more information.
[DEFAULT]
service_plugins =
Enable two DHCP agents per network so both compute nodes can provide DHCP service for provider networks.
[DEFAULT]
dhcp_agents_per_network = 2
If necessary, configure MTU.
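For example, if the physical provider network carries jumbo frames, a sketch of the relevant options might look like the following; the 9000-byte value is only an assumption, so use the MTU your infrastructure actually supports. In the neutron.conf file:
[DEFAULT]
global_physnet_mtu = 9000
Optionally, per-physical-network values can go in the ml2_conf.ini file:
[ml2]
physical_network_mtus = provider:9000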
In the ml2_conf.ini file:
Configure drivers and network types:
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = openvswitch
extension_drivers = port_security
Configure network mappings:
[ml2_type_flat]
flat_networks = provider
[ml2_type_vlan]
network_vlan_ranges = provider
Note
The tenant_network_types option contains no value because the architecture does not support self-service networks.
Note
The provider value in the network_vlan_ranges option lacks VLAN ID ranges to support use of arbitrary VLAN IDs.
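If you prefer that the Networking service validate and allocate VLAN IDs from a specific range instead, the option accepts a range. A sketch assuming VLAN IDs 101-200 are reserved for this deployment:
[ml2_type_vlan]
network_vlan_ranges = provider:101:200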
Populate the database.
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Start the following services:
- Server
Install the Networking service OVS layer-2 agent, DHCP agent, and metadata agent.
Install OVS.
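As with the controller node, package names vary by distribution. On Ubuntu, for example, a plausible sketch covering the agents and OVS:
# apt install neutron-openvswitch-agent neutron-dhcp-agent neutron-metadata-agent openvswitch-switch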
In the neutron.conf file, configure common options:
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your OpenStack release to obtain the appropriate additional configuration for the [DEFAULT], [database], [keystone_authtoken], [nova], and [agent] sections.
In the openvswitch_agent.ini file, configure the OVS agent:
[ovs]
bridge_mappings = provider:br-provider
[securitygroup]
firewall_driver = iptables_hybrid
In the dhcp_agent.ini file, configure the DHCP agent:
[DEFAULT]
interface_driver = openvswitch
enable_isolated_metadata = True
force_metadata = True
Note
The force_metadata option forces the DHCP agent to provide a host route to the metadata service on 169.254.169.254 regardless of whether the subnet contains an interface on a router, thus maintaining similar and predictable metadata behavior among subnets.
In the metadata_agent.ini file, configure the metadata agent:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
The value of METADATA_SECRET must match the value of the same option in the [neutron] section of the nova.conf file.
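For reference, a sketch of the corresponding Compute service configuration, assuming the metadata proxy is enabled in the nova.conf file:
[neutron]
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET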
Start the following services:
- OVS
Create the OVS provider bridge br-provider:
$ ovs-vsctl add-br br-provider
Add the provider network interface as a port on the OVS provider bridge br-provider:
$ ovs-vsctl add-port br-provider PROVIDER_INTERFACE
Replace PROVIDER_INTERFACE with the name of the underlying interface that handles provider networks. For example, eth1.
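To confirm the bridge and port, one possible verification sketch:
# ovs-vsctl list-ports br-provider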
Start the following services:
- OVS agent
- DHCP agent
- Metadata agent
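Service unit names vary by distribution. On a systemd-based Ubuntu host, for example, a sketch might be:
# systemctl enable --now neutron-openvswitch-agent neutron-dhcp-agent neutron-metadata-agent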
Source the administrative project credentials.
Verify presence and operation of the agents:
$ openstack network agent list
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
| 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | True | UP | neutron-metadata-agent |
| 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | True | UP | neutron-openvswitch-agent |
| 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
| a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | True | UP | neutron-metadata-agent |
| af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
| bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | True | UP | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
The configuration supports one flat or multiple VLAN provider networks. For simplicity, the following procedure creates one flat provider network.
Source the administrative project credentials.
Create a flat network.
$ openstack network create --share --provider-physical-network provider \
--provider-network-type flat provider1
+---------------------------+-----------+
| Field | Value |
+---------------------------+-----------+
| admin_state_up | UP |
| mtu | 1500 |
| name | provider1 |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| router:external | Internal |
| shared | True |
| status | ACTIVE |
+---------------------------+-----------+
Note
The --share option allows any project to use this network. To limit access to provider networks, see Role-Based Access Control (RBAC).
Note
To create a VLAN network instead of a flat network, change --provider-network-type flat to --provider-network-type vlan and add --provider-segment with a value referencing the VLAN ID.
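For example, a sketch that creates a VLAN provider network; VLAN ID 101 is an arbitrary example, so use an ID that your physical network actually carries:
$ openstack network create --share --provider-physical-network provider \
  --provider-network-type vlan --provider-segment 101 provider-vlan101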
Create an IPv4 subnet on the provider network.
$ openstack subnet create --subnet-range 203.0.113.0/24 --gateway 203.0.113.1 \
--network provider1 --allocation-pool start=203.0.113.11,end=203.0.113.250 \
--dns-nameserver 8.8.4.4 provider1-v4
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| allocation_pools | 203.0.113.11-203.0.113.250 |
| cidr | 203.0.113.0/24 |
| dns_nameservers | 8.8.4.4 |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| ip_version | 4 |
| name | provider1-v4 |
+-------------------+----------------------------+
Important
Enabling DHCP causes the Networking service to provide DHCP, which can interfere with existing DHCP services on the physical network infrastructure. Use the --no-dhcp option to have the subnet managed by existing DHCP services.
Create an IPv6 subnet on the provider network.
$ openstack subnet create --subnet-range fd00:203:0:113::/64 --gateway fd00:203:0:113::1 \
--ip-version 6 --ipv6-address-mode slaac --network provider1 \
--dns-nameserver 2001:4860:4860::8844 provider1-v6
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | fd00:203:0:113::2-fd00:203:0:113:ffff:ffff:ffff:ffff |
| cidr | fd00:203:0:113::/64 |
| dns_nameservers | 2001:4860:4860::8844 |
| enable_dhcp | True |
| gateway_ip | fd00:203:0:113::1 |
| ip_version | 6 |
| ipv6_address_mode | slaac |
| ipv6_ra_mode | None |
| name | provider1-v6 |
+-------------------+------------------------------------------------------+
Note
The Networking service uses the layer-3 agent to provide router advertisement. Provider networks rely on physical network infrastructure for layer-3 services rather than the layer-3 agent. Thus, the physical network infrastructure must provide router advertisement on provider networks for proper operation of IPv6.
On each compute node, verify creation of the qdhcp namespace.
# ip netns
qdhcp-8b868082-e312-4110-8627-298109d4401c
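Optionally, inspect the DHCP port inside the namespace; a sketch using the namespace name from the output above:
# ip netns exec qdhcp-8b868082-e312-4110-8627-298109d4401c ip addr show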
Source the credentials of a regular (non-administrative) project.
Create the appropriate security group rules to allow ping and SSH access to instances using the network.
$ openstack security group rule create --proto icmp default
+------------------+-----------+
| Field | Value |
+------------------+-----------+
| direction | ingress |
| ethertype | IPv4 |
| protocol | icmp |
| remote_ip_prefix | 0.0.0.0/0 |
+------------------+-----------+
$ openstack security group rule create --ethertype IPv6 --proto ipv6-icmp default
+-----------+-----------+
| Field | Value |
+-----------+-----------+
| direction | ingress |
| ethertype | IPv6 |
| protocol | ipv6-icmp |
+-----------+-----------+
$ openstack security group rule create --proto tcp --dst-port 22 default
+------------------+-----------+
| Field | Value |
+------------------+-----------+
| direction | ingress |
| ethertype | IPv4 |
| port_range_max | 22 |
| port_range_min | 22 |
| protocol | tcp |
| remote_ip_prefix | 0.0.0.0/0 |
+------------------+-----------+
$ openstack security group rule create --ethertype IPv6 --proto tcp --dst-port 22 default
+------------------+-----------+
| Field | Value |
+------------------+-----------+
| direction | ingress |
| ethertype | IPv6 |
| port_range_max | 22 |
| port_range_min | 22 |
| protocol | tcp |
+------------------+-----------+
Launch an instance with an interface on the provider network. For example, a CirrOS image using flavor ID 1.
$ openstack server create --flavor 1 --image cirros \
--nic net-id=NETWORK_ID provider-instance1
Replace NETWORK_ID with the ID of the provider network.
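One way to obtain the ID, sketched here with openstackclient formatting options:
$ openstack network show provider1 -f value -c id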
Determine the IPv4 and IPv6 addresses of the instance.
$ openstack server list
+--------------------------------------+--------------------+--------+------------------------------------------------------------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+--------------------+--------+------------------------------------------------------------+------------+
| 018e0ae2-b43c-4271-a78d-62653dd03285 | provider-instance1 | ACTIVE | provider1=203.0.113.13, fd00:203:0:113:f816:3eff:fe58:be4e | cirros |
+--------------------------------------+--------------------+--------+------------------------------------------------------------+------------+
On the controller node or any host with access to the provider network, ping the IPv4 and IPv6 addresses of the instance.
$ ping -c 4 203.0.113.13
PING 203.0.113.13 (203.0.113.13) 56(84) bytes of data.
64 bytes from 203.0.113.13: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.13: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.13: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.13: icmp_req=4 ttl=63 time=0.929 ms
--- 203.0.113.13 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
$ ping6 -c 4 fd00:203:0:113:f816:3eff:fe58:be4e
PING fd00:203:0:113:f816:3eff:fe58:be4e(fd00:203:0:113:f816:3eff:fe58:be4e) 56 data bytes
64 bytes from fd00:203:0:113:f816:3eff:fe58:be4e icmp_seq=1 ttl=64 time=1.25 ms
64 bytes from fd00:203:0:113:f816:3eff:fe58:be4e icmp_seq=2 ttl=64 time=0.683 ms
64 bytes from fd00:203:0:113:f816:3eff:fe58:be4e icmp_seq=3 ttl=64 time=0.762 ms
64 bytes from fd00:203:0:113:f816:3eff:fe58:be4e icmp_seq=4 ttl=64 time=0.486 ms
--- fd00:203:0:113:f816:3eff:fe58:be4e ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.486/0.796/1.253/0.282 ms
Obtain access to the instance.
Test IPv4 and IPv6 connectivity to the Internet or other external network.
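For example, from within the instance, pinging the DNS resolvers configured on the subnets exercises both IPv4 and IPv6 external connectivity (a sketch; any reachable external destination works):
$ ping -c 4 8.8.4.4
$ ping6 -c 4 2001:4860:4860::8844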
The following sections describe the flow of network traffic in several common scenarios. North-south network traffic travels between an instance and an external network such as the Internet. East-west network traffic travels between instances on the same or different networks. In all scenarios, the physical network infrastructure handles switching and routing among provider networks and external networks such as the Internet. Each case references one or more of the following components:
- Provider network 1 (VLAN)
  - VLAN ID 101 (tagged)
- Provider network 2 (VLAN)
  - VLAN ID 102 (tagged)
- Instance 1
- Instance 2
North-south: the instance resides on compute node 1 and uses provider network 1. The instance sends a packet to a host on the Internet.
The following steps involve compute node 1:
1. The instance interface forwards the packet to the security group bridge instance port via veth pair.
2. Security group rules on the security group bridge handle firewalling and connection tracking for the packet.
3. The security group bridge OVS port forwards the packet to the OVS integration bridge security group port via veth pair.
4. The OVS integration bridge adds an internal VLAN tag to the packet.
5. The OVS integration bridge int-br-provider patch port (6) forwards the packet to the OVS provider bridge phy-br-provider patch port (7).
6. The OVS provider bridge swaps the internal VLAN tag with the actual VLAN tag of provider network 1 (101).
7. The OVS provider bridge provider network port forwards the packet to the physical network interface.
8. The physical network interface forwards the packet to the physical network infrastructure switch.
The following steps involve the physical network infrastructure:
1. The switch removes VLAN tag 101 from the packet and forwards it to the router.
2. The router routes the packet from the provider network to the external network.
3. The router forwards the packet to the external network.
Note
Return traffic follows similar steps in reverse.
Instances on the same network communicate directly between compute nodes containing those instances.
In this scenario, instance 1 resides on compute node 1 and instance 2 resides on compute node 2, both on provider network 1. Instance 1 sends a packet to instance 2.
The following steps involve compute node 1:
1. The instance 1 interface forwards the packet to the security group bridge instance port via veth pair.
2. Security group rules on the security group bridge handle firewalling and connection tracking for the packet.
3. The security group bridge OVS port forwards the packet to the OVS integration bridge security group port via veth pair.
4. The OVS integration bridge adds an internal VLAN tag to the packet.
5. The OVS integration bridge int-br-provider patch port (6) forwards the packet to the OVS provider bridge phy-br-provider patch port (7).
6. The OVS provider bridge swaps the internal VLAN tag with the actual VLAN tag of provider network 1 (101).
7. The OVS provider bridge provider network port forwards the packet to the physical network interface.
8. The physical network interface forwards the packet to the physical network infrastructure switch.
The following steps involve the physical network infrastructure:
1. The switch forwards the packet from compute node 1 to compute node 2.
The following steps involve compute node 2:
1. The physical network interface forwards the packet to the OVS provider bridge provider network port.
2. The OVS provider bridge phy-br-provider patch port (14) forwards the packet to the OVS integration bridge int-br-provider patch port (15).
3. The OVS integration bridge swaps the actual VLAN tag of provider network 1 (101) with the internal VLAN tag.
4. The OVS integration bridge security group port forwards the packet to the security group bridge OVS port.
5. Security group rules on the security group bridge handle firewalling and connection tracking for the packet.
6. The security group bridge instance port forwards the packet to the instance 2 interface via veth pair.
Note
Return traffic follows similar steps in reverse.
Instances on different networks communicate via a router on the physical network infrastructure.
Note
Both instances reside on the same compute node to illustrate how VLAN tagging enables multiple logical layer-2 networks to use the same physical layer-2 network.
In this scenario, instance 1 uses provider network 1 and instance 2 uses provider network 2. Instance 1 sends a packet to instance 2.
The following steps involve the compute node:
1. The instance 1 interface forwards the packet to the security group bridge instance port via veth pair.
2. Security group rules on the security group bridge handle firewalling and connection tracking for the packet.
3. The security group bridge OVS port forwards the packet to the OVS integration bridge security group port via veth pair.
4. The OVS integration bridge adds an internal VLAN tag to the packet.
5. The OVS integration bridge int-br-provider patch port (6) forwards the packet to the OVS provider bridge phy-br-provider patch port (7).
6. The OVS provider bridge swaps the internal VLAN tag with the actual VLAN tag of provider network 1 (101).
7. The OVS provider bridge provider network port forwards the packet to the physical network interface.
8. The physical network interface forwards the packet to the physical network infrastructure switch.
The following steps involve the physical network infrastructure:
1. The switch removes VLAN tag 101 from the packet and forwards it to the router.
2. The router routes the packet from provider network 1 to provider network 2.
3. The router forwards the packet to the switch.
4. The switch adds VLAN tag 102 to the packet and forwards it to the compute node.
The following steps involve the compute node:
1. The physical network interface forwards the packet to the OVS provider bridge provider network port.
2. The OVS provider bridge phy-br-provider patch port (18) forwards the packet to the OVS integration bridge int-br-provider patch port (19).
3. The OVS integration bridge swaps the actual VLAN tag of provider network 2 (102) with the internal VLAN tag.
4. The OVS integration bridge security group port forwards the packet to the security group bridge OVS port.
5. Security group rules on the security group bridge handle firewalling and connection tracking for the packet.
6. The security group bridge instance port forwards the packet to the instance 2 interface via veth pair.
Note
Return traffic follows similar steps in reverse.