5.15. Message Queue Performance¶
- status: ready
- version: 1.0

Abstract

This document describes a test plan for measuring the performance of the OpenStack message bus. The measurement covers the message queue and the oslo.messaging library.
5.15.1. Test Plan¶
5.15.1.1. Test Environment¶
This section describes the setup for message queue testing. It can be either a single (all-in-one) or a multi-node installation.
A single-node setup requires just one node to be up and running. It has both compute and controller roles and all OpenStack services run on this node. This setup does not support hardware scaling or workload distribution tests.
A basic multi-node setup with RabbitMQ or ActiveMQ comprises 5 physical nodes:

- One node for a compute node. This node simulates activity which is typical for OpenStack compute components.
- One node for a controller node. This node simulates activity which is typical for OpenStack control plane services.
- Three nodes allocated for the MQ cluster.
When using ZeroMQ, the basic multi-node setup can be reduced to two physical nodes:

- One node for a compute node, as above.
- One node for a controller node. This node also acts as a Redis host for match-making purposes.
5.15.1.1.1. Preparation¶
RabbitMQ Installation and Configuration
Install RabbitMQ server package:
sudo apt-get install rabbitmq-server
Configure RabbitMQ on each node by editing /etc/rabbitmq/rabbitmq.config:
[
{rabbit, [
{cluster_partition_handling, autoheal},
{default_permissions, [<<".*">>, <<".*">>, <<".*">>]},
{default_vhost, <<"/">>},
{log_levels, [{connection,info}]},
{mnesia_table_loading_timeout, 10000},
{tcp_listen_options, [
binary,
{packet, raw},
{reuseaddr, true},
{backlog, 128},
{nodelay, true},
{exit_on_close, false},
{keepalive, true}
]},
{default_user, <<"stackrabbit">>},
{default_pass, <<"password">>}
]},
{kernel, [
{inet_default_connect_options, [{nodelay,true}]},
{inet_dist_listen_max, 41055},
{inet_dist_listen_min, 41055},
{net_ticktime, 10}
]},
{rabbitmq_management, [
{listener, [
{port, 15672}
]}
]}
].
Stop RabbitMQ on nodes 2 and 3:
sudo service rabbitmq-server stop
Make the Erlang cookie on nodes 2 and 3 the same as on node 1 by copying /var/lib/rabbitmq/.erlang.cookie from node 1.
Start RabbitMQ server:
sudo service rabbitmq-server start
Stop the RabbitMQ application on nodes 2 and 3, but leave the Erlang node running:
sudo rabbitmqctl stop_app
Join nodes 2 and 3 to node 1:
rabbitmqctl join_cluster rabbit@node-1
Start app on nodes 2 and 3:
sudo rabbitmqctl start_app
Add the required user and set permissions:
sudo rabbitmqctl add_user stackrabbit password
sudo rabbitmqctl set_permissions stackrabbit ".*" ".*" ".*"
ActiveMQ Installation and Configuration
This section describes installation and configuration steps for an ActiveMQ message queue implementation. ActiveMQ is based on Java technologies so it requires a Java runtime. Actual performance will depend on the Java version as well as the hardware specification. The following steps should be performed for an ActiveMQ installation:
Install Java on nodes node-1, node-2 and node-3:
sudo apt-get install default-jre
Download the latest ActiveMQ binary:
wget http://www.eu.apache.org/dist/activemq/5.12.0/apache-activemq-5.12.0-bin.tar.gz
Unzip the archive:
tar zxvf apache-activemq-5.12.0-bin.tar.gz
Install everything needed for ZooKeeper:
download ZK binaries:
wget http://www.eu.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
unzip the archive:
tar zxvf zookeeper-3.4.6.tar.gz
create the /home/ubuntu/zookeeper-3.4.6/conf/zoo.cfg file:
tickTime=2000
dataDir=/home/ubuntu/zookeeper-3.4.6/data
dataLogDir=/home/ubuntu/zookeeper-3.4.6/logs
clientPort=2181
initLimit=10
syncLimit=5
server.1=10.4.1.107:2888:3888
server.2=10.4.1.119:2888:3888
server.3=10.4.1.111:2888:3888
Note
Here 10.4.1.x are the IP addresses of the ZooKeeper nodes where ZK is installed. ZK will be run in cluster mode with majority voting, so at least 3 nodes are required.
* create the dataDir and dataLogDir directories
* for each MQ node create a myid file in dataDir with the id of the
  server and nothing else. For node-1 the file will contain one line
  with 1, node-2 with 2, and node-3 with 3.
* start ZooKeeper (on each node): ``./zkServer.sh start``
* check ZK status with: ``./zkServer.sh status``
* Configure ActiveMQ (in the apache-activemq-5.12.0/conf/activemq.xml file, set
  the hostname parameter to the node address):
<broker brokerName="broker" ... >
...
<persistenceAdapter>
<replicatedLevelDB
directory="activemq-data"
replicas="3"
bind="tcp://0.0.0.0:0"
zkAddress="10.4.1.107:2181,10.4.1.111:2181,10.4.1.119:2181"
zkPassword="password"
zkPath="/activemq/leveldb-stores"
hostname="10.4.1.107"
/>
</persistenceAdapter>
<plugins>
<simpleAuthenticationPlugin>
<users>
<authenticationUser username="stackrabbit" password="password"
groups="users,guests,admins"/>
</users>
</simpleAuthenticationPlugin>
</plugins>
...
</broker>
After ActiveMQ is installed and configured it can be started with the command ``./activemq start``, or ``./activemq console`` for a foreground process.
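The server.N lines in zoo.cfg pair each ZooKeeper server id with its address, and the per-node myid file must match that id. The mapping can be derived from the config rather than typed by hand; the following is a minimal sketch (parse_servers and write_myid are illustrative helper names, and in practice data_dir would be the real dataDir):

```python
import os
import re

# The server lines from the zoo.cfg shown earlier in this section.
ZOO_CFG = """\
server.1=10.4.1.107:2888:3888
server.2=10.4.1.119:2888:3888
server.3=10.4.1.111:2888:3888
"""

def parse_servers(cfg_text):
    """Map server id -> IP address from server.N=host:peerPort:electionPort lines."""
    servers = {}
    for m in re.finditer(r"^server\.(\d+)=([^:]+):\d+:\d+", cfg_text, re.M):
        servers[int(m.group(1))] = m.group(2)
    return servers

def write_myid(data_dir, server_id):
    """dataDir/myid must contain the server id and nothing else."""
    os.makedirs(data_dir, exist_ok=True)
    with open(os.path.join(data_dir, "myid"), "w") as f:
        f.write(str(server_id))
```

On each node the script would look up its own address in the parsed mapping and write the matching id into dataDir/myid.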
Oslo.messaging ActiveMQ Driver
All OpenStack changes (in the oslo.messaging library) to support ActiveMQ are already merged to the upstream repository. The relevant changes can be found in the amqp10-driver-implementation topic.
To run ActiveMQ even on the most basic all-in-one topology deployment the following requirements need to be satisfied:
Java JRE must be installed in the system. The Java version can be checked with the command ``java -version``. If Java is not installed an error message will appear. Java can be installed with the following command:

sudo apt-get install default-jre
ActiveMQ binaries should be installed in the system. See http://activemq.apache.org/getting-started.html for installation instructions. The latest stable version is currently http://apache-mirror.rbc.ru/pub/apache/activemq/5.12.0/apache-activemq-5.12.0-bin.tar.gz.
To use the OpenStack oslo.messaging amqp 1.0 driver, the following Python libraries need to be installed:
pip install "pyngus>=1.0.0,<2.0.0"
pip install python-qpid-proton
All OpenStack project configuration files containing the line ``rpc_backend = rabbit`` need to be modified to replace it with ``rpc_backend = amqp``, and then all the services need to be restarted.
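Flipping the backend across many configuration files can be scripted; below is a minimal sketch using Python's configparser (switch_rpc_backend is an illustrative name; note that rewriting a file with configparser drops comments, so a sed-style in-place edit may be preferable on real deployments):

```python
import configparser

def switch_rpc_backend(path, new_backend="amqp"):
    """Rewrite rpc_backend in the [DEFAULT] section of an oslo.config-style
    INI file. Returns True if the file was changed."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    if cfg.get("DEFAULT", "rpc_backend", fallback=None) == "rabbit":
        cfg.set("DEFAULT", "rpc_backend", new_backend)
        with open(path, "w") as f:
            cfg.write(f)  # caution: comments in the original file are lost
        return True
    return False
```

After running this over each service's configuration file, the services still need to be restarted for the change to take effect.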
ZeroMQ Installation
This section describes installation steps for ZeroMQ. ZeroMQ (also ZMQ or 0MQ) is an embeddable networking library that acts like a concurrency framework. Unlike broker-based drivers such as the RabbitMQ one, the ZeroMQ driver in oslo.messaging has no central broker. Instead, each host (running OpenStack services) is both a ZeroMQ client and a server. As a result, each host needs to listen on a certain TCP port for incoming connections and directly connect to other hosts simultaneously.
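The brokerless pattern described above, where every host both accepts incoming connections and dials out to peers, can be illustrated with plain TCP sockets. This is a stdlib-only conceptual sketch, not the actual ZeroMQ driver; the Peer class and the port numbers are arbitrary illustrations:

```python
import socket
import threading

class Peer:
    """Each host is simultaneously a server (listens) and a client (connects out)."""
    def __init__(self, port):
        self.port = port
        self.received = []
        self._srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._srv.bind(("127.0.0.1", port))
        self._srv.listen(5)
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Accept direct connections from other peers; no broker in between.
        while True:
            conn, _ = self._srv.accept()
            with conn:
                data = b""
                while True:
                    chunk = conn.recv(4096)
                    if not chunk:
                        break
                    data += chunk
                self.received.append(data.decode())

    def send(self, port, msg):
        # Connect straight to the target peer's listening port.
        with socket.create_connection(("127.0.0.1", port)) as c:
            c.sendall(msg.encode())

a, b = Peer(15701), Peer(15702)
a.send(15702, "rpc from a")
b.send(15701, "rpc from b")
```

In the real driver the same idea applies, but with ZeroMQ socket types and oslo.messaging RPC semantics on top.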
To set up ZeroMQ, only one step needs to be performed.
Install python bindings for ZeroMQ. All necessary packages will be installed as dependencies:
sudo apt-get install python-zmq
Note
The python-zmq version should be at least 14.0.1. Installing the package pulls in its dependencies, including libzmq3:

python-zmq
  Depends: python
  Depends: libc6
  Depends: libzmq3
Oslo.messaging ZeroMQ Driver
All OpenStack changes (in the oslo.messaging library) to support ZeroMQ are already merged to the upstream repository. You can find the relevant changes in the zmq-patterns-usage topic.
To run ZeroMQ on the most basic all-in-one topology deployment the following requirements need to be satisfied:
Python ZeroMQ bindings must be installed in the system.
Redis binaries should be installed in the system. See http://redis.io/download for instructions and details.
Note
The following changes need to be applied to all OpenStack project configuration files.
To enable the driver, in the section [DEFAULT] of each configuration file, the ‘rpc_backend’ flag must be set to ‘zmq’ and the ‘rpc_zmq_host’ flag must be set to the hostname of the node.
[DEFAULT]
rpc_backend = zmq
rpc_zmq_host = myopenstackserver.example.com
Set Redis as the match-making service:
[DEFAULT]
rpc_zmq_matchmaker = redis

[matchmaker_redis]
host = 127.0.0.1
port = 6379
password = None
Running ZeroMQ on a multi-node setup
The process of setting up oslo.messaging with ZeroMQ on a multi-node environment is very similar to the all-in-one installation.
On each node, rpc_zmq_host should be set to the node's FQDN. A Redis server should be up and running on a controller node or a separate host. Redis can be used with master-slave replication enabled, but the oslo.messaging ZeroMQ driver does not currently support Redis Sentinel, so it is not yet possible to achieve high availability, automatic failover, and fault tolerance.
The host parameter in the [matchmaker_redis] section should be set to the IP address of the host running the master Redis instance, e.g.:

[matchmaker_redis]
host = 10.0.0.3
port = 6379
password = None
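These fragments are plain oslo.config-style INI, so a quick parse can confirm the matchmaker settings before restarting services. A sketch using Python's configparser, with the example values from this section:

```python
import configparser

# Sample configuration as shown in this section.
sample = """
[DEFAULT]
rpc_zmq_matchmaker = redis

[matchmaker_redis]
host = 10.0.0.3
port = 6379
password = None
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# Pull out the values the driver will actually use.
matchmaker = cfg.get("DEFAULT", "rpc_zmq_matchmaker")
redis_host = cfg.get("matchmaker_redis", "host")
redis_port = cfg.getint("matchmaker_redis", "port")
```

The same parse works against a real file via cfg.read(path) instead of read_string.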
5.15.1.1.2. Environment description¶
The environment description includes hardware specification of servers, network parameters, operation system and OpenStack deployment characteristics.
5.15.1.1.2.1. Hardware¶
This section contains a list of all types of hardware nodes.
Parameter | Value | Comments
---|---|---
model | e.g. Supermicro X9SRD-F |
CPU | e.g. 6 x Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz |
5.15.1.1.2.2. Network¶
This section contains a list of interfaces and network parameters. For complicated cases this section may include a topology diagram and switch parameters.
Parameter | Value | Comments
---|---|---
card model | e.g. Intel |
driver | e.g. ixgbe |
speed | e.g. 10G or 1G |
5.15.1.1.2.3. Software¶
This section describes installed software.
Parameter | Value | Comments
---|---|---
OS | e.g. Ubuntu 14.04.3 |
oslo.messaging | e.g. 4.0.0 |
MQ Server | e.g. RabbitMQ 3.5.6 |
HA mode | e.g. Cluster |
5.15.1.2. Test Case 1: RPC Call Throughput Test¶
5.15.1.2.1. Description¶
This test measures the aggregate throughput of the MQ layer, including the oslo.messaging library. The test covers RPC call messages only. Message sizes vary, with a distribution modelled on data collected from a live environment.
5.15.1.2.2. List of performance metrics¶
The test case result is a series of the following measurements taken at different numbers of simultaneous threads. The output may be shown in table form and/or as a chart showing how the parameters depend on the number of threads.
Priority | Value | Measurement Units | Description
---|---|---|---
1 | Throughput | msg/sec | Number of messages per second
2 | Variance | msg/sec | Throughput variance over time
2 | Latency | ms | The latency in message processing
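Given per-message (sent, received) timestamp pairs from a test run, the three metrics above can be computed directly. A minimal sketch (mq_metrics is an illustrative name; the variance here is taken over one-second throughput windows, which is one possible definition of "variance over time"):

```python
import statistics

def mq_metrics(timestamps):
    """Compute throughput (msg/sec), throughput variance over one-second
    windows, and mean latency (ms) from (sent, received) pairs in seconds."""
    sent = [s for s, _ in timestamps]
    recv = [r for _, r in timestamps]
    duration = max(recv) - min(sent)
    throughput = len(timestamps) / duration
    # Count deliveries per one-second window to estimate instantaneous
    # throughput, then take the population variance across windows.
    t0 = min(sent)
    buckets = {}
    for _, r in timestamps:
        w = int(r - t0)
        buckets[w] = buckets.get(w, 0) + 1
    variance = statistics.pvariance(buckets.values())
    latency_ms = statistics.mean((r - s) * 1000 for s, r in timestamps)
    return throughput, variance, latency_ms
```

A real harness would collect the timestamp pairs from the client and server logs of the tool under test.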
5.15.1.3. Test Case 2: RPC Cast Throughput Test¶
5.15.1.3.1. Description¶
This test measures the aggregate throughput of the MQ layer, including the oslo.messaging library. The test covers RPC cast messages only. Message sizes vary, with a distribution modelled on data collected from a live environment.
5.15.1.3.2. List of performance metrics¶
The test case result is a series of the following measurements taken at different numbers of simultaneous threads. The output may be shown in table form and/or as a chart showing how the parameters depend on the number of threads.
Priority | Value | Measurement Units | Description
---|---|---|---
1 | Throughput | msg/sec | Number of messages per second
2 | Variance | msg/sec | Throughput variance over time
2 | Latency | ms | The latency in message processing
5.15.1.4. Test Case 3: Notification Throughput Test¶
5.15.1.4.1. Description¶
This test measures the aggregate throughput of the MQ layer, including the oslo.messaging library. The test covers Notification messages only. Message sizes vary, with a distribution modelled on data collected from a live environment.
5.15.1.4.2. List of performance metrics¶
The test case result is a series of the following measurements taken at different numbers of simultaneous threads. The output may be shown in table form and/or as a chart showing how the parameters depend on the number of threads.
Priority | Value | Measurement Units | Description
---|---|---|---
1 | Throughput | msg/sec | Number of messages per second
2 | Variance | msg/sec | Throughput variance over time
2 | Latency | ms | The latency in message processing
5.15.2. Tools¶
This section describes tools that can be used to perform the test plan.
5.15.2.1. Oslo.messaging Simulator¶
This section describes how to measure message queue performance with the oslo.messaging simulator tool.
5.15.2.1.1. Test environment preparation¶
To perform the test plan you will need to install the oslo.messaging simulator.
The simulator tool depends on the SciPy library, which requires some mathematical packages to be installed on the system.
To install on CentOS 7:
# yum install lapack-devel
To install on Ubuntu 14.04:
# apt-get install liblapack-dev gfortran
The simulator is distributed as part of the library sources. It is recommended to install it within a virtual environment.
$ git clone https://git.openstack.org/openstack/oslo.messaging
$ cd oslo.messaging/
$ virtualenv .venv
$ source .venv/bin/activate
$ pip install -r requirements.txt
$ python setup.py install
$ pip install scipy
$ cd tools/
5.15.2.1.2. Test Case 1: RPC Call Throughput Test¶
Test case specification: Test Case 1: RPC Call Throughput Test
Execution:
Start the server:
$ python simulator.py --url rabbit://<username>:<password>@<host>:<port>/ rpc-server
example: python simulator.py --url rabbit://nova:DUoqsyrq@192.168.0.4:5673/ --debug true rpc-server
Start the client:
$ python simulator.py --url rabbit://<username>:<password>@<host>:<port>/ rpc-client -p <threads> -m <messages>
example: python simulator.py --url rabbit://nova:DUoqsyrq@192.168.0.4:5673/ rpc-client -p 10 -m 100
5.15.2.1.3. Test Case 2: RPC Cast Throughput Test¶
Test case specification: Test Case 2: RPC Cast Throughput Test
Execution:
Start the server:
$ python simulator.py --url rabbit://<username>:<password>@<host>:<port>/ rpc-server
example: python simulator.py --url rabbit://nova:DUoqsyrq@192.168.0.4:5673/ --debug true rpc-server
Start the client:
$ python simulator.py --url rabbit://<username>:<password>@<host>:<port>/ rpc-client --is-cast true -p <threads> -m <messages>
example: python simulator.py --url rabbit://nova:DUoqsyrq@192.168.0.4:5673/ rpc-client --is-cast true -p 10 -m 100
5.15.2.1.4. Test Case 3: Notification Throughput Test¶
Note
oslo.messaging version 2.9 or later is required to run this test case.
Test case specification: Test Case 3: Notification Throughput Test
Execution:
Start the server:
$ python simulator.py --url rabbit://<username>:<password>@<host>:<port>/ notify-server
example: python simulator.py --url rabbit://nova:DUoqsyrq@192.168.0.4:5673/ notify-server
Start the client:
$ python simulator.py --url rabbit://<username>:<password>@<host>:<port>/ notify-client -p <threads> -m <messages>
example: python simulator.py --url rabbit://nova:DUoqsyrq@192.168.0.4:5673/ notify-client -p 10 -m 100
5.15.3. Reports¶
Test plan execution reports: