
Messaging configuration

This section provides an overview of hybrid messaging deployment concepts and describes the necessary steps for a working OpenStack-Ansible (OSA) deployment where RPC and Notify communications are separated and integrated with different messaging server backends.

oslo.messaging library

The oslo.messaging library is part of the OpenStack Oslo project and provides intra-service messaging capabilities. The library supports two communication patterns (RPC and Notify) and provides an abstraction that hides the details of messaging bus operation from the OpenStack services.

Notifications

Notify communications are an asynchronous exchange from notifier to listener. The messages transferred typically correspond to information updates or event occurrences published by an OpenStack service. The listener need not be present when the notification is sent, because Notify communications are temporally decoupled. This decoupling between notifier and listener requires that the messaging backend used for notifications provide message persistence, such as a broker queue or a log store. Note that the message transfer is unidirectional from notifier to listener; no messages flow back to the notifier.
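As a concrete illustration, the Notify path of a service is governed by the `[oslo_messaging_notifications]` section of its configuration file (the values below are examples, not OSA defaults); setting the driver to `noop` discards notifications entirely:

```ini
[oslo_messaging_notifications]
# 'messagingv2' publishes notifications onto the messaging bus,
# while 'noop' silently discards them (values are illustrative)
driver = messagingv2
topics = notifications
```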

RPC

An RPC is intended as a synchronous exchange between a client and a server that is temporally bracketed. The information transferred typically corresponds to a request-response pattern for service command invocation. If the server is not present at the time the command is invoked, the call fails. The temporal coupling requires that the messaging backend support bi-directional transfer: the request from the caller to the server, and the associated reply sent from the server back to the caller. This requirement can be fulfilled by a broker queue or a direct messaging backend server.

Messaging transports

The oslo.messaging library supports a messaging transport plugin capability, so that RPC and Notify communications can be separated and different messaging backend servers can be deployed.

The oslo.messaging drivers provide the transport integration for the selected protocol and backend server. The following table summarizes the supported oslo.messaging drivers and the communication services they support.

+----------------+-----------+-----------+-----+--------+-----------+
| Oslo.Messaging | Transport |  Backend  | RPC | Notify | Messaging |
|     Driver     | Protocol  |  Server   |     |        |   Type    |
+================+===========+===========+=====+========+===========+
|     rabbit     | AMQP V0.9 | rabbitmq  | yes |   yes  |   queue   |
+----------------+-----------+-----------+-----+--------+-----------+
|     kafka      |  kafka    |  kafka    |     |   yes  |   queue   |
| (experimental) |  binary   |           |     |        |  (stream) |
+----------------+-----------+-----------+-----+--------+-----------+
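For reference, the rabbit driver is selected through the service's transport URL. A sketch of the resulting service configuration might look like the following; the host, credentials, and vhost are placeholders, not values OSA generates:

```ini
[DEFAULT]
# RPC transport: the 'rabbit' scheme selects the oslo.messaging rabbit driver
transport_url = rabbit://osuser:ospass@172.29.236.10:5671/nova?ssl=1

[oslo_messaging_notifications]
# Notify can point at the same or a different rabbitmq backend
transport_url = rabbit://osuser:ospass@172.29.236.10:5671/nova?ssl=1
```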

Standard rabbitmq server deployment

A single rabbitmq server backend (i.e. a server or cluster) is the default deployment for OSA. This broker messaging backend provides the queue services for both RPC and Notification communications through its integration with the oslo.messaging rabbit driver. The oslo-messaging.yml file provides the default configuration that associates the oslo.messaging RPC and Notify services with the rabbitmq server backend.

# Quorum Queues
oslomsg_rabbit_quorum_queues: "{{ rabbitmq_queue_replication }}"

# NOTE(noonedeadpunk): Disabled due to missing oslo.concurrency lock_path definition
#                      for services
oslomsg_rabbit_queue_manager: False

# RPC
oslomsg_rpc_transport: 'rabbit'
oslomsg_rpc_port: "{{ rabbitmq_port }}"
oslomsg_rpc_servers: "{{ rabbitmq_servers }}"
oslomsg_rpc_use_ssl: "{{ rabbitmq_use_ssl }}"
oslomsg_rpc_host_group: "{{ rabbitmq_host_group }}"
oslomsg_rpc_policies: "{{ rabbitmq_policies }}"

# Notify
oslomsg_notify_transport: "{{ (groups[rabbitmq_host_group] | length > 0) | ternary('rabbit', 'none') }}"
oslomsg_notify_port: "{{ rabbitmq_port }}"
oslomsg_notify_servers: "{{ rabbitmq_servers }}"
oslomsg_notify_use_ssl: "{{ rabbitmq_use_ssl }}"
oslomsg_notify_host_group: "{{ rabbitmq_host_group }}"
oslomsg_notify_policies: "{{ rabbitmq_policies }}"
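To separate the two services onto different backends, the Notify variables can be overridden independently of the RPC ones, for example in user_variables.yml. The host addresses and port below are purely illustrative:

```yaml
# Hypothetical override: keep RPC on the default rabbitmq cluster but
# send notifications to a dedicated cluster (example values)
oslomsg_notify_servers: "172.29.240.10,172.29.240.11,172.29.240.12"
oslomsg_notify_port: 5671
oslomsg_notify_use_ssl: True
```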

Managing RabbitMQ stream policy

When deploying RabbitMQ with support for quorum and stream queues, the retention behaviour for messages changes. Stream queues maintain an append-only log on disk of all messages received until a retention policy indicates they should be disposed of. By default, this policy is set with a per-stream x-max-age of 1800 seconds. However, as noted in the RabbitMQ docs, this only comes into effect once a stream has accumulated enough messages to fill a segment, which has a default size of 500MB.

If you would like to reduce disk usage, an additional policy can be applied via OpenStack-Ansible as shown below:

rabbitmq_policies:
  - name: CQv2
    pattern: '.*'
    priority: 0
    tags:
      queue-version: 2
    state: >-
      {{
        ((oslomsg_rabbit_quorum_queues | default(True) or not rabbitmq_queue_replication) and rabbitmq_install_method | default('') != 'distro'
          ) | ternary('present', 'absent')
      }}
# The following is an example of an additional policy which applies to fanout/stream queues only
# By default, each stream uses RabbitMQ's 500MB segment size, and no messages will be discarded
# until that size is reached. This may result in undesirable disk usage.
# If using this policy, it must be applied BEFORE any stream queues are created.
# See also https://bugs.launchpad.net/oslo.messaging/+bug/2089845 and https://www.rabbitmq.com/docs/streams#retention
#  - name: CQv2F
#    pattern: '^.*_fanout'
#    priority: 1
#    tags:
#      queue-version: 2
#      stream-max-segment-size-bytes: 1000000
#    state: >-
#      {{
#        ((oslomsg_rabbit_quorum_queues | default(True) or not rabbitmq_queue_replication) and rabbitmq_install_method | default('') != 'distro'
#          ) | ternary('present', 'absent')
#      }}

Note, however, that this policy only applies if it is in place before any stream queues are created. If these already exist, they will need to be manually deleted and re-created by the relevant OpenStack service.

This issue is being tracked in an oslo.messaging bug.