Redis and RabbitMQ
These common ways of sharing and caching data in a system each have their strengths and weaknesses. Here we show common patterns for using them with Hotrod.
Lists and Queues
A Redis list is equivalent to a RabbitMQ queue - you push items on one end and pop items off the other end.
# push a value to a Redis list
name: redis-write-queue
input:
  text: '{"msg":"hello"}'
output:
  redis:
    set:
      list: q
# pop a value from the end of a Redis list
name: redis-read-queue
input:
  redis:
    get:
      list: q
    raw: true
output:
  write: console
We get:
$ hpr -f redis-write-queue.yml
$ hpr -f redis-read-queue.yml
{"msg":"hello"}
...
The input will now wait for anything new to be pushed to the queue. As with the other inputs, raw indicates that the data will not be 'raw quoted' (like {"_raw":"{\"msg\":\"hello\"}"}).
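To make the quoting concrete, here is a small Python sketch (not Hotrod code) of what raw quoting does to a line of input:

```python
import json

def raw_quote(line: str) -> str:
    # Wrap the original text as the value of a "_raw" field,
    # escaping any quotes it contains.
    return json.dumps({"_raw": line})

print(raw_quote('{"msg":"hello"}'))  # {"_raw": "{\"msg\":\"hello\"}"}
```

With raw set, the line is passed through unchanged instead.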
What if there are multiple consumers of the queue? Run redis-read-queue in two new terminals, and send new messages as below:
$ cat input.json
{"msg":"dolly"}
{"msg":"so nice"}
{"msg":"to see you back"}
{"msg":"where you belong"}
$ hpr -f redis-write-queue.yml --json --input @input.json
# First terminal output
{"msg":"dolly"}
{"msg":"to see you back"}
# Second terminal output
{"msg":"so nice"}
{"msg":"where you belong"}
Each consumer gets the next item available in turn - this is called 'round-robin'.
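The semantics can be sketched in plain Python (a model of the behaviour, not of Hotrod or Redis internals): producers push on one end of a shared queue, and each pop hands an item to exactly one consumer.

```python
from collections import deque

# Producers push on the left; consumers pop from the right.
queue = deque()
for msg in ['{"msg":"dolly"}', '{"msg":"so nice"}',
            '{"msg":"to see you back"}', '{"msg":"where you belong"}']:
    queue.appendleft(msg)

# Two consumers taking turns: each pop removes the item for good,
# so every message is delivered to exactly one of them.
consumer_a, consumer_b = [], []
while queue:
    consumer_a.append(queue.pop())
    if queue:
        consumer_b.append(queue.pop())

print(consumer_a)  # ['{"msg":"dolly"}', '{"msg":"to see you back"}']
print(consumer_b)  # ['{"msg":"so nice"}', '{"msg":"where you belong"}']
```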
The RabbitMQ equivalent would be this pair of pipes:
name: rabbit-write-queue
input:
  text: '{"msg":"hello"}'
output:
  amqp:
    queue:
      name: some-name
name: rabbit-read-queue
input:
  amqp:
    raw: true
    queue:
      name: some-name
output:
  write: console
And again, multiple consumers will get the items in round-robin order.
RabbitMQ provides a lot more control (which is why you might prefer it over Redis on a server), so there are more fields to consider. An important one is passive:
queue:
  name: some-name
  passive: true
which says: do not attempt to redeclare this queue. You will need this when you have defined the queue properties on the RabbitMQ server itself.
The amqp model involves both queues and exchanges, which we need in order to change the default round-robin behaviour. If an exchange is specified, then a queue will be declared implicitly.
For proper fan-out - where each consumer gets a copy of the messages - define an exchange explicitly:
name: rabbit-write-fanout
input:
  text: '{"msg":"hello"}'
output:
  amqp:
    exchange:
      name: some-name
      type: fanout
name: rabbit-read-fanout
input:
  amqp:
    raw: true
    exchange:
      name: some-name
      type: fanout
output:
  write: console
Then all our consumers will get a copy of the message, as promised. (passive also applies to exchanges, because the default behaviour is to try to declare them if they do not exist.)
Publish and Subscribe
Subscribers listen to an exchange, like the queue consumers above, but they are only interested in certain topics. If a publisher puts out a message with a routing key that matches the topic, then the subscriber gets the message.
The subscription topic is a set of words separated by dots; * means "exactly one word" and # means "zero or more words".
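These matching rules can be written out in a few lines of Python - a sketch of the AMQP topic semantics, useful for predicting what a subscription will receive:

```python
def topic_matches(pattern: str, routing_key: str) -> bool:
    """AMQP topic match: '*' is exactly one word, '#' is zero or more words."""
    def match(pat, key):
        if not pat:
            return not key
        if pat[0] == '#':
            # '#' may consume zero or more words
            return any(match(pat[1:], key[i:]) for i in range(len(key) + 1))
        if not key:
            return False
        if pat[0] == '*' or pat[0] == key[0]:
            return match(pat[1:], key[1:])
        return False
    return match(pattern.split('.'), routing_key.split('.'))

print(topic_matches('animal.*', 'animal.dog'))        # True
print(topic_matches('animal.*', 'animal.dog.puppy'))  # False
print(topic_matches('animal.#', 'animal.dog.puppy'))  # True
```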
In this pair of pipes, we use context expansions for variety. In real-world cases, this allows you to customize the pipe parameters for each agent; here, it lets us experiment with different routing keys without editing the pipes.
name: rabbit-publish
context:
  TOPIC: animal.dog
input:
  text: '{"msg":"hello","my_topic":"{{TOPIC}}"}'
output:
  amqp:
    exchange:
      name: topic-exchange
      type: topic
    routing-key-field: my_topic
name: rabbit-subscribe
context:
  TOPIC: animal.*
input:
  amqp:
    raw: true
    exchange:
      name: topic-exchange
      type: topic
      passive: true
    routing-key: '{{TOPIC}}'
output:
  write: console
The first pipe publishes the message, which contains the topic name - the fact that the routing key can be taken from the value of a field (routing-key-field) is then exploited.
$ hpr -f rabbit-publish.yml
$ hpr -f rabbit-publish.yml TOPIC='animal.cat'
# in another terminal
$ hpr -f rabbit-subscribe.yml
{"msg":"hello","my_topic":"animal.dog"}
{"msg":"hello","my_topic":"animal.cat"}
The subscribing pipe is interested in everything to do with animals (by default).
Redis does publish-subscribe as well. Note that the topic is matched with glob patterns (there is no '#') and there can be subscriptions to multiple topics.
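Python's fnmatch module implements the same style of glob matching, so it is a handy way to predict which channels a pattern will catch (an approximation - Redis has its own glob implementation):

```python
from fnmatch import fnmatchcase

# '*' in a glob matches any run of characters, dots included, so unlike
# an AMQP topic pattern it also matches deeper "paths".
print(fnmatchcase('animal.dog', 'animal.*'))        # True
print(fnmatchcase('animal.dog.puppy', 'animal.*'))  # True
print(fnmatchcase('plant.tree', 'animal.*'))        # False
```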
name: redis-publish
context:
  TOPIC: animal.dog
input:
  exec:
    raw: true
    command: |
      echo '{"msg":"hello","my_topic":"{{TOPIC}}"}'
output:
  redis:
    set:
      publish: '${my_topic}'
name: redis-subscribe
input:
  redis:
    get:
      subscribe:
        - animal.*
    raw: true
output:
  write: console
Redis Hash Operations
Redis was originally a pure in-memory key-value store, where values could be set and retrieved by key. But the values have types - we have already met the list type, which operates like a queue. There is also the hash type, which is a map of fields to values.
output:
  redis:
    set:
      hash-value: [key, field]
So the hash may have key 'employees', and the field is 'bob' - the value is some string (that we will usually choose to interpret as JSON).
Getting is similar:
input:
  redis:
    get:
      hash-value: [key, field]
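A Redis hash behaves like a map stored under a key; a dict-of-dicts sketch in Python (hypothetical data, mirroring the HSET/HGET commands) shows the semantics:

```python
store = {}

def hset(key: str, field: str, value: str):
    # Create the hash on first use, then set one field.
    store.setdefault(key, {})[field] = value

def hget(key: str, field: str):
    # Missing keys or fields yield None, like a Redis nil reply.
    return store.get(key, {}).get(field)

hset('employees', 'bob', '{"role":"engineer"}')
print(hget('employees', 'bob'))    # {"role":"engineer"}
print(hget('employees', 'alice'))  # None
```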
Flat JSON events map well to Redis hashes. One can use this to merge JSON from different sources:
{"one":1}
{"two":"hello"}
###
output:
  redis:
    set:
      hash: keyname
input:
  redis:
    interval: 30s
    get:
      hash: keyname
###
{"one":1,"two":"hello"}
...
For instance, there are a few small pipes that each report some specialized knowledge of a system, and they all write to the same hash. A coordinating pipe reads this hash and sends the combined event to its desired destination. The beauty of this scheme is that the pipes do not have to coordinate their scheduling - Redis acts as the shared cache.
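The merge can be sketched in Python (a model of the behaviour above, not Hotrod code): each flat JSON event contributes its fields to the same hash, and the reader sees the union.

```python
import json

store = {}

def write_event(hash_key: str, event_json: str):
    # Each field of the flat JSON event becomes a field in the hash,
    # so later events merge with (or overwrite) earlier ones.
    for field, value in json.loads(event_json).items():
        store.setdefault(hash_key, {})[field] = value

def read_combined(hash_key: str) -> dict:
    return store.get(hash_key, {})

write_event('keyname', '{"one":1}')
write_event('keyname', '{"two":"hello"}')
print(read_combined('keyname'))  # {'one': 1, 'two': 'hello'}
```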
We can also ask for just the values inside the hash:
input:
  redis:
    interval: 1m
    get:
      hash-values: keyname
    raw: true
###
1
hello
This again is useful for inter-pipe communication. On a server, events from different agents could be coming in and being written to a hash keyed by the agent id. A collector pipe can then access all these events at intervals of its choosing (again, Redis as event cache).