A Short Update
After writing the Rust AVL Tree Set post, I decided to study the parser combinator library nom so I can work on a Rust Lisp interpreter next. I am currently stuck on implementing custom parsers but will work through it nonetheless. During my work this week, I needed to decide which message queue to use for my project, since we are using a microservice architecture. The majority of the services or modules are Elixir umbrella apps, so it was tempting to lean on the native distributed message passing Erlang is famous for. Alas, I cannot assume that every service will be written in Elixir, and researching message queues will better prepare me for the future.
After three days of research and experimentation, these are my findings for each message queue:
- RabbitMQ: The default message queue to consider. Good documentation and library support make it quite viable; however, production headaches and war stories made me reconsider safer alternatives.
- Kafka: My preferred choice for production stability; however, it requires a heavy infrastructure dependency (ZooKeeper) that is beyond the scope of this project.
- ZeroMQ: A low-level and lightweight message queue that piqued my interest. Sadly, it lacks out-of-the-box features as well as Elixir (as opposed to Erlang) support.
- NATS: I would have picked this message queue if its Elixir client had good documentation, since it offers a good balance between weight and stability.
Despite not selecting them, they are still good choices under different circumstances. I then stumbled across NSQ, a message queue that is both lightweight and distributed. However, I bit off more than I could chew: I assumed the Elixir client library would be good enough, but no documentation exists on managing consumer and producer processes as a whole. I pondered writing a wrapper library for my project's needs.
Before I slept, I found conduit, a generic message queue library that can be configured to work with Amazon SQS and AMQP. Perhaps it could be configured for NSQ as well? Three days later, I published conduit_nsq.
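Wiring the adapter in amounts to pointing the broker at it in the application config. This is only a sketch: the option names below (`producer_nsqds`, `nsqds`) are my assumptions about conduit_nsq's adapter options, so check the package documentation for the real keys.

```elixir
# config/config.exs -- a sketch; the nsqd option names are assumptions
# about conduit_nsq's adapter configuration, not verified against its docs.
import Config

config :my_app, MyApp.Broker,
  adapter: ConduitNSQ,
  producer_nsqds: ["127.0.0.1:4150"],
  nsqds: ["127.0.0.1:4150"]
```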
Although it is not perfect or battle-tested yet, I can now use the plug-like composability that drew me in in the first place:
```elixir
defmodule MyApp.Broker do
  use Conduit.Broker, otp_app: :my_app

  configure do
    queue("my-topic")
  end

  pipeline :in_tracking do
    plug(Conduit.Plug.CorrelationId)
    plug(Conduit.Plug.LogIncoming, log: :debug)
  end

  pipeline :out_tracking do
    plug(Conduit.Plug.CorrelationId)
    plug(Conduit.Plug.CreatedBy, app: "MyApp")
    plug(Conduit.Plug.CreatedAt)
    plug(Conduit.Plug.LogOutgoing, log: :debug)
  end

  pipeline :serialize do
    plug(Conduit.Plug.Wrap)
    plug(Conduit.Plug.Encode, content_encoding: "json")
  end

  pipeline :deserialize do
    plug(Conduit.Plug.Decode, content_encoding: "json")
    plug(Conduit.Plug.Unwrap)
  end

  pipeline :error_handling do
    plug(Conduit.Plug.NackException)
    plug(Conduit.Plug.DeadLetter, broker: MyApp.Broker, publish_to: :error)
    plug(Conduit.Plug.Retry, attempts: 3)
  end

  incoming MyApp do
    pipe_through([:in_tracking, :error_handling, :deserialize])
    subscribe(:my_subscriber, BasicSubscriber, topic: "my-topic", channel: "my-channel")
  end

  outgoing do
    pipe_through([:out_tracking, :serialize])
    publish(:my_publisher, topic: "my-topic")
  end
end
```
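For completeness, here is roughly what a subscriber and a publish call look like under this broker. This is a sketch following Conduit's documented patterns; the subscriber body and message payload are illustrative, and the argument order of `publish` has varied across conduit versions, so verify against the version you depend on.

```elixir
defmodule MyApp.BasicSubscriber do
  use Conduit.Subscriber
  import Conduit.Message

  # Called for each message routed through the :in_tracking,
  # :error_handling, and :deserialize pipelines above.
  def process(message, _opts) do
    IO.inspect(message.body, label: "received")
    ack(message)
  end
end

# Publishing runs the message through the :out_tracking and :serialize
# pipelines before it reaches the "my-topic" NSQ topic.
# NOTE: depending on the conduit version, the route name may come
# before the message in publish/2-3.
import Conduit.Message

message =
  %Conduit.Message{}
  |> put_body(%{"hello" => "world"})

MyApp.Broker.publish(message, :my_publisher)
```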
A nice property of the wrapper implementation is that it could easily swap in another client library. If NSQ does not work out, I can switch to nats.ex or fall back to conduit_amqp. Either way, the adapter is now more or less done. I still have to handle at-least-once delivery, which means client-side data deduplication; that could be interestingly implemented as a plug library, but that may be a story for another time.
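As a teaser for that plug idea, a client-side deduplication plug could follow Conduit's custom-plug shape (`use Conduit.Plug.Builder` with a `call/3`). Everything below the `call/3` head is hypothetical: the `already_seen?`/`mark_seen!` helpers are placeholders, and using the message's correlation ID as the dedup key is my own assumption.

```elixir
defmodule MyApp.Plug.Dedupe do
  use Conduit.Plug.Builder

  # Drop messages whose ID we have already processed; otherwise
  # record the ID and continue down the pipeline.
  def call(message, next, _opts) do
    id = message.correlation_id

    if already_seen?(id) do
      # Acknowledge without reprocessing the duplicate.
      ack(message)
    else
      mark_seen!(id)
      next.(message)
    end
  end

  # Hypothetical helpers -- a real implementation would back these
  # with ETS, Redis, or a database, with a TTL on the keys.
  defp already_seen?(_id), do: false
  defp mark_seen!(_id), do: :ok
end
```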