The library is designed so that it will fulfil the following goals:
Exactly-once queue semantics, i.e., it attempts to deliver every message exactly one time, but in the worst-case scenario it will deliver it at least once*.
Easy to scale horizontally. Add more workers to process jobs in parallel.
Highly performant. Aims for the highest possible throughput from Redis by combining efficient Lua scripts with pipelining.
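Because the worst case falls back to at-least-once delivery, consumers are typically written to be idempotent so a redelivered message has no extra effect. A minimal sketch of that idea (illustrative only, not this library's API), using an in-memory set of processed message IDs:

```python
# Idempotent consumer sketch: a redelivered message is detected by its
# ID and skipped, so each message's effect is applied exactly once.
processed_ids = set()
results = []

def handle(message_id, payload):
    """Process a message, ignoring duplicate redeliveries by ID."""
    if message_id in processed_ids:
        return  # duplicate redelivery: skip
    processed_ids.add(message_id)
    results.append(payload.upper())  # stand-in for real work

# The repeated (1, "a") simulates a worst-case redelivery:
for msg_id, payload in [(1, "a"), (2, "b"), (1, "a")]:
    handle(msg_id, payload)

print(results)  # → ['A', 'B']
```

In a real deployment the deduplication set would live in shared storage (e.g. Redis itself) rather than process memory, so it survives worker restarts.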
View the repository, see open issues, and contribute back on GitHub!
If you are new to message queues, you may wonder why they are needed at all. Queues can solve many different problems elegantly: smoothing out processing peaks, creating robust communication channels between microservices, offloading heavy work from one server to many smaller workers, and more. Check the Patterns section for inspiration and information about best practices.
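The offloading pattern mentioned above can be sketched in a few lines with a shared queue and a pool of workers (a plain in-process model of the idea, not this library's API):

```python
# Producer/worker sketch: jobs go into a shared queue and are consumed
# by several workers in parallel -- the core shape of a job queue.
import threading
import queue

jobs = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        n = jobs.get()
        if n is None:  # sentinel: no more work for this worker
            break
        with lock:
            results.append(n * n)  # stand-in for heavy work
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

for n in range(5):          # producer enqueues jobs
    jobs.put(n)
for _ in threads:           # one sentinel per worker
    jobs.put(None)
for t in threads:
    t.join()

print(sorted(results))  # → [0, 1, 4, 9, 16]
```

A Redis-backed queue applies the same shape across machines: the queue lives in Redis, and workers can run in separate processes or on separate hosts.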
Minimal CPU usage due to a polling-free design
Distributed job execution based on Redis
LIFO and FIFO jobs
Scheduled and repeatable jobs according to cron specifications
Retries of failed jobs
Concurrency setting per worker
Threaded (sandboxed) processing functions
Automatic recovery from process crashes
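The retry behavior listed above can be illustrated with a small sketch (my own example, not the library's implementation): a failed job is re-run up to a maximum number of attempts, with exponential backoff between tries.

```python
# Retry-with-backoff sketch: re-run a failing job, doubling the wait
# after each failed attempt, until it succeeds or attempts run out.
import time

def run_with_retries(job, max_attempts=3, base_delay=0.01):
    """Run `job`, retrying on failure with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception:
            if attempt == max_attempts:
                raise  # attempts exhausted: surface the failure
            time.sleep(base_delay * 2 ** (attempt - 1))

attempts = []

def flaky():
    """Fails twice, then succeeds -- a stand-in for a transient error."""
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "done"

result = run_with_retries(flaky)
print(result)  # → done (after two failed attempts)
```

Queue libraries usually make the attempt count and backoff strategy per-job configuration rather than code, but the control flow is the same.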