Groups
Groups allow you to use a single queue while distributing jobs among groups, so that jobs are processed one by one relative to the group they belong to.
For example, imagine that you have one queue for processing video transcoding for all your users, and you may have thousands of users in your application. You need to offload the transcoding operation since it is lengthy and CPU-consuming. If many users want to transcode many files, then in a non-grouped queue one user could fill the queue with jobs, and the rest of the users would have to wait for that user's jobs to complete before their own jobs are processed.
Groups solve this problem, since jobs are processed in a "round-robin" fashion among all the users.
If you have several workers or a concurrency factor larger than one, jobs will be processed in parallel, but they will still be picked from the groups in the round-robin order described above.
Of course, you can have as many workers as you want and scale the number of workers up or down depending on how many jobs are waiting in the queue.
If you only add grouped jobs to a queue, the waiting jobs list will not grow; instead it will just hold the next job to be processed, if any. You can also add non-grouped jobs to the same queue, and they will take precedence over the jobs waiting in their respective groups.
There is no hard limit on the number of groups you can have, nor do groups have any impact on performance. When a group is empty, it does not consume any resources in Redis.
Another way to think of groups is as "virtual" queues: instead of having one queue per user, you have a "virtual" queue per user, so that all users get their jobs processed in a more predictable way.
In order to use the group functionality, use the group property in the job options when adding a job:
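The following is a minimal sketch, assuming the Pro package is `@taskforce.sh/bullmq-pro` and exposes a `QueuePro` class whose `add` method accepts a `group` option with an `id` field; the queue name, job names, and payloads are hypothetical:

```typescript
import { QueuePro } from '@taskforce.sh/bullmq-pro';

// Hypothetical queue name and payloads, used only for illustration.
const queue = new QueuePro('transcoding');

// Each job is assigned to the group of the user that owns it, so the
// jobs of different users are picked up in round-robin order.
await queue.add(
  'transcode',
  { videoId: 'video-1' },
  { group: { id: 'user-1' } },
);

await queue.add(
  'transcode',
  { videoId: 'video-2' },
  { group: { id: 'user-2' } },
);
```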
In order to process the jobs, use a pro worker as you normally do with standard workers:
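A minimal sketch, assuming the Pro package exposes a `WorkerPro` class with the same constructor signature as the standard `Worker`; the processing function is a placeholder:

```typescript
import { WorkerPro } from '@taskforce.sh/bullmq-pro';

// The worker needs no group-specific configuration: it simply picks the
// next job, which the queue serves from the groups in round-robin order.
const worker = new WorkerPro('transcoding', async (job) => {
  // Placeholder processing logic; job.data holds the payload added above.
  console.log('Processing', job.name, job.data);
});
```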