Groups

Groups allow you to use a single queue while distributing its jobs among groups, so that jobs are processed one by one relative to the group they belong to.

For example, imagine that you have one queue for processing video transcoding for all your users, and you may have thousands of users in your application. You need to offload the transcoding operation since it is lengthy and CPU intensive. If many users want to transcode many files, then in a non-grouped queue one user could fill the queue with jobs, and the rest of the users would need to wait for that user's jobs to complete before their own jobs get processed.

Groups solve this problem, since jobs will be processed in a "round-robin" fashion among all the users.

If you have several workers, or a concurrency factor larger than one, jobs will be processed in parallel, but they will still be picked from the groups in round-robin order, as mentioned above.
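
For instance, here is a minimal sketch of a worker with a concurrency factor; the queue name 'test' and the value 4 are illustrative assumptions:

import { WorkerPro } from '@taskforcesh/bullmq-pro';

// Up to 4 jobs are processed in parallel by this single worker,
// but they are still picked from the groups in round-robin order.
const worker = new WorkerPro(
  'test',
  async job => {
    // Process the job.
  },
  { concurrency: 4 },
);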

Of course, you can have as many workers as you want, and you can scale the number of workers up or down depending on how many jobs are waiting in the queue.

If you only use grouped jobs in a queue, the waiting jobs list will not grow; instead, it will just hold the next job to be processed, if any. However, you can also add non-grouped jobs to the same queue, and they will take precedence over the jobs waiting in their respective groups.
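
As a minimal sketch (assuming a queue named 'test', as in the examples below), a job added without the group option lands in the standard waiting list:

import { QueuePro } from '@taskforcesh/bullmq-pro';

const queue = new QueuePro('test');

// No "group" option: this job goes to the standard waiting list
// and will be picked up before jobs waiting in groups.
await queue.add('test', { foo: 'standard' });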

There is no hard limit on the number of groups that you can have, nor do they have any impact on performance. When a group is empty, the group itself does not consume any resources in Redis.

Another way to think of groups is as "virtual" queues: instead of having one queue per user, you have a "virtual" queue per user, so that all users get their jobs processed in a more predictable way.

In order to use the group functionality, use the group property in the job options when adding a job:

import { QueuePro } from '@taskforcesh/bullmq-pro';

// The constructor requires a queue name; 'test' matches the worker below.
const queue = new QueuePro('test');

const job1 = await queue.add(
  'test',
  { foo: 'bar1' },
  {
    group: {
      id: 1,
    },
  },
);

const job2 = await queue.add(
  'test',
  { foo: 'bar2' },
  {
    group: {
      id: 2,
    },
  },
);
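
Both jobs are added to the same queue, but since they belong to different groups, a worker will pick them up in round-robin order across the two groups.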

To process the jobs, use a Pro worker just as you would a standard worker:

import { WorkerPro } from '@taskforcesh/bullmq-pro';

const worker = new WorkerPro('test', async job => {
  // Do something useful.

  // You can also do something different depending on the group.
  await doSomethingSpecialForMyGroup(job.opts.group);
});
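
Since the group was passed in the job options when the job was added, its id is available inside the processor as job.opts.group.id.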