    Manthan Mallikarjun
    @nahtnam
    Thanks for your help!
    I see other people have this same issue when putting code in the "error" callback, so it's not just me
    Sergii Bomko
    @aquiladev
    Hi guys, I have a question about bull repeatable jobs, I've posted an issue on GitHub: OptimalBits/bull#1989
    I have built a repeatable job which repeats each minute. When I run it in a single instance of the application, everything happens as expected. There is 1 worker per queue and 1 job per queue, so when processing takes longer than 1 min, the next repeat waits until the current one is completed.
    But when I run multiple instances of the application, the next repeat does not wait for the current one to complete. Which is expected, because there is 1 worker per instance.
    It is important for me that workers wait until the current execution completes. Is there a way to solve the issue?
    I use bull v3
    Manuel Astudillo
    @manast
    I think the only way to solve this atm is by making sure you only have one worker running.
    or don't use repeatable jobs: just add the next job when you are done with the previous job.
    If it executed faster than 1 minute, add the next one with a compensating delay; if it was slower than 1 minute, add it without delay.
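    (A minimal sketch of this self-scheduling pattern, assuming Bull v3; doWork and the queue name are hypothetical placeholders:)

    const Queue = require('bull');
    const queue = new Queue('my-queue');
    const INTERVAL = 60 * 1000; // target cadence: one run per minute

    async function doWork(data) { /* hypothetical job logic */ }

    queue.process(async (job) => {
      const started = Date.now();
      await doWork(job.data);
      const elapsed = Date.now() - started;
      // Finished early: delay the next run to keep the 1-minute cadence;
      // overran: enqueue the next job immediately (delay of 0).
      await queue.add(job.data, { delay: Math.max(INTERVAL - elapsed, 0) });
    });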
    Sergii Bomko
    @aquiladev
    can RateLimiter help in the case? something like:
    options.limiter = {
      max: 1,
      duration: Number.MAX_VALUE,
    };
    Sergii Bomko
    @aquiladev
    is there a well-known pattern to make sure that only one worker is present? I face a situation where 2 workers are already present when I register the producer. It happens when I try to run more than 3 instances of the app
    Manuel Astudillo
    @manast
    the rate limiter will limit the output, but if your worker takes more time than what you have as the limit then it will overlap.
    Ayush
    @heyay:matrix.org [m]
    Hello
    How can I get a job by its id rather than calling getJobs?
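    (For reference, a minimal sketch using Bull's Queue#getJob, assuming you already know the job's id:)

    const job = await queue.getJob(jobId); // resolves to the Job, or null if none exists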
    Adam Lounds
    @adamlounds_twitter

    Hi, I'm looking at issue #1987 (rrule support) - can I double-check some quirks around jobId?
    The JobOpts documentation says

    jobId: number | string; // Override the job ID - by default, the job ID is a unique integer, but you can use this setting to override it.

    but if I don't specify a jobId on a repeating job then the jobId is created in the background from the queueName, the jobId (i.e. '') and the RepeatOpts.
    This means that two jobs in the same queue that have the same crontab get the same id - indeed there's a "should create two jobs with the same ids" test that asserts it.
    Is it intentional that I should not be able to add two weekly tasks to a queue?

    const q = new Queue('foo');
    q.add({ foo: 'bar' }, { repeat: { cron: '*/5 * * * *' } });
    q.add({ foo: 'baz' }, { repeat: { cron: '*/5 * * * *' } });

    results in only a single job in the queue

    Adam Lounds
    @adamlounds_twitter
    (I can work around this by always generating a unique jobId, but IMHO the JobOpts documentation implies this should not be necessary; a sketch of that workaround follows)
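    (A minimal sketch of that workaround, assuming explicit, distinct jobIds for the two repeatable jobs:)

    const q = new Queue('foo');
    // With distinct jobIds the generated repeat keys differ, so both jobs are kept.
    q.add({ foo: 'bar' }, { repeat: { cron: '*/5 * * * *' }, jobId: 'five-minutely-bar' });
    q.add({ foo: 'baz' }, { repeat: { cron: '*/5 * * * *' }, jobId: 'five-minutely-baz' });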
    Raz Luvaton
    @rluvaton
    Hey, great tool! Quick question, using Separate processes won’t help in case the server is crashing, right?
    Paul
    @paulm17
    Hey all, just came across this, looks very interesting. Is there a golang client?
    Jame
    @jamemackson
    I’m upgrading to bullmq from bull and am having issues getting repeating jobs to work properly. Are there any gotchas with repeating jobs to be aware of?
    Specifically, I can get them to run if I click promote in taskForce (or call .promote()) on the job, but am not seeing any repeat activity…
    Jame
    @jamemackson

    Hey, great tool! Quick question, using Separate processes won’t help in case the server is crashing, right?

    @paulm17 if each separate process is on the same server, that won’t inherently help if the server is crashing, but the separate processes will help make use of the additional compute capacity of the server, assuming it’s not a single-core instance. Depending on the memory overhead of your application, you should be able to spin up 1 process per processing core you have available (and the memory to support it).

    Running the QueueScheduler somewhere in your stack should help recover any jobs that were stuck or abandoned by a crashed server or process, however. It is supposed to periodically check the active processes to detect dead/stuck jobs, etc. (this is why I’m in the middle of upgrading from the older version of bull)
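    (A minimal sketch of running a QueueScheduler, assuming BullMQ v1 and a local Redis; the queue name is a placeholder:)

    const { QueueScheduler } = require('bullmq');

    // One scheduler per queue; it detects stalled/abandoned jobs and re-queues them,
    // and promotes delayed jobs when their time comes.
    const scheduler = new QueueScheduler('my-queue', {
      connection: { host: 'localhost', port: 6379 },
    });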

    Paul
    @paulm17
    @jamemackson Wrong person :)
    Jame
    @jamemackson
    oh shoot, sorry about that!
    @rluvaton ^^
    Raz Luvaton
    @rluvaton
    Thanks @jamemackson, I meant a Node server, not a physical server
    Julian
    @xiJulian
    Why is it that when I process the jobs using "Separate processes", each job runs 2-3 times, but when I do it normally without a separate file each job runs only once?
    Why does that happen?
    Jame
    @jamemackson
    what’s the proper way to trigger a job failure with bullmq? It seems just throwing an exception doesn’t do it, and I’m not seeing clear guidance on this in the docs. TIA for any assistance with this!
    Manuel Astudillo
    @manast
    @jamemackson yes, unhandled exceptions are caught by BullMQ and will put the job in the failed set.
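    (A minimal sketch of this behaviour, assuming a BullMQ Worker; the queue name and validation check are hypothetical:)

    const { Worker } = require('bullmq');

    const worker = new Worker('my-queue', async (job) => {
      if (!job.data.valid) {
        // An unhandled exception (or rejected promise) is caught by BullMQ
        // and moves the job to the failed set.
        throw new Error('invalid payload');
      }
    });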
    Sukhvir
    @sukhvir96
    Hi all, can anyone suggest a solution? Bull is not processing all the jobs in the queue. Please see the attached video.
    Manuel Astudillo
    @manast
    @sukhvir96 do not mix "done" with "async"; use one or the other, not both.
    I am talking about bull's processing function, of course.
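    (A sketch of the two processor styles, assuming Bull v3; doWork and the queue name are hypothetical placeholders, and only one style should be registered on a given queue:)

    const Queue = require('bull');
    const queue = new Queue('my-queue');
    async function doWork(data) { /* hypothetical job logic */ }

    // Callback style: accept `done` and call it exactly once.
    queue.process((job, done) => {
      doWork(job.data)
        .then((result) => done(null, result))
        .catch(done);
    });

    // Promise style: an async function with no `done` argument
    // (shown commented out, since a queue accepts only one default processor):
    // queue.process(async (job) => doWork(job.data));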
    Sukhvir
    @sukhvir96
    @manast I have removed done but am still facing the same issue.
    Manuel Astudillo
    @manast
    @sukhvir96 it should work. I cannot see anything wrong with the pieces of information that you are sharing.
    Sukhvir
    @sukhvir96
    @manast it's working on my local system, but when I deploy it on the server it behaves differently, as shown in the attached video. Could you please suggest whether I need a specific configuration on my server?
    Mitchell Romney
    @MitchellRomney
    Is it possible to have a queue limited by groupKey, with different key values being held to different limits? Currently I handle this by having a unique queue for each unique limit I need.
    Mitchell Romney
    @MitchellRomney
    Follow-up - currently I also need a "global" rate limit on most of these queues; previously I handled this by passing all relevant queues through 1 main queue with the global limit. Is there a more efficient way to handle this? :)
    Shai Moria
    @shyimo

    Hi guys. I'm using bullmq (version 1.19.2) with ioredis (version 4.26.0) and a redis cluster. I'm not able to insert jobs into my queue and I'm getting this error:
    "CROSSSLOT Keys in request don't hash to the same slot"
    I'm following the docs and from what I can see everything is configured as expected. This is my code:

    const nodes = [{
      port: config.port,
      host: config.host
    }];

    const ioRedisClient = new IORedis.Cluster(nodes, {
      enableOfflineQueue: true,
      scaleReads: 'all',
      enableReadyCheck: true
    });

    const bullMqOptions = {
      connection: ioRedisClient,
      prefix: '{bullMQ}'
    };

    queueScheduler = new QueueScheduler(config.queueName, bullMqOptions);
    queue = new Queue(config.queueName, bullMqOptions);

    and using this to add job:
    queue.add('job-name', data)

    thanks for any help in advance !

    nevaehph
    @nevaehph

    Hi, I am trying to set up a repeatable job to send emails using the code below:

    reminderQueue.add(
      {
        id: transactionId,
      },
      {
        attempts: 5,
        repeat: {
          every: 12 * 3600000,
          limit: 3,
        },
        jobId: "reminder_" + transactionId,
        removeOnComplete: true,
      }
    );

    For shorter time spans it seems fine, but for longer time spans (between 12-48 hours), the next run time seems to be really off. For example, when I created the job at 5pm, the repeatable job I received using getRepeatableJobs() was:

    {
      key: '__default__:reminder_6081416b4c8c433b78d2515f::43200000',
      name: '__default__',
      id: 'reminder_6081416b4c8c433b78d2515f',
      endDate: null,
      tz: null,
      cron: null,
      every: 43200000,
      next: 1619092800000
    }

    Which is 8pm on the same day (UTC+8). May I check what the potential problem could be? Thanks in advance!

    Manuel Astudillo
    @manast
    for such large spans I recommend you use a cron expression instead of "every".
    The reason is that "every" divides time into slots, and with such big time spans you will not get what you are expecting.
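    (A sketch of the cron-based alternative, assuming the job from above; '0 */12 * * *' fires at 00:00 and 12:00 each day:)

    reminderQueue.add(
      { id: transactionId },
      {
        attempts: 5,
        repeat: { cron: '0 */12 * * *', limit: 3 }, // fixed times instead of "every"
        jobId: "reminder_" + transactionId,
        removeOnComplete: true,
      }
    );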
    Manuel Astudillo
    @manast
    I decided that it is better to move to Slack, so please join the #bullmq-support channel with this link and then you will get faster support: https://join.slack.com/t/bullmq/shared_invite/zt-pee9ecfm-Gwncv3oXPNBpL3nrB95sIw
    Niv Lipetz
    @NivLipetz

    Hi guys. I'm using bullmq (version 1.19.2) with ioredis (version 4.26.0) and a redis cluster. I'm not able to insert jobs into my queue and I'm getting this error:
    "CROSSSLOT Keys in request don't hash to the same slot"
    […]

    I am also experiencing this

    Henry Arbolaez
    @harbolaez
    (screenshot attached)
    Hi guys - what am I doing wrong here? When I run the queue it does queue and create some job ids, but the worker doesn't seem to be picking them up?
    const emailWorker = new Worker(queues.email.name, async (job) => { console.log(job.data); });
    Peter Kota
    @kotapeter

    Hey :wave:

    I'd like to process jobs in parallel based on groupKey. Is it possible?

    Let's say we have a CI tool like CircleCI. We need to run builds (in order) in parallel per customer. I thought that groupKey could handle it.

    I tried the following:

    Worker

    new Worker(
      'build',
      async ({ data }) => executeBuild(data.idEnv, data.idBuild),
      {
        connection: {
          host: process.env.REDIS_HOST,
          port: process.env.REDIS_PORT ? Number(process.env.REDIS_PORT) : 6379,
          db: 3,
        },
        limiter: {
          max: 10,
          duration: 1000,
          groupKey: 'idEnv',
        },
      },
    );

    Queue

    const queue = new Queue('deployment', {
      limiter: {
        groupKey: 'idEnv',
      },
      connection: {
        host: process.env.REDIS_HOST,
        db: 3,
        port: process.env.REDIS_PORT ? Number(process.env.REDIS_PORT) : 6379
      }
    })

    Add to queue

    await queue.add('build', {
      idEnv: env.id,
      idBuild: build.id,
    });

    do you have any idea how I can run the jobs in parallel per environment? Thank you!