Ilan Steemers
@Koed00
I could probably fix that easily by replacing the create with an update_or_create.
Clearly I haven't tested this enough.
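A minimal sketch of that change, assuming django_q.models.Task has a unique id plus result and success fields (an illustration, not the actual fix):

from django_q.models import Task

def save_result(task_id, result, success):
    # update_or_create tolerates a second delivery of the same task,
    # where a plain create() would raise a duplicate key error
    Task.objects.update_or_create(
        id=task_id,
        defaults={"result": result, "success": success},
    )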
Kamal Mustafa
@k4ml

Btw, the docs mentioned this:-

  • In case a task is worked on twice, you will see a duplicate key error in the cluster logs.
  • Duplicate tasks do generate additional receipt messages, but the result is discarded in favor of the first result.

Is that related?

Ilan Steemers
@Koed00
Sort of. This was particular to something that happens a lot in IronMQ. Because of the non-atomic nature of their system, sometimes a task will be requeued after it has been acknowledged.
This happens on SQS too, occasionally, if the latency to the SQS server instance is very high.
So this was originally written from that perspective.
Lately I've been working more with at-least-once deliveries and I'm learning more about the problems associated with them. Like external services being down or causing errors. In this case you want every single retry to update your task result until it succeeds.
Ilan Steemers
@Koed00
I'm just trying to decide if I need to provision something for tasks that have already succeeded, then get requeued unintentionally due to latency, and then fail because of conflicts with the original payload. Like trying to create an object twice.
Ilan Steemers
@Koed00
@k4ml I've updated the code in the dev branch to only update existing tasks if they have previously failed. This should probably work for most circumstances. Have a look at it if you want. I'll have to write new tests for it before I can merge and release it though.
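A rough sketch of that behaviour as described (not the actual dev-branch code; Task fields as assumed above):

from django.db import IntegrityError
from django_q.models import Task

def save_task(task_id, result, success):
    try:
        Task.objects.create(id=task_id, result=result, success=success)
    except IntegrityError:
        # a result is already stored; only overwrite it if the earlier
        # attempt failed, so a stray redelivery can't clobber a success
        Task.objects.filter(id=task_id, success=False).update(
            result=result, success=success
        )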
Kamal Mustafa
@k4ml
Sure. Will check it tomorrow, off from work today. Btw, I read about task idempotence and the acks_late setting in Celery's docs, which I initially thought was the reason for you not retrying the task - http://docs.celeryproject.org/en/latest/userguide/tasks.html
Ilan Steemers
@Koed00
No, you're giving me too much credit. I stayed away from the Celery docs because I don't agree with their philosophy of converting everything to AMQP. Which dictates that the ack should happen in the same session as the task pull, otherwise the task is considered not pulled.
So the same process that pulls the task must also ack it. Which basically prevents any other broker type, and django-q, from working with this protocol, because they all assume that an ack can be made out of process.
Ilan Steemers
@Koed00
I found the mechanics behind Disque (https://github.com/antirez/disque) more sensible so that's what I followed initially.
Kamal Mustafa
@k4ml
What is the proper way to remove a task from the queue?
Kamal Mustafa
@k4ml
I noticed that when the task is enqueued to the broker, the return value is not kept anywhere - https://github.com/Koed00/django-q/blob/d39868534594488492bf0aa61ea7bca727dd6d0d/django_q/tasks.py#L51
Ilan Steemers
@Koed00
When a worker dequeues an SQS task, it receives a receipt_handle. This is then used to delete the task from the queue after it has failed or completed.
Are you looking for a way to delete a task yourself, before it gets executed?
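In boto3 terms, the receive/delete mechanics look roughly like this (the queue URL and handle_task are placeholders):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in response.get("Messages", []):
    handle_task(message["Body"])  # hypothetical worker function
    # the receipt handle identifies this particular delivery; deleting
    # with it removes the task from the queue once the work is done
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])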
Kamal Mustafa
@k4ml
Yes. Sometimes, because of an issue in the application, such as an invalid recipient that isn't caught by our validation logic, a message will get stuck in the queue and always fail because it is rejected by our upstream provider, and we need to manually remove it from the queue.
Ilan Steemers
@Koed00
Personally I use the dead letter queue feature of SQS. When a message fails x number of times, it's moved to another queue. I then inspect that queue manually and decide whether I want to delete those messages.
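Setting that up with boto3 looks roughly like this (the ARN, URL, and retry count are placeholders):

import json
import boto3

sqs = boto3.client("sqs")

# after 5 failed receives, SQS moves the message to the dead letter queue
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:example-dlq",
    "maxReceiveCount": "5",
}
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/example-queue",
    Attributes={"RedrivePolicy": json.dumps(redrive_policy)},
)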
Kamal Mustafa
@k4ml
Thanks for the heads up. Never knew about dead letter queues before. Looks like we always need to loop through all the messages in the queue, as there's no API to retrieve a specific message - http://stackoverflow.com/questions/29132077/trying-to-get-message-via-message-id-and-boto
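That loop could be sketched like this in boto3 (the queue URL is a placeholder; SQS offers no direct lookup by MessageId):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-dlq"

def find_and_delete(target_message_id):
    # keep receiving batches until the target shows up or the queue has
    # no more visible messages (unmatched ones reappear after the
    # visibility timeout expires)
    while True:
        response = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=5
        )
        messages = response.get("Messages", [])
        if not messages:
            return False
        for message in messages:
            if message["MessageId"] == target_message_id:
                sqs.delete_message(
                    QueueUrl=queue_url,
                    ReceiptHandle=message["ReceiptHandle"],
                )
                return True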
monty5811
@monty5811
hi, I'm attempting to migrate a project from celery to django-q. It has been pretty easy so far - thank you!
I am using vcrpy in my tests (to mock some web requests), is there an easy way to do this with django-q? (I'm using pytest if that matters)
Kamal Mustafa
@k4ml
How can you test the queue? For me it's enough to test that the code correctly reaches the point where async() is called. Beyond that I just trust django-q to do its work.
monty5811
@monty5811
I got it working by monkeypatching tasks.async so that the function passed to it is called immediately :-)
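A minimal sketch of that approach with pytest's monkeypatch fixture, assuming tasks are passed as callables (the attribute is patched by name, since async later became a Python keyword):

import django_q.tasks

def test_task_runs_inline(monkeypatch):
    def run_now(func, *args, **kwargs):
        # drop django-q specific kwargs and execute the task immediately
        kwargs.pop("hook", None)
        kwargs.pop("sync", None)
        func(*args, **kwargs)
        return "fake-task-id"

    # replace async (the pre-1.0 name) so nothing reaches the broker;
    # patch the module your code imports it from if it uses a direct import
    monkeypatch.setattr(django_q.tasks, "async", run_now)
    # ...exercise the code that enqueues the task here...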
Kamal Mustafa
@k4ml
there's a sync parameter to the async() function. It still runs through the cluster code though, but skips the broker. In my case, I just put a flag so the function isn't passed to async() at all.
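For reference, that flag is just a keyword argument; in current releases (where async was renamed to async_task) it looks like this:

from django_q.tasks import async_task  # renamed from async in django-q 1.0

# runs the task through the cluster pipeline synchronously, skipping the broker
async_task("math.copysign", 2, -2, sync=True)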
monty5811
@monty5811
yeah, I tried sync, but I am using vcrpy to mock some http requests - because the tasks are run in a different process, the mocking wasn't being applied. monkeypatching async seems to work ok, though :-)
monty5811
@monty5811
Looks like async should be renamed https://docs.python.org/3.6/whatsnew/3.6.html#new-keywords ?
Peter Brooks
@pbrooks
When running async, I get "too many values to unpack"
Ah, got it
Peter Brooks
@pbrooks
Excellent, I have django-q and redis running in production now.
Currently it's got the scheduled/completed tables in the admin; is that being stored in the app database, i.e. Postgres?
And hey @Koed00, good to see the project doing well.
Ilan Steemers
@Koed00
@pbrooks yep. If you're using the base ORM broker, then all that stuff is tracked in the django db.
You can specify a non-default database for it too, so you could even store it in a separate database or on another server.
Unfortunately I haven't had as much time as I'd like to work on Django-Q. Mainly on the weekends now.
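The broker and its database are both picked in the Q_CLUSTER setting; a minimal sketch (the tasks_db alias and names are examples):

# settings.py
Q_CLUSTER = {
    "name": "myproject",
    "orm": "tasks_db",  # use this Django database connection as the broker
}

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "app",
    },
    "tasks_db": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "django_q_tasks",
    },
}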
Peter Brooks
@pbrooks
@Koed00 Oh right, let me check over the settings. I added redis but must have missed something!
I'm using Django in anger now, so I was happy to come back to using django-q
Peter Brooks
@pbrooks
db not set?
Peter Brooks
@pbrooks
Can tests exist around schedule? I.e. in my test I say "Schedule this for in an hour"; ideally I'd like to see the code execute immediately in that case.
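One way around this is to assert on the Schedule row instead of waiting for it to run; a sketch (create_reminder and the task path are hypothetical):

from django.test import TestCase
from django_q.models import Schedule

class ReminderTest(TestCase):
    def test_reminder_is_scheduled(self):
        create_reminder()  # hypothetical code under test that calls schedule()
        entry = Schedule.objects.get(func="myapp.tasks.send_reminder")
        # verify the one-off schedule exists rather than executing it
        self.assertEqual(entry.schedule_type, Schedule.ONCE)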
dangerski
@dangerski
Any chance this can be merged? Koed00/django-q#162 It fixes a bug with the timeout.
Vitaly Babiy
@vbabiy
Has anyone tried to get django-q to work with nsq?
Kamal Mustafa
@k4ml
The new 1.0.0 release seems to break with:-
 from django_q import tasks
ImportError: cannot import name 'tasks'
Ilan Steemers
@Koed00
I need a little more info Kamal. Which version of Python are you on?
Kamal Mustafa
@k4ml
@Koed00 Here to reproduce:-
virtualenv -p python3 test-django-q
cd test-django-q
bin/pip install django-q
...
Installing collected packages: six, python-dateutil, arrow, django-picklefield, wcwidth, blessed, pytz, django, django-q
Successfully installed arrow-0.12.1 blessed-1.15.0 django-2.0.8 django-picklefield-1.0.0 django-q-1.0.0 python-dateutil-2.7.3 pytz-2018.5 six-1.11.0 wcwidth-0.1.7
bin/django-admin startproject testq
cd testq/
../bin/python manage.py shell
>>> import sys;sys.version
'3.4.3 (default, Nov 28 2017, 16:41:13) \n[GCC 4.8.4]'
>>> import django_q.tasks
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/home/kamal/tmp/test-django-q/lib/python3.4/site-packages/django_q/tasks.py", line 12, in <module>
    from django_q.cluster import worker, monitor
  File "/home/kamal/tmp/test-django-q/lib/python3.4/site-packages/django_q/cluster.py", line 24, in <module>
    from django_q import tasks
ImportError: cannot import name 'tasks'
Ilan Steemers
@Koed00
End of Life for Python 3.4 is only about 6 months away. I'll have a look and see if I can do a quick fix for it.
But I can't guarantee it if it becomes a huge issue. I just don't have the time for it.
Ghost
@ghost~5bf58c86d73408ce4fafb0c0
Hello! Yesterday I dived into Celery. Can you tell me why I should pick django-q over Celery? Where is it better and where is it worse than Celery?
Pascal de Sélys
@scwall
@Koed00 Hello, I decided to integrate your library into my project and was wondering if you will continue to maintain it? Thank you in advance
Rene Seiler
@reneseiler_twitter
Hello, I just realised that django-q 1.0.0 is not working with arrow-0.14.6. I receive unknown attribute "minutes" or unknown attribute "hour" errors in my log, and scheduled tasks are not executed. It works with arrow==0.12.1