Konstantin Pankov
@c0nst_float_gitlab
Yes, it is
Alex Grönholm
@agronholm
that is my advice to you then
at least it's worth a shot
you can use the same database server, just don't make them share the same tables
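A minimal sketch of what that could look like with APScheduler 3.x, assuming the SQLAlchemy job store; the database URL and table names are placeholders:

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

DB_URL = "postgresql://user:pass@dbhost/scheduling"  # one shared database server

# Scheduler for application A, keeping its jobs in its own table
scheduler_a = BackgroundScheduler(jobstores={
    "default": SQLAlchemyJobStore(url=DB_URL, tablename="apscheduler_jobs_app_a")
})

# Scheduler for application B: same server, different table
scheduler_b = BackgroundScheduler(jobstores={
    "default": SQLAlchemyJobStore(url=DB_URL, tablename="apscheduler_jobs_app_b")
})

SQLAlchemyJobStore's tablename defaults to apscheduler_jobs, so giving each scheduler its own table name keeps them from sharing tables.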
Konstantin Pankov
@c0nst_float_gitlab
When can we expect the release of APScheduler 4.0?
Alex Grönholm
@agronholm
good question :) my estimates have been off in the past by quite a lot since I am not working regularly on it
just yesterday I managed to get master in a barely-working state
but nowhere near production quality
distributed schedulers and workers are functional but only 3 data store backends are provided: memory, postgresql and mongodb
Konstantin Pankov
@c0nst_float_gitlab
Okay, will be following the progress, wish you good luck and clear thoughts :3
On Monday I will try to optimize the code by planning and queuing tasks the asyncio way. I've already worked about 14h... x_x
Alex Grönholm
@agronholm
my plan is to provide async support via sqlalchemy 1.4.0 (postgresql and mysql are supported)
Felix Dittrich
@felixdittrich92
Hello, is it possible to process/store jobs with priority if I use a MySQL database as the backend?
Alex Grönholm
@agronholm
@felixdittrich92 well, there is no logic for prioritizing some jobs over others
how would that work?
Felix Dittrich
@felixdittrich92
To describe my problem in more detail: I have a RESTful API with 4 endpoints, and their jobs must be processed with priority, so if there are jobs for a higher-priority endpoint, do those first, and so on. That by itself would not be a problem; however, I would need a job store, because as soon as another instance of the API is created, it should automatically process the jobs from the store. As a prerequisite I have a MySQL database; unfortunately, no Redis and no message broker either.
So is there any way to achieve this with APScheduler ?
Alex Grönholm
@agronholm
@felixdittrich92 the current apscheduler schema does not allow job metadata to be stored, so you'd have to do a bit of customization work
a custom executor would be needed, in addition to scheduler and job store customization
Felix Dittrich
@felixdittrich92
ok, thank you for the answers :)
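One possible workaround under those constraints (MySQL only, no broker), sketched here as an assumption rather than an APScheduler feature: keep your own priority queue in a MySQL table (the pending_jobs table and its columns below are hypothetical) and let a single repeating APScheduler job drain it in priority order, so any API instance running the scheduler picks up queued work:

from apscheduler.schedulers.background import BackgroundScheduler
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pass@dbhost/app")  # placeholder URL

def handle_payload(payload):
    # application-specific processing of one queued job
    ...

def drain_priority_queue():
    # Take the highest-priority pending row first (lower number = higher priority)
    with engine.begin() as conn:
        row = conn.execute(text(
            "SELECT id, payload FROM pending_jobs "
            "WHERE done = 0 ORDER BY priority, created_at LIMIT 1 FOR UPDATE"
        )).fetchone()
        if row is None:
            return
        handle_payload(row.payload)
        conn.execute(text("UPDATE pending_jobs SET done = 1 WHERE id = :id"), {"id": row.id})

scheduler = BackgroundScheduler()
scheduler.add_job(drain_priority_queue, "interval", seconds=5)
scheduler.start()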
Naga Venkatesh Gavini
@venkateshgavini_twitter

Can anyone give me a rough estimate of the machine requirements (CPU and memory) for the Flask server with APScheduler?

  • It might create 3000 jobs in the next year, and out of those 3000 jobs, 30 to 50 run at the same time.

I am thinking 1 CPU core and 1 GB of memory would be sufficient.

Alex Grönholm
@agronholm
that totally depends on the workload, not the number of jobs
Naga Venkatesh Gavini
@venkateshgavini_twitter
each job can complete in 1 second, because the job function just makes an API call
Alex Grönholm
@agronholm
so they're essentially network related jobs? just waiting on sockets?
Naga Venkatesh Gavini
@venkateshgavini_twitter
Yes, Alex
Alex Grönholm
@agronholm
I don't see a problem then
Naga Venkatesh Gavini
@venkateshgavini_twitter
Okay, thanks Alex
Alex Grönholm
@agronholm
1 GB of memory seems a bit tight
when running a database server on the same box
just thinking out loud
@venkateshgavini_twitter just so long as you don't try to use multiple schedulers on the same store
that only works in APScheduler 4, which does not even have a prerelease out yet
Naga Venkatesh Gavini
@venkateshgavini_twitter
My database will be on a separate box and I use only one scheduler
So no issues then, but I will do a load test once
Alex Grönholm
@agronholm
no problem then if there's no db on that box
Naga Venkatesh Gavini
@venkateshgavini_twitter
Thanks
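For reference, a minimal sizing sketch along those lines, assuming I/O-bound jobs (plain API calls), a database on a separate box, and exactly one scheduler process per job store; the URL is a placeholder:

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.executors.pool import ThreadPoolExecutor
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

scheduler = BackgroundScheduler(
    jobstores={"default": SQLAlchemyJobStore(url="mysql+pymysql://user:pass@dbhost/jobs")},
    executors={"default": ThreadPoolExecutor(max_workers=50)},  # headroom for 30-50 concurrent jobs
    job_defaults={"coalesce": True, "max_instances": 1},
)
scheduler.start()

Since the jobs spend their time waiting on sockets, the thread pool size, not the CPU count, is the main knob here.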
Konstantin Pankov
@c0nst_float_gitlab
Hello again!
I was here a month ago :) We decided to make architectural changes to our project. We dug out Dramatiq and SQS, and now we are using AsyncIOScheduler with an in-memory job store; before every scheduler launch it loads the data for the tasks from the DB and forms the tasks. Now everything is pretty good.
But there is one question: how is it possible to do a graceful shutdown that waits for the currently running jobs in the event loop? scheduler.shutdown(wait=True)?
Alex Grönholm
@agronholm
sadly there isn't – this is one of the shortcomings of the 3.x architecture
no proper async support
4.0 on the other hand has first class async support but it hasn't even reached its first prerelease yet
but you can experiment with the current master branch
basics should be working
Konstantin Pankov
@c0nst_float_gitlab

Sadly, sadly... Will try to come up with something to solve the problem.

I will describe our graceful shutdown use case; maybe it will help in the development of version 4.0

1) Shut down the scheduler (stop scheduling new jobs)
2) Wait until all tasks are completed
3) Force exit after 15 seconds if there are still uncompleted tasks (because there could be tasks that run for several minutes)
4) Close the DB and HTTP session pools
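
A rough sketch of how steps 1-3 could be approximated on APScheduler 3.x with AsyncIOScheduler (this is an assumption, not a built-in feature): wrap each coroutine job so its asyncio task is tracked, then stop scheduling and wait up to 15 seconds for whatever is still running:

import asyncio
import functools
from apscheduler.schedulers.asyncio import AsyncIOScheduler

running_tasks = set()

def tracked(coro_func):
    # Wrap a coroutine job so its task stays in running_tasks until it finishes
    @functools.wraps(coro_func)
    async def wrapper(*args, **kwargs):
        task = asyncio.current_task()
        running_tasks.add(task)
        try:
            return await coro_func(*args, **kwargs)
        finally:
            running_tasks.discard(task)
    return wrapper

async def graceful_shutdown(scheduler: AsyncIOScheduler, timeout: float = 15.0):
    scheduler.shutdown(wait=False)                          # 1) stop scheduling new jobs
    if running_tasks:
        await asyncio.wait(running_tasks, timeout=timeout)  # 2) + 3) wait, but at most 15 s
    # 4) close DB / HTTP session pools here (application-specific)

Jobs would then be added as scheduler.add_job(tracked(my_coro), ...), and step 4 stays application code.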

kxrxkt
@kxrxkt_twitter
Hello. I would like to ask for help with my problem. When I start the scheduler, it successfully calls the function and then throws an error at me:
TypeError: func must be a callable or a textual reference to one
I can't figure this out.
Alex Grönholm
@agronholm
@kxrxkt_twitter are you maybe adding your job like this? scheduler.add_job(yourfunc(), ...)?
kxrxkt
@kxrxkt_twitter
(attached screenshot: image.png)
@agronholm yes
Alex Grönholm
@agronholm

ok so consider the following:

def foo():
    return 1

print(foo())

What argument do you expect print() to be called with?

kxrxkt
@kxrxkt_twitter
the argument for print() here is func foo()
and?
oh
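The fix implied by the exchange above is to pass the callable itself (or a textual reference to it), not the result of calling it; the function name and trigger below are only placeholders:

from apscheduler.schedulers.blocking import BlockingScheduler

def myfunc():
    print("job ran")

scheduler = BlockingScheduler()
scheduler.add_job(myfunc, "interval", seconds=10)       # correct: pass the function object
# scheduler.add_job(myfunc(), "interval", seconds=10)   # wrong: passes myfunc's return value (None)
# scheduler.add_job("mymodule:myfunc", "interval", seconds=10)  # also valid: textual reference
scheduler.start()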