Alex Grönholm
@agronholm
maybe add a filter?
Forest Johnson
@ForestJohnson
sounds like something that someone who knows how to use python would do
:D
Forest Johnson
@ForestJohnson
ah. I thought filters could only remove logs, which is not what I want, I want to modify their log level

but it says

filter(record)

... If deemed appropriate, the record may be modified in-place by this method.

Alex Grönholm
@agronholm
why not just filter out that noisy job?
Forest Johnson
@ForestJohnson
Because I might want to access those logs later by turning on DEBUG logging temporarily
I have a silly solution that works, obviously a filter is the proper way. So I will try that :)
Forest Johnson
@ForestJohnson
well crumbs. actually the filter appears to not be able to modify the log level
but return False does filter them out
maybe I will simply return False if the current configured log level is not DEBUG
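That last idea (return False from the filter unless DEBUG logging is enabled) can be sketched with a standard `logging.Filter`; the `"noisy job"` match string here is hypothetical, so match on whatever actually identifies the chatty records:

```python
import logging

class DropNoisyUnlessDebug(logging.Filter):
    """Suppress records from a chatty job unless DEBUG logging is on.

    "noisy job" is a placeholder marker; match on whatever identifies
    the real job's records (logger name, message text, etc.).
    """

    def filter(self, record: logging.LogRecord) -> bool:
        # When the root logger is set to DEBUG, let everything through.
        if logging.getLogger().isEnabledFor(logging.DEBUG):
            return True
        # Otherwise drop the noisy records and keep the rest.
        return "noisy job" not in record.getMessage()
```

Attach it with `some_logger.addFilter(DropNoisyUnlessDebug())`; filters may also be attached to handlers.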
Forest Johnson
@ForestJohnson
thanks Alex Cheers!!!! :beers:
Alex Grönholm
@agronholm
:thumbsup:
cecemel
@cecemel
Hello, I encountered a "skipped: maximum number of running instances reached". The job should still be running; is there any way I can get information about this specific job (e.g. how long it has been running, what it was doing, whether anything else is going on)?
Alex Grönholm
@agronholm
not via the apscheduler api, sorry
you can log these things in the target function itself of course
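A minimal sketch of that advice: wrap the target function so it logs when it starts and how long it ran. The decorator name is made up for illustration; this is not an APScheduler API.

```python
import functools
import logging
import time

def log_job_activity(func):
    """Log start and duration of a scheduled job's target function.

    Illustrative wrapper only; APScheduler itself does not expose this.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log = logging.getLogger(func.__module__)
        start = time.monotonic()
        log.info("job %s starting", func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            log.info("job %s finished after %.2fs",
                     func.__name__, time.monotonic() - start)
    return wrapper
```

Decorate the function before passing it to `add_job()` and the scheduler will run the wrapped version.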
cecemel
@cecemel
ok thanks!
It's a weird state we ended up in, because normally this job is ultra stable (it ran for months before). But anyway... thanks. Looking forward to the timeout feature in APScheduler 4.0, no pressure :-)
cecemel
@cecemel
Just an extra question: we use the BackgroundScheduler. The way it locks jobs, I assume it is fully in memory? There is no lockfile or anything created that might be a 'residue' when the whole service running the scheduler restarts? (It's a Docker-based service; if a job crashes on an unexpected exception, the service crashes and gets restarted automatically.)
Alex Grönholm
@agronholm
no need for lock files, as everything is in memory (unless you use a persistent job store)
@cecemel
Mirkenan Kazımzade
@Kenan7
any way to control cron jobs? e.g. check whether one executed or not, and retry if it didn't
Alex Grönholm
@agronholm
@Kenan7 you can do that yourself with a try...except loop
maybe use a library like tenacity to do progressive back-off
Mirkenan Kazımzade
@Kenan7
where do I put try except? it's a cron job added by add_job method
Mirkenan Kazımzade
@Kenan7
@agronholm
Alex Grönholm
@agronholm
@Kenan7 in the function you scheduled
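The advice above can be sketched with a plain retry loop standing in for a library like tenacity; the attempt count and delay are illustrative:

```python
import logging
import time

def retrying(attempts=3, delay=1.0):
    """Retry the decorated function on exception.

    Plain-loop stand-in for a retry library such as tenacity.
    """
    def decorate(func):
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    logging.exception("attempt %d/%d of %s failed",
                                      attempt, attempts, func.__name__)
                    if attempt == attempts:
                        raise  # give up after the final attempt
                    time.sleep(delay)
        return wrapper
    return decorate

# The decorated function is what gets passed to add_job, e.g.:
# scheduler.add_job(flaky_job, 'cron', hour=3)
@retrying(attempts=3, delay=0.5)
def flaky_job():
    ...  # the real work goes here
```

With tenacity you would instead decorate the function with `@tenacity.retry(...)`, which adds progressive back-off and stop conditions.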
José Flores
@FloresFactorB_twitter
hello, does someone know how to get the failed jobs?
Alex Grönholm
@agronholm
@FloresFactorB_twitter I assume you mean you want to get notified when a job fails (this can happen any number of times depending on how many times the job is run)
add a listener to the scheduler
José Flores
@FloresFactorB_twitter
Basically I want to do something similar to what this endpoint does but to get me the jobs that failed and if they will run again. How can I achieve that in the simplest way?
@app.get("/schedule/show_schedules/", response_model=CurrentScheduledJobsResponse, tags=["schedule"])
async def get_scheduled_syncs():
    """Will provide a list of currently Scheduled Tasks"""
    schedules = []
    for job in Schedule.get_jobs():
        schedules.append({
            "job_id": str(job.id),
            "run_frequency": str(job.trigger),
            "next_run": str(job.next_run_time),
        })
    return {"jobs": schedules}
Alex Grönholm
@agronholm
@FloresFactorB_twitter what do you mean by a failed job?
if the target function of a job raises an exception, it will still be run at its next calculated run time
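The listener suggestion can be sketched like this. The `failed_jobs` dict is a hypothetical store for the endpoint to read from; the `add_listener` wiring assumes APScheduler 3.x, whose error events carry `job_id`, `exception` and `scheduled_run_time`:

```python
# Remember failed jobs so an endpoint like show_schedules can report them.
failed_jobs = {}

def job_error_listener(event):
    """Record an APScheduler EVENT_JOB_ERROR event (3.x API)."""
    failed_jobs[event.job_id] = {
        "exception": repr(event.exception),
        "scheduled_run_time": str(event.scheduled_run_time),
    }

# Wiring, with a running scheduler instance:
# from apscheduler.events import EVENT_JOB_ERROR
# scheduler.add_listener(job_error_listener, EVENT_JOB_ERROR)
```

Whether a failed job "will run again" can then be answered by looking the job up with `scheduler.get_job(job_id)` and checking its `next_run_time`.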
BPHT
@newTypeGeek
@agronholm I would like to clarify the CronTrigger expression a-b. The doc says the job would fire on any value within the a-b range (a must be smaller than b). So if I configure hour='4-6', does it mean it would only trigger the job once between 0400 and 0600?
what is the correct expression if I want to trigger the job every hour between 0400 and 0600 (i.e. fires at 0400, 0500 and 0600)?
BPHT
@newTypeGeek
oh... I overlooked the doc; using , will do. So hour='4,5,6' will trigger the cron job to fire at 0400, 0500 and 0600
Alex Grönholm
@agronholm
@newTypeGeek both will do that
like it says, any value between 4-6. Those values are 4,5,6.
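The equivalence can be checked with a toy expansion of cron field expressions (this helper is purely illustrative; APScheduler's CronTrigger does the real parsing internally):

```python
def expand_cron_field(expr: str) -> set:
    """Expand a cron field like '4-6' or '4,5,6' into the values it matches.

    Toy illustration of range and list expressions; not APScheduler code.
    """
    values = set()
    for part in expr.split(","):
        if "-" in part:
            low, high = (int(n) for n in part.split("-"))
            values.update(range(low, high + 1))  # ranges are inclusive
        else:
            values.add(int(part))
    return values
```

Both `hour='4-6'` and `hour='4,5,6'` denote the same set {4, 5, 6}, so the trigger fires at 04:00, 05:00 and 06:00 either way.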
Suhrob Malikov
@malikovss
hi there. how do I give an attribute to the job function?
Alex Grönholm
@agronholm
what do you mean?
an attribute? to what?
Suhrob Malikov
@malikovss
to a function, e.g. def func(num: int): print(num); schedule.add_job(func, num=1) @agronholm
Alex Grönholm
@agronholm
you mean arguments, not attributes?
have you read the API documentation for add_job()?
Suhrob Malikov
@malikovss
@agronholm I couldn't find any example in the docs.
Alex Grönholm
@agronholm
you found the API documentation for add_job() though?
Suhrob Malikov
@malikovss
@agronholm thanks
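For reference, `add_job()` accepts `args` and `kwargs` parameters for the target function's arguments; passing them directly as keyword arguments to `add_job` (as in the snippet above) does not work. A sketch, assuming a configured scheduler instance:

```python
def func(num: int) -> None:
    print(num)

# With a running scheduler (e.g. BackgroundScheduler), arguments for the
# target function go in args/kwargs, not directly on add_job:
# scheduler.add_job(func, 'interval', seconds=10, kwargs={'num': 1})
# scheduler.add_job(func, 'interval', seconds=10, args=[1])
```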