Forest Johnson
@ForestJohnson
I am using Flask with a logging dict config
logs work fine, it's just that I have one specific task which runs every 5 seconds
I would like to avoid those logs being printed all the time (currently we run our application at INFO log level in production)
I am making progress. I think i have it almost figured out
Alex Grönholm
@agronholm
maybe add a filter?
Forest Johnson
@ForestJohnson
sounds like something that someone who knows how to use Python would do
:D
Forest Johnson
@ForestJohnson
ah. I thought filters could only remove logs, which is not what I want; I want to modify their log level

but it says

filter(record)

... If deemed appropriate, the record may be modified in-place by this method.

Alex Grönholm
@agronholm
why not just filter out that noisy job?
Forest Johnson
@ForestJohnson
Because I might want to access those logs later by turning on DEBUG logging temporarily
I have a silly solution that works, obviously a filter is the proper way. So I will try that :)
Forest Johnson
@ForestJohnson
well, crumbs. actually the filter appears to be unable to modify the log level
but returning False does filter them out
maybe I will simply return False if the currently configured log level is not DEBUG
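The return-False approach described above can be sketched with a standard logging.Filter; the logger name "app.heartbeat" and the handler wiring are assumptions for illustration:

```python
import logging

class NoisyTaskFilter(logging.Filter):
    """Drop records from the chatty periodic task unless DEBUG is enabled."""

    def filter(self, record):
        if record.name.startswith("app.heartbeat"):
            # Let these records through only when the root logger is at DEBUG
            return logging.getLogger().getEffectiveLevel() <= logging.DEBUG
        return True

handler = logging.StreamHandler()
handler.addFilter(NoisyTaskFilter())  # handler filters see all propagated records
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.getLogger("app.heartbeat").info("tick")    # suppressed at INFO
logging.getLogger("app.worker").info("real work")  # still shown
```

Attaching the filter to the handler (rather than a logger) matters: handler filters apply to every record that propagates through it, while logger filters only see records logged directly to that logger.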
Forest Johnson
@ForestJohnson
thanks Alex Cheers!!!! :beers:
Alex Grönholm
@agronholm
:thumbsup:
cecemel
@cecemel
Hello, I encountered a "skipped: maximum number of running instances reached". The job should still be running; is there any way I can get information about this specific job (e.g. how long it has been running, what it was doing, whether anything else is going on)?
Alex Grönholm
@agronholm
not via the apscheduler api, sorry
you can log these things in the target function itself of course
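One way to do that logging in the target function without cluttering every job: a small decorator that records start, end, and duration. The decorator name and the example job are hypothetical; the commented add_job call assumes a configured scheduler.

```python
import functools
import logging
import time

def log_job_activity(func):
    """Wrap a job function so each run logs its start, end, and duration."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log = logging.getLogger(func.__module__)
        start = time.monotonic()
        log.info("job %s started", func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            log.info("job %s finished in %.1fs", func.__name__,
                     time.monotonic() - start)
    return wrapper

@log_job_activity
def sync_data():
    time.sleep(0.05)  # stand-in for the real work

# scheduler.add_job(sync_data, "interval", minutes=5)  # schedule the wrapped function
```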
cecemel
@cecemel
ok thanks!
It's a weird state we ended up in, because normally this job is ultra stable (it ran for months before). But anyway... thanks. Looking forward to the timeout feature in APScheduler 4.0, no pressure :-)
cecemel
@cecemel
Just an extra question: we use the BackgroundScheduler; the way it locks jobs, I assume it is fully in memory? There is no lock file or anything created that might be a residue when the whole service running the scheduler restarts? (It's a Docker-based service; if a job crashes on an unexpected exception, the service crashes and gets restarted automatically.)
Alex Grönholm
@agronholm
no need for lock files, as everything is in memory (unless you use a persistent job store)
@cecemel
Mirkenan Kazımzade
@Kenan7
any way to control cron jobs? e.g. check whether a run executed or not, and retry if it didn't
Alex Grönholm
@agronholm
@Kenan7 you can do that yourself with a try...except loop
maybe use a library like tenacity to do progressive back-off
Mirkenan Kazımzade
@Kenan7
where do I put the try...except? it's a cron job added by the add_job method
Mirkenan Kazımzade
@Kenan7
@agronholm
Alex Grönholm
@agronholm
@Kenan7 in the function you scheduled
José Flores
@FloresFactorB_twitter
hello, does someone know how to get the failed jobs?
Alex Grönholm
@agronholm
@FloresFactorB_twitter I assume you mean you want to get notified when a job fails (this can happen any number of times depending on how many times the job is run)
add a listener to the scheduler
José Flores
@FloresFactorB_twitter
Basically I want to do something similar to what this endpoint does, but to get the jobs that failed and whether they will run again. How can I achieve that in the simplest way?
@app.get("/schedule/show_schedules/",response_model=CurrentScheduledJobsResponse,tags=["schedule"])
async def get_scheduled_syncs():
    """
    Will provide a list of currently Scheduled Tasks

    """
    schedules = []
    for job in Schedule.get_jobs():
        schedules.append({"job_id": str(job.id), "run_frequency": str(job.trigger), "next_run": str(job.next_run_time)})
    return {"jobs":schedules}
Alex Grönholm
@agronholm
@FloresFactorB_twitter what do you mean by a failed job?
if the target function of a job raises an exception, the job will still be run at its next calculated run time
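A minimal sketch of the listener approach, assuming APScheduler 3.x and a running scheduler named `scheduler`; the `failed_jobs` dict and the endpoint wiring are illustrative only:

```python
# Track the latest failure per job via a scheduler listener.
failed_jobs = {}

def on_job_error(event):
    """EVENT_JOB_ERROR events carry .job_id and .exception attributes."""
    failed_jobs[event.job_id] = repr(event.exception)

# from apscheduler.events import EVENT_JOB_ERROR
# scheduler.add_listener(on_job_error, EVENT_JOB_ERROR)
#
# A "failed jobs" endpoint can then join this dict with scheduler.get_jobs():
# a job present in both has failed at least once, and a non-None
# next_run_time means it will run again.
```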
BPHT
@newTypeGeek
@agronholm I would like to clarify the CronTrigger expression a-b. The doc says the job would fire on any value within the a-b range (a must be smaller than b). So if I configure hour='4-6', does it mean it would only trigger the job once between 0400 and 0600?
what is the correct expression if I want to trigger the job every hour between 0400 and 0600 (i.e. fire at 0400, 0500 and 0600)?
BPHT
@newTypeGeek
oh... I overlooked the doc; using , will do. So hour='4,5,6' will trigger the cron job to fire at 0400, 0500 and 0600
Alex Grönholm
@agronholm
@newTypeGeek both will do that
like it says, it fires on any value within 4-6. Those values are 4, 5 and 6.
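As a toy illustration that the two spellings are equivalent (this is not APScheduler's parser, just a sketch of the field semantics):

```python
def expand_hour_field(field):
    """Expand a cron-style hour field: 'a-b' covers every value in the
    inclusive range, ',' separates individual values or ranges."""
    hours = set()
    for part in field.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            hours.update(range(lo, hi + 1))
        else:
            hours.add(int(part))
    return sorted(hours)

# Both spellings produce the same firing hours:
assert expand_hour_field("4-6") == expand_hour_field("4,5,6") == [4, 5, 6]
```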
Suhrob Malikov
@malikovss
hi there. how do I give an attribute to a job function?
Alex Grönholm
@agronholm
what do you mean?
an attribute? to what?
Suhrob Malikov
@malikovss
to the function, e.g. def func(num: int): print(num); schedule.add_job(func, num=1) @agronholm
Alex Grönholm
@agronholm
you mean arguments, not attributes?
have you read the API documentation for add_job()?
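For reference, job-function arguments go in add_job()'s args/kwargs parameters in APScheduler 3.x; the scheduler call below is shown as a comment since it assumes a configured scheduler:

```python
def func(num: int) -> int:
    print(num)
    return num

# With APScheduler, arguments for the job function are passed via
# add_job's args/kwargs parameters, e.g.:
#
#   scheduler.add_job(func, "interval", seconds=5, kwargs={"num": 1})
#
# which makes the scheduler call func(num=1) on each run:
func(num=1)
```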