Forest Johnson
@ForestJohnson
or, set the log level of specific tasks?
    def _configure(self, config):
        # Set general options
        self._logger = maybe_ref(config.pop('logger', None)) or getLogger('apscheduler.scheduler')
so maybe one can pass a logger in?
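A minimal sketch of that idea, assuming APScheduler 3.x (the logger name 'my_app.scheduler' is arbitrary):

import logging
from apscheduler.schedulers.background import BackgroundScheduler

# 'logger' is popped by _configure() above, so the scheduler should use it
# in place of the default 'apscheduler.scheduler' logger
custom_logger = logging.getLogger('my_app.scheduler')
scheduler = BackgroundScheduler(logger=custom_logger)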
Forest Johnson
@ForestJohnson
I can try this
no dice :(
Forest Johnson
@ForestJohnson
it works when I do it to the flask logger
it looks like apscheduler is not picking up the logger I'm passing in
# monkey patch the logger_instance.info() function to call logger_instance.debug()
# https://stackoverflow.com/questions/31590152/monkey-patching-a-property

import logging

class LoggerWhichLogsInfoMessagesAtDebugLevel(logging.Logger):
    def info(self, msg, *args, **kwargs):
        # demote info() calls to DEBUG
        self.debug(msg, *args, **kwargs)

def getLoggerWhichLogsInfoMessagesAtDebugLevel(name: str):
    # swap the class of an existing logger so its info() logs at DEBUG
    original_logger = logging.getLogger(name)
    original_logger.__class__ = LoggerWhichLogsInfoMessagesAtDebugLevel
    return original_logger

# app is the Flask application
app.logger.__class__ = LoggerWhichLogsInfoMessagesAtDebugLevel
app.logger.critical("critical")
app.logger.error("error")
app.logger.warning("warning")
app.logger.info("info")
app.logger.debug("debug")
[2021-01-04 15:55:35,891] CRITICAL in __init__: critical
[2021-01-04 15:55:35,891] ERROR in __init__: error
[2021-01-04 15:55:35,891] WARNING in __init__: warning
[2021-01-04 15:55:35,891] DEBUG in __init__: info
[2021-01-04 15:55:35,891] DEBUG in __init__: debug
Forest Johnson
@ForestJohnson
ah the plot thickens, it does work, just not for the log messages I care about
Alex Grönholm
@agronholm
apscheduler logs plenty on the DEBUG level
how have you configured logging on your app?
Forest Johnson
@ForestJohnson
I am using Flask with a logging dictConfig
logs work fine, it's just that I have one specific task which runs every 5 seconds
I would like to avoid those logs being printed all the time (currently we run our application at INFO log level in production)
I am making progress. I think I have it almost figured out
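One way to keep the app at INFO while silencing a noisy logger is a level override in dictConfig (a sketch; it assumes the per-run messages come from APScheduler's executor logger):

import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'root': {'level': 'INFO'},
    'loggers': {
        # raise only this logger above INFO so its per-run messages disappear
        'apscheduler.executors.default': {'level': 'WARNING'},
    },
})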
Alex Grönholm
@agronholm
maybe add a filter?
Forest Johnson
@ForestJohnson
sounds like something that someone who knows how to use Python would do
:D
Forest Johnson
@ForestJohnson
ah. I thought filters could only remove logs, which is not what I want; I want to modify their log level

but the docs say:

filter(record)
... If deemed appropriate, the record may be modified in-place by this method.
Alex Grönholm
@agronholm
why not just filter out that noisy job?
Forest Johnson
@ForestJohnson
Because I might want to access those logs later by turning on DEBUG logging temporarily
I have a silly solution that works, obviously a filter is the proper way. So I will try that :)
Forest Johnson
@ForestJohnson
well, crumbs. actually the filter appears not to be able to modify the log level
but returning False does filter them out
maybe I will simply return False if the current configured log level is not DEBUG
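A sketch of that approach (the filter class name is made up; 'apscheduler.executors.default' is the logger APScheduler 3.x uses for its per-run messages, but verify it matches the records you want to suppress):

import logging

class DropInfoUnlessDebug(logging.Filter):
    # drop INFO records unless the root logger is running at DEBUG,
    # so the noisy job's messages only appear when DEBUG is turned on
    def filter(self, record):
        if record.levelno == logging.INFO:
            return logging.getLogger().getEffectiveLevel() <= logging.DEBUG
        return True

logging.getLogger('apscheduler.executors.default').addFilter(DropInfoUnlessDebug())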
Forest Johnson
@ForestJohnson
thanks Alex, cheers!!! :beers:
Alex Grönholm
@agronholm
:thumbsup:
cecemel
@cecemel
Hello, I encountered a "skipped: maximum number of running instances reached" error. The job should still be running; is there any way I can get information about this specific job (e.g. how long it has been running, what it was doing, if there are any other things going on)?
Alex Grönholm
@agronholm
not via the apscheduler api, sorry
you can log these things in the target function itself of course
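For instance, a hypothetical decorator for the target function that records when the job starts and how long it ran (all names here are made up):

import logging
import time
from functools import wraps

log = logging.getLogger(__name__)

def log_run(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        log.info("job %s starting", func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            log.info("job %s finished after %.1fs", func.__name__, time.monotonic() - start)
    return wrapper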
cecemel
@cecemel
ok thanks!
It's a weird state we ended up in, because normally this job is ultra stable (it ran for months before). But anyway... thanks. Looking forward to the timeout feature in APScheduler 4.0, no pressure :-)
cecemel
@cecemel
Just an extra question: we use the BackgroundScheduler; the way it locks jobs, I assume it is fully in memory? There is no lockfile or anything created that might be a 'residue' when the whole service running the scheduler restarts? (it's a Docker-based service; if a job crashes on an unexpected exception, the service crashes and gets restarted automatically)
Alex Grönholm
@agronholm
@cecemel no need for lock files, as everything is in memory (unless you use a persistent job store)
Mirkenan Kazımzade
@Kenan7
any way to control cron jobs? i.e. check whether one executed or not, and retry if it didn't
Alex Grönholm
@agronholm
@Kenan7 you can do that yourself with a try...except loop
maybe use a library like tenacity to do progressive back-off
Mirkenan Kazımzade
@Kenan7
where do I put the try...except? it's a cron job added by the add_job method
Mirkenan Kazımzade
@Kenan7
@agronholm
Alex Grönholm
@agronholm
@Kenan7 in the function you scheduled
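A hypothetical sketch of that, using tenacity for the back-off (the job function and trigger parameters are made up; scheduler is an already-created APScheduler instance):

from tenacity import retry, stop_after_attempt, wait_exponential

@retry(wait=wait_exponential(multiplier=1, max=60), stop=stop_after_attempt(5))
def fetch_data():
    # the scheduled work; tenacity re-runs it with exponential back-off
    # if it raises, up to five attempts
    ...

scheduler.add_job(fetch_data, 'cron', minute='*/15')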
José Flores
@FloresFactorB_twitter
hello, does someone know how to get the failed jobs?
Alex Grönholm
@agronholm
@FloresFactorB_twitter I assume you mean you want to get notified when a job fails (this can happen any number of times depending on how many times the job is run)
add a listener to the scheduler
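A minimal sketch of such a listener (the callback name is arbitrary):

from apscheduler.events import EVENT_JOB_ERROR

def on_job_error(event):
    # fires whenever a job raises an exception
    print(f"job {event.job_id} failed: {event.exception!r}")

scheduler.add_listener(on_job_error, EVENT_JOB_ERROR)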
José Flores
@FloresFactorB_twitter
Basically I want to do something similar to what this endpoint does, but getting me the jobs that failed and whether they will run again. How can I achieve that in the simplest way?
@app.get("/schedule/show_schedules/", response_model=CurrentScheduledJobsResponse, tags=["schedule"])
async def get_scheduled_syncs():
    """
    Will provide a list of currently Scheduled Tasks
    """
    schedules = []
    for job in Schedule.get_jobs():
        schedules.append({
            "job_id": str(job.id),
            "run_frequency": str(job.trigger),
            "next_run": str(job.next_run_time),
        })
    return {"jobs": schedules}
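One hedged sketch, building on the listener idea above (the endpoint path, response shape, and failed_jobs store are all made up; Schedule is the scheduler instance from the snippet):

from apscheduler.events import EVENT_JOB_ERROR

failed_jobs = {}

def record_failure(event):
    # remember the most recent error per job id
    failed_jobs[event.job_id] = str(event.exception)

Schedule.add_listener(record_failure, EVENT_JOB_ERROR)

@app.get("/schedule/failed_schedules/", tags=["schedule"])
async def get_failed_syncs():
    # a job will run again if it is still scheduled
    return {
        job_id: {
            "error": error,
            "will_run_again": Schedule.get_job(job_id) is not None,
        }
        for job_id, error in failed_jobs.items()
    }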
Alex Grönholm
@agronholm
@FloresFactorB_twitter what do you mean by a failed job?