Charles Dabadie
@CharlesD91
We want to replace the rolling files mechanism of Logback with Odin
Sergey Kolbasov
@sergeykolbasov
Okay, I'll see what I can do during the week
Charles Dabadie
@CharlesD91
Thank you :)
Charles Dabadie
@CharlesD91
Hi @sergeykolbasov , how are you ?
I have an instance of Resource[F, Logger[F]] and I would like to send the SLF4J logs towards it
I read https://github.com/valskalla/odin#slf4j-bridge but it uses a Logger[F]. Any guidance on how to do this ?
Sergey Kolbasov
@sergeykolbasov
@CharlesD91 Usually, the bridge exists apart from the application, so it's totally fine to have F =:= IO or Task and just unsafely run it
Resource has an allocated method that returns F[(A, F[Unit])], a tuple of the acquired resource and its release action
so, something like val logger: Logger[IO] = resourceLogger.allocated.unsafeRunSync()._1 should be just fine
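A minimal sketch of that approach, assuming cats-effect 2 (which Odin targeted at the time) and an already-built resourceLogger; the shutdown hook is just an illustrative way to keep the release action around:

import cats.effect.{IO, Resource}
import io.odin.Logger

def bridgeLogger(resourceLogger: Resource[IO, Logger[IO]]): Logger[IO] = {
  // allocated hands back the acquired logger together with its release action
  val (logger, release) = resourceLogger.allocated.unsafeRunSync()
  // keep the release action for shutdown instead of dropping it on the floor
  sys.addShutdownHook(release.unsafeRunSync())
  logger
}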
Charles Dabadie
@CharlesD91
Thank you for your answer. I ended up using a var in StaticLoggerBinder that I update once my Resource is allocated. Until the var is set I return consoleLogger[Task](). It is not very Scala-ish, but I guess we have to talk to SLF4J in a Java way. In the end it works great!
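A hedged sketch of that var-with-fallback pattern (the SLF4J binder wiring itself is elided; the holder class and its fallback parameter are illustrative, not Odin's actual bridge API):

import cats.effect.IO
import io.odin.Logger

final class BridgeLoggerHolder(fallback: Logger[IO]) {
  @volatile private var underlying: Option[Logger[IO]] = None

  // called once the Resource[IO, Logger[IO]] has been allocated
  def set(logger: Logger[IO]): Unit = underlying = Some(logger)

  // what the StaticLoggerBinder delegates to: the fallback until set is called
  def current: Logger[IO] = underlying.getOrElse(fallback)
}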
Sergey Kolbasov
@sergeykolbasov
Mind that Resource.use(...).unsafeRunSync() on the resource will also release the underlying logger resources, so be careful with that
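A tiny sketch of why that is risky, assuming cats-effect 2 IO and an illustrative String in place of a real logger:

import cats.effect.{IO, Resource}

val res: Resource[IO, String] =
  Resource.make(IO(println("acquire")).map(_ => "logger"))(_ => IO(println("release")))

// prints "acquire" then "release"; by the time we hold `escaped`,
// the underlying resource has already been finalized
val escaped: String = res.use(IO.pure).unsafeRunSync()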
Sergey Kolbasov
@sergeykolbasov
@CharlesD91 check out the 0.7.0 release, it has the rolling file logger :smile:
Charles Dabadie
@CharlesD91
Hi @sergeykolbasov , cool thank you very much for your time :)
Charles Dabadie
@CharlesD91
Hello @sergeykolbasov , how are you doing ?
Sergey Kolbasov
@sergeykolbasov
light flu, otherwise I'm fine
and you?
Charles Dabadie
@CharlesD91
I would like to propose a PR for a second type of RollingFileLogger. The file where the logs are written would be fixed, and there would be an archiver function File => File that transforms the fixed log file into an archive (for instance renaming it to include a date and zipping it).
The signature of this second type of RollingFileLogger would be:
def apply[F[_]](
    fixedLogFile: File,
    archiver: File => File,
    maxFileSizeInBytes: Option[Long],
    rolloverInterval: Option[FiniteDuration],
    formatter: Formatter,
    minLevel: Level
)
What do you think ?
Ah sorry to hear that ... I'm good !
Sergey Kolbasov
@sergeykolbasov
So you're looking for compression functionality once the size or interval threshold is reached?
Charles Dabadie
@CharlesD91
Yes. And also that the log file that contains the latest log stays the same between rotations
Sergey Kolbasov
@sergeykolbasov
Well, the latter is achievable even with the LocalDateTime => File signature: you just ignore the date-time part. With the file interpolator it should be easy
Anyway, it might be tricky given the async nature. Say you get a new log message during compression: what happens to the original file?
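For reference, the "ignore the date-time part" idea above amounts to a constant naming function; a minimal sketch, assuming the LocalDateTime => File shape mentioned here (the file name is illustrative):

import java.io.File
import java.time.LocalDateTime

// always resolve to the same fixed file, regardless of the rollover timestamp
val fixedName: LocalDateTime => File = _ => new File("app.log")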
Charles Dabadie
@CharlesD91
Hmm it seems to me that right now if you ignore the date-time part of LocalDateTime => File you end up with a single file that is never rolled
Sergey Kolbasov
@sergeykolbasov
Yeah, it will truncate it atm
It uses the default options from FileSystemProvider, which are Set.of(StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.WRITE)
But isn't that what you're looking for?
  • Compress current log file on the trigger
  • Truncate the log file
  • Write to the same log file until the trigger
  • Repeat
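A minimal sketch of the rollover step in that loop, assuming compression and truncation happen in place (the rollover helper and the archiver behaviour are hypothetical, not Odin's implementation):

import java.io.File
import java.nio.file.{Files, StandardOpenOption}

// 1. compress the current log file into an archive, 2. truncate it in place,
// 3. the logger keeps writing to the same file until the next trigger
def rollover(logFile: File, archiver: File => File): File = {
  val archive = archiver(logFile) // e.g. copy to a dated .gz next to it
  Files.write(
    logFile.toPath,
    Array.emptyByteArray,
    StandardOpenOption.TRUNCATE_EXISTING,
    StandardOpenOption.WRITE
  )
  archive
}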
Charles Dabadie
@CharlesD91
If the compressed log files have different names containing different dates and are kept under a fixed number of files like 30, then yes it works for me
Sergey Kolbasov
@sergeykolbasov
Well, the trick here is the async part, like I said. When you start the compression, log messages might still arrive, so between steps 1 and 2 those logs have to be stored somewhere
Charles Dabadie
@CharlesD91
I agree
We have our own custom implementation of a RollingFileLogger that archives, and we use a Semaphore to block concurrent access
On top of it we use your .withAsync, so that the logging overhead is not felt
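A hedged sketch of that guarding idea with a cats-effect 2 Semaphore (the write and archive actions are stand-ins, not Odin's internals):

import cats.effect.{ExitCode, IO, IOApp}
import cats.effect.concurrent.Semaphore

object GuardedRolloverSketch extends IOApp {
  def write(msg: String): IO[Unit] = IO(println(s"append: $msg"))
  def archive: IO[Unit]            = IO(println("compress + truncate"))

  def run(args: List[String]): IO[ExitCode] =
    for {
      lock <- Semaphore[IO](1)
      // every append and every rollover takes the single permit,
      // so writes are blocked while the archive step is running
      _ <- lock.withPermit(write("hello"))
      _ <- lock.withPermit(archive)
    } yield ExitCode.Success
}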
Sergey Kolbasov
@sergeykolbasov

Ah, I see
It's also one possible solution. The other two I have in mind are a temporary log file and an in-memory queue
The latter is simulated with the async logger, indeed
Charles Dabadie
@CharlesD91
For the first option you would still have to block at some point, when you move the logs back from the temporary log file to the fixed main one?
Sergey Kolbasov
@sergeykolbasov
yeah, true
I'd be glad to review a PR with this functionality, in case you wish to make it :smile:
I wonder if it's possible to generalize it so it plays nicely with the current implementation, but I'm open-minded anyway
Charles Dabadie
@CharlesD91
Cool :)
Last thing: in our custom implementation we are cheating a bit and using fs2.Stream to simplify the periodic tasks
Is it a dependency you would consider adding to the Odin project ?
Sergey Kolbasov
@sergeykolbasov
What's the issue with using Timer for the periodic tasks?
Odin uses it in the async logger, for instance
Charles Dabadie
@CharlesD91
fs2.Stream basically handles the infinite recursive Timer loop for you. But it is a minor optimization, and I could use Timer directly
also it handles the stopping of the loop nicely via a SignallingRef
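A hedged sketch of that shape with fs2 2.x on cats-effect 2 (the tick action and timings are illustrative):

import scala.concurrent.duration._
import cats.effect.{ExitCode, IO, IOApp}
import cats.syntax.apply._
import fs2.Stream
import fs2.concurrent.SignallingRef

object PeriodicTaskSketch extends IOApp {
  def run(args: List[String]): IO[ExitCode] =
    for {
      halt <- SignallingRef[IO, Boolean](false)
      ticks = Stream
        .awakeEvery[IO](1.second)          // the periodic trigger
        .evalMap(_ => IO(println("tick"))) // the periodic job goes here
        .interruptWhen(halt)               // stop cleanly when signalled
      fiber <- ticks.compile.drain.start   // run the loop in the background
      _     <- IO.sleep(5.seconds) *> halt.set(true) *> fiber.join
    } yield ExitCode.Success
}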
Sergey Kolbasov
@sergeykolbasov

Resource with Fiber cancellation works as well, I think it's almost a one-liner or something
AsyncLogger example:
Resource
  .make {
    for {
      queue <- ConcurrentQueue.withConfig[F, LoggerMessage](queueCapacity, ChannelType.MPSC)
      logger = AsyncLogger(queue, timeWindow, inner)
      fiber <- logger.runF // it returns a fiber with inf timer loop
    } yield (fiber, logger)
  } {
    case (fiber, _) => fiber.cancel // on resource release we should cancel the fiber
  }
  .map {
    case (_, logger) => logger
  }
Charles Dabadie
@CharlesD91
Ok cool, I will use this. Thank you !
Sergey Kolbasov
@sergeykolbasov
Thanks!
Charles Dabadie
@CharlesD91
Hi @sergeykolbasov !
How are you these days ?