Sergey Kolbasov
@sergeykolbasov
light flu, otherwise I'm fine
and you?
Charles Dabadie
@CharlesD91
I would like to propose a PR on a second type of RollingFileLogger. The file where the logs are written would be fixed and there would be an archiver function File => File that would transform the fixed log file into an archive (it would for instance rename it to include a date + zip it).
The signature of this second type of RollingFileLogger would be:
def apply[F[_]](
    fixedLogFile: File,
    archiver: File => File,
    maxFileSizeInBytes: Option[Long],
    rolloverInterval: Option[FiniteDuration],
    formatter: Formatter,
    minLevel: Level
)
What do you think ?
Ah sorry to hear that ... I'm good !
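For illustration only, here is a minimal sketch of what such a File => File archiver could look like; the name archiver and the date-plus-zip scheme follow the proposal above, and nothing here is Odin API:

import java.io.{File, FileInputStream, FileOutputStream}
import java.time.LocalDate
import java.util.zip.{ZipEntry, ZipOutputStream}

// Hypothetical archiver: copy the rolled log into a date-stamped zip next to it
// and return the archive file (requires Java 9+ for InputStream#transferTo).
val archiver: File => File = { logFile =>
  val archive = new File(logFile.getParentFile, s"${logFile.getName}-${LocalDate.now()}.zip")
  val in      = new FileInputStream(logFile)
  val out     = new ZipOutputStream(new FileOutputStream(archive))
  try {
    out.putNextEntry(new ZipEntry(logFile.getName))
    in.transferTo(out)
    out.closeEntry()
  } finally {
    in.close()
    out.close()
  }
  archive
}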
Sergey Kolbasov
@sergeykolbasov
So you're looking for compression functionality once the size limit or interval is reached?
Charles Dabadie
@CharlesD91
Yes. And also that the log file that contains the latest log stays the same between rotations
Sergey Kolbasov
@sergeykolbasov
Well, the latter is achievable even with the LocalDateTime => File signature, you just ignore the date-time part. With the file interpolator it should be easy
Anyway, it might be tricky due to the async nature. Say you get a new log message during compression, what happens to the original file?
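As an aside, the point about ignoring the date-time part boils down to a constant naming function; a tiny sketch, assuming the LocalDateTime => File shape mentioned above (the path is made up):

import java.io.File
import java.time.LocalDateTime

// Ignore the timestamp and always return the same file.
val fixedName: LocalDateTime => File = _ => new File("/var/log/app.log")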
Charles Dabadie
@CharlesD91
Hmm it seems to me that right now if you ignore the date-time part of LocalDateTime => File you end up with a single file that is never rolled
Sergey Kolbasov
@sergeykolbasov
Yeah, it will truncate it atm
It uses the default options from FileSystemProvider, which are
Set.of(StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING,
            StandardOpenOption.WRITE);
But isn't that what you're looking for?
  • Compress current log file on the trigger
  • Truncate the log file
  • Write to the same log file until the trigger
  • Repeat
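For illustration, a rough cats-effect sketch of the four steps above; rollOver, live and archiver are made-up names, not Odin's implementation:

import cats.effect.IO
import java.nio.file.{Files, Path, StandardCopyOption}
import java.time.LocalDateTime

// Hypothetical roll-over: copy the live file to a timestamped sibling, hand the
// copy to an archiver (e.g. a zip step), then truncate the live file in place
// so writes keep going to the same path until the next trigger.
def rollOver(live: Path, archiver: Path => IO[Unit]): IO[Unit] =
  for {
    stamp <- IO(LocalDateTime.now().toString.replace(':', '-'))
    copy   = live.resolveSibling(s"${live.getFileName}-$stamp")
    _     <- IO(Files.copy(live, copy, StandardCopyOption.REPLACE_EXISTING))
    _     <- archiver(copy)
    _     <- IO(Files.write(live, Array.emptyByteArray))
  } yield ()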
Charles Dabadie
@CharlesD91
If the compressed log files have different names containing different dates and are kept under a fixed number of files like 30, then yes it works for me
Sergey Kolbasov
@sergeykolbasov
Well, the trick here is the async part, like I said. When you start the compression, log messages might still arrive, so between steps 1 and 2 those logs have to be stored somewhere
Charles Dabadie
@CharlesD91
I agree
We have our own custom implementation of a RollingFileLogger that archives, and we use a Semaphore to block concurrent access
On top of it we use your .withAsync, so that the logging latency is not felt
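A sketch of that Semaphore approach under cats-effect 2 (the Timer/ContextShift era the chat assumes); the println bodies are placeholders for the real write and roll-over actions:

import cats.effect.{ContextShift, IO}
import cats.effect.concurrent.Semaphore

// Hypothetical guard: a single-permit semaphore so that appending to the log
// file and archiving/truncating it can never run concurrently.
def program(implicit cs: ContextShift[IO]): IO[Unit] =
  for {
    sem     <- Semaphore[IO](1)
    write    = sem.withPermit(IO(println("append a log line")))
    rollOver = sem.withPermit(IO(println("archive + truncate the file")))
    _       <- write
    _       <- rollOver
  } yield ()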
Sergey Kolbasov
@sergeykolbasov

Ah, I see

It's also one possible solution. The other two I have in mind are a temporary log file and an in-memory queue

The latter is simulated with the async logger, indeed
Charles Dabadie
@CharlesD91
For the first option, you would still have to block at some point, when you move the logs from the temporary log file back to the fixed main one?
Sergey Kolbasov
@sergeykolbasov
yeah, true

I'd be glad to review a PR with this functionality in case you wish to make it :smile:

I wonder if it's possible to generalize it so it would play nicely with the current implementation, but I'm open-minded anyway

Charles Dabadie
@CharlesD91
Cool :)
Last thing: in our custom implementation we are cheating a bit and using fs2.Stream to simplify the periodic tasks
Is it a dependency you would consider adding to the Odin project ?
Sergey Kolbasov
@sergeykolbasov
What about using Timer for the periodic tasks?
Odin uses it in the async logger, for instance
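For reference, a bare-bones Timer loop of the kind being discussed, under cats-effect 2; repeatEvery is a made-up name and not Odin's implementation:

import cats.effect.{IO, Timer}
import scala.concurrent.duration.FiniteDuration

// Hypothetical periodic loop: run the task, sleep, recurse forever. In practice
// it would be started as a Fiber and cancelled on release, as in the Resource
// example further down.
def repeatEvery(interval: FiniteDuration)(task: IO[Unit])(implicit timer: Timer[IO]): IO[Unit] =
  task.flatMap(_ => timer.sleep(interval)).flatMap(_ => repeatEvery(interval)(task))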
Charles Dabadie
@CharlesD91
fs2.Stream basically handles the infinite recursive Timer loop for you. But it is a minor optimization, and I could use Timer directly
also it handles the stopping of the loop nicely via a SignallingRef
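A rough fs2 2.x sketch of what is described here, a tick stream interrupted via a SignallingRef; periodic is a made-up name:

import cats.effect.{ContextShift, IO, Timer}
import fs2.Stream
import fs2.concurrent.SignallingRef
import scala.concurrent.duration.FiniteDuration

// Hypothetical periodic loop: emit a tick every `interval`, run the task on
// each tick, and stop as soon as the stop signal flips to true.
def periodic(interval: FiniteDuration, task: IO[Unit], stop: SignallingRef[IO, Boolean])(
    implicit timer: Timer[IO], cs: ContextShift[IO]
): IO[Unit] =
  Stream
    .awakeEvery[IO](interval)
    .evalMap(_ => task)
    .interruptWhen(stop)
    .compile
    .drain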
Sergey Kolbasov
@sergeykolbasov

Resource with Fiber cancellation works as well, I think it's almost a one-liner or something

AsyncLogger example:

Resource
  .make {
    for {
      queue <- ConcurrentQueue.withConfig[F, LoggerMessage](queueCapacity, ChannelType.MPSC)
      logger = AsyncLogger(queue, timeWindow, inner)
      fiber <- logger.runF // it returns a fiber with the infinite timer loop
    } yield {
      (fiber, logger)
    }
  } {
    case (fiber, _) => fiber.cancel // on resource release we cancel the fiber
  }
  .map {
    case (_, logger) => logger
  }
Charles Dabadie
@CharlesD91
Ok cool, I will use this. Thank you !
Sergey Kolbasov
@sergeykolbasov
Thanks!
Charles Dabadie
@CharlesD91
Hi @sergeykolbasov !
How are you these days ?
It is a WIP for now, but the main logic is there. I will try to finish this next week.
Sergey Kolbasov
@sergeykolbasov

Hi @CharlesD91

Thanks for the PR, I'll take a look at it tmrw/weekend

Diem
@bi0h4ck
Hi, I have a Logger[F] in a trait WithLogger, and a class A extends WithLogger. Is there any way I can customize the log message to print out the class name A? By default, the class name that holds the Logger[F], which is WithLogger, is printed out.
Sergey Kolbasov
@sergeykolbasov
Hi @bi0h4ck
How do you call the logger itself? Do you call it directly? The Position is derived during the call of .debug/.info/.warn/etc. as an implicit parameter
Diem
@bi0h4ck

I didn't call it directly. I have a function logAndReturn that takes error: SomeError and returns SomeError. In this function, I call Logger[F].error to log the error message and then return SomeError.

So in the service layer, for instance, when logAndReturn is called, the className/functionName of WithLogger#logAndReturn is printed out. But I want the service className to be printed out

Sergey Kolbasov
@sergeykolbasov
Okay, got it. Then I suggest modifying the signature of your wrapper function a bit to include an implicit parameter of type https://github.com/valskalla/odin/blob/master/core/src/main/scala/io/odin/meta/Position.scala
then it'll be propagated to the logger call as an implicit parameter as well
so it would be something like
def logAndReturn(e: SomeError)(implicit pos: io.odin.meta.Position): F[SomeError]

Essentially, when the implicit is missing it's automatically derived here:
https://github.com/valskalla/odin/blob/master/core/src/main/scala/io/odin/meta/Position.scala#L14-L24

using a macro from the sourcecode library

Which means that the correct position will be derived at the place of invocation of your wrapper method
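A sketch of the suggested shape, assuming the logger methods accept the Position implicitly as described above; SomeError, WithLogger and the Functor constraint are illustrative:

import cats.Functor
import cats.syntax.functor._
import io.odin.Logger
import io.odin.meta.Position

final case class SomeError(message: String)

// Because `pos` is an implicit parameter of logAndReturn, it is derived at the
// call site (the service class), not inside WithLogger.
trait WithLogger[F[_]] {
  implicit def F: Functor[F]
  def logger: Logger[F]

  def logAndReturn(error: SomeError)(implicit pos: Position): F[SomeError] =
    logger.error(error.message).as(error) // pos is propagated to the logger call
}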
Diem
@bi0h4ck
I see. I was trying to modify Position in LoggerMessage but sourcecode.File gives me File.type so I couldn’t access value
Anyway, I will try out your suggestion. Thanks so much for the help.
Sergey Kolbasov
@sergeykolbasov
glad to help :)
Diem
@bi0h4ck

I modified the Position like this:
object Position {
  implicit def derivePosition(
      implicit fileName: sourcecode.File,
      packageName: sourcecode.Pkg,
      line: sourcecode.Line
  ): Position =
    io.odin.meta.Position(fileName.value, fileName.value, packageName.value, line.value)
}
and I got the className printed out as I wanted.

Thank you so much :)

By the way, our team loves your awesome logging algebra Odin