    Darin Gordon
    @Dowwie
    hello? Russell?
    @rcoh ping!
    Russell Cohen
    @rcoh
    hello @Dowwie !
    Sorry, you're the first person to ever use this
    I'll be active and check in here periodically in the future
    Darin Gordon
    @Dowwie
    @rcoh hey your email address on github doesn't work (return to sender)
    @rcoh what need is anglegrinder addressing that isn't addressed by log parsing, alerting, and dashboards like grafana? to me, anglegrinder seems like a nice, simpler solution to monitoring -- especially if it can help with reporting AND alerting
    curious how you're using it..
    Russell Cohen
    @rcoh
    @Dowwie angle-grinder is useful in a couple of cases (for me, I'm not the only user anymore!)
    • if you're in an env that doesn't have an ELK(G) stack, or you want realtime metrics from a single server, angle-grinder is great. Even with clients that have an ELK stack, sometimes I'll deploy to a canary, then ssh into the server and do something like agrind | json | where level == "error" | count by message
    • if you have a text file of data, angle-grinder is a quick way to do some analytics on it
    • There's no alerting or even graphing right now -- it's just a simple CLI tool
    If you really wanted to, you could plug angle grinder into some sort of alerting framework
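For anyone reading along, the `count by` step in that pipeline is just a grouped counter over the rows that survive the `where` filter. A toy Rust sketch (the rows and field names are made up, and this is not agrind's actual implementation):

```rust
use std::collections::HashMap;

/// Toy version of `where level == "error" | count by message`,
/// assuming each log line has already been parsed into (level, message).
fn count_errors_by_message(rows: &[(&str, &str)]) -> Vec<(String, u64)> {
    let mut counts: HashMap<&str, u64> = HashMap::new();
    for &(level, message) in rows {
        if level == "error" {
            *counts.entry(message).or_insert(0) += 1;
        }
    }
    let mut sorted: Vec<(String, u64)> =
        counts.into_iter().map(|(m, c)| (m.to_string(), c)).collect();
    // Highest count first, like an aggregate table.
    sorted.sort_by(|a, b| b.1.cmp(&a.1));
    sorted
}

fn main() {
    let rows = [
        ("error", "db timeout"),
        ("info", "request ok"),
        ("error", "db timeout"),
        ("error", "bad request"),
    ];
    for (message, count) in count_errors_by_message(&rows) {
        println!("{}\t{}", count, message);
    }
}
```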
    Russell Cohen
    @rcoh
    @Dowwie what public email is visible? I just tested rcoh@rcoh.me and it worked (although I occasionally mess up the MX records)
    Darin Gordon
    @Dowwie
    @rcoh I see the problem. I tried .com rather than .me
    Michael Smith
    @lasthemy_twitter
    @rcoh I'm interested in extending angle-grinder to be a fairly simple log reformatter. I was thinking of adding support for logfmt (https://www.brandur.org/logfmt via https://github.com/brandur/logfmt) alongside json, and also something like a printf non-aggregate operator. Does that direction seem useful?
    Russell Cohen
    @rcoh
    That would be cool!
    Not sure exactly what you mean by "printf non-aggregate operator" -- being able to specify a structured output format, e.g. JSON or logfmt, is definitely on the list as well. @lasthemy_twitter let me know if that's the same
    Michael Smith
    @lasthemy_twitter
    I was thinking of taking structured logs (json, logfmt) and outputting something more readable like some of the examples from brandur.org
    info | Stopping all fetchers          module=kafka.consumer.ConsumerFetcherManager
    info | Performing log compaction      module=kafka.compacter.LogCompactionManager
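A minimal sketch of that kind of renderer in Rust (the `render` helper and its field names are hypothetical, not part of angle-grinder): the message is left-padded to a fixed width so the trailing key=value pairs line up, which is what makes those examples readable.

```rust
/// Hypothetical renderer: turn a parsed record into
/// `level | <padded msg>  key=value ...`, in the spirit of the
/// brandur.org logfmt examples.
fn render(level: &str, msg: &str, extras: &[(&str, &str)]) -> String {
    // Pad the message to 30 columns so key=value pairs align across lines.
    let mut line = format!("{} | {:<30}", level, msg);
    for (k, v) in extras {
        line.push_str(&format!(" {}={}", k, v));
    }
    line
}

fn main() {
    println!("{}", render("info", "Stopping all fetchers",
        &[("module", "kafka.consumer.ConsumerFetcherManager")]));
    println!("{}", render("info", "Performing log compaction",
        &[("module", "kafka.compacter.LogCompactionManager")]));
}
```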
    Tim Stack
    @tstack
    looks like nom 5.0 is out now, are there plans to switch over?
    Russell Cohen
    @rcoh
    @tstack seems like it should be fairly compatible (and not use macros generally anymore! yay!) -- could potentially make the parser much easier to modify
    the main change that will hurt is the removal of CompleteStr
    and possibly the changes made to verbose-errors
    after a big fight with nom macros to add support for a more powerful filter, being able to use regular functions seems attractive
    Russell Cohen
    @rcoh
    @lasthemy_twitter that format is sort of like what agrind tries to do -- I'd be open to using that sort of rendering generally for non-aggregate data
    currently it looks like this:
    [dpt=53]         [dst=10.234.99.66]        [proto=UDP]          [spt=37071]        [src=10.234.77.0]
    [dpt=8485]       [dst=10.234.99.75]        [proto=TCP]          [spt=45994]        [src=10.234.66.192]
    [dpt=53]         [dst=10.234.99.66]        [proto=UDP]          [spt=59727]        [src=10.234.77.0]
    [dpt=53]         [dst=10.234.99.66]        [proto=UDP]          [spt=59727]        [src=10.234.77.0]
    [dpt=53]         [dst=10.234.99.66]        [proto=UDP]          [spt=41732]        [src=10.234.120.192]
    but that's a lot more visually noisy than the logfmt version
    Michael Smith
    @lasthemy_twitter
    Ok. I may get started on something in the next week.
    Russell Cohen
    @rcoh
    @tstack starting to dig into the nom upgrade: https://github.com/rcoh/angle-grinder/compare/nom5-upgrade?expand=1
    I'll probably do it by rewriting the parser function by function, and then finally getting the errors sorted out again.
    Getting the errors replaced is going to be blocked on nom_locate supporting nom 5.0 which may take a bit
    Michael Smith
    @lasthemy_twitter
    I'm trying to figure out how to structure the output change I have in mind. It could be a flag with argument like
    --format '{level} | {30%msg} module={module}'
    that modifies the renderer (but that ties the flag content really closely to the content parsing), it could be an inline transform of some sort
    * | json | "{level} | {30%msg} module={module}"
    or I could try to pretty up the default renderer (though I'm unclear on how to do that generically).
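A rough sketch of how that flag-based substitution could work, assuming the `{name}` / `{N%name}` width syntax from the proposal above (none of this is an existing agrind flag; the parsing is deliberately naive):

```rust
use std::collections::HashMap;

/// Hypothetical `--format` renderer: `{field}` is replaced by the row's
/// value, and `{30%field}` by the value left-padded to 30 columns.
/// Unknown fields render as empty strings.
fn apply_format(fmt: &str, row: &HashMap<&str, &str>) -> String {
    let mut out = String::new();
    let mut rest = fmt;
    while let Some(start) = rest.find('{') {
        out.push_str(&rest[..start]);
        let end = rest[start..].find('}').map(|e| start + e).unwrap_or(rest.len());
        let spec = &rest[start + 1..end];
        // `30%msg` -> width 30, field `msg`; plain `msg` -> no width.
        let (width, name) = match spec.split_once('%') {
            Some((w, n)) => (w.parse::<usize>().ok(), n),
            None => (None, spec),
        };
        let val = row.get(name).copied().unwrap_or("");
        match width {
            Some(w) => out.push_str(&format!("{:<width$}", val, width = w)),
            None => out.push_str(val),
        }
        rest = &rest[(end + 1).min(rest.len())..];
    }
    out.push_str(rest);
    out
}

fn main() {
    let row: HashMap<&str, &str> = [
        ("level", "info"),
        ("msg", "Stopping all fetchers"),
        ("module", "kafka.consumer.ConsumerFetcherManager"),
    ].into_iter().collect();
    println!("{}", apply_format("{level} | {30%msg} module={module}", &row));
}
```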
    Michael Smith
    @lasthemy_twitter
    I think I'm going to start with a new flag because the operator doesn't seem to fit any of the existing ones. All the non-aggregate operators emit structured rows, all the aggregate operators combine multiple rows.
    Tim Stack
    @tstack
    this "attackdefense" site has a lab that uses angle-grinder -- https://public.attackdefense.com/challengedetails?cid=1183
    unfortunately, you have to subscribe to get access I guess
    Tim Stack
    @tstack
    is there any interest in JIT compiling the query language using something like cranelift?
    Russell Cohen
    @rcoh
    hah that would be pretty wild!
    I mean it could be a fun experiment...and it might potentially run really fast
    I would weigh the performance gain against making it much harder to write new operators
    Tim Stack
    @tstack

    I think the operators themselves would still be written in Rust. The generated code would be piecing things together and executing expressions. It's more about changing the execution model from interpreting internal data structures to generated code that just does what needs to happen. So, in a pipeline like this:

    * | json | concat(substring(foo, 3), "bar") as baz

    The generated code would directly match the query pattern, call the JSON operator, and generate the "baz" column. So, we eliminate the work of walking over the pipeline data structure and doing the dispatching. (Really, though, I don't have a good feel for how much of a win that would be). I think the bigger win is in expressions. Not having to walk over the AST and do dispatching can help a lot, especially if there are a lot of rows. I do have experience with something like that and there was a significant win transitioning from an AST walk to a JIT model.
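The interpret-vs-compile distinction can be sketched without a real JIT: compiling the expression AST once into nested Rust closures already removes the per-row AST walk and dispatch (a cranelift backend would go further and emit machine code). Everything below is illustrative, using a made-up mini-AST rather than angle-grinder's actual one, with `substring(x, n)` read as "first n characters":

```rust
use std::collections::HashMap;

type Row = HashMap<String, String>;
type Compiled = Box<dyn Fn(&Row) -> String>;

/// Tiny expression AST, loosely modeled on
/// `concat(substring(foo, 3), "bar")`.
enum Expr {
    Field(String),
    Lit(String),
    Substring(Box<Expr>, usize),
    Concat(Box<Expr>, Box<Expr>),
}

/// Compile the AST once into nested closures; per-row evaluation is then
/// direct calls with no match/dispatch on the AST.
fn compile(expr: &Expr) -> Compiled {
    match expr {
        Expr::Field(name) => {
            let name = name.clone();
            Box::new(move |row: &Row| row.get(&name).cloned().unwrap_or_default())
        }
        Expr::Lit(s) => {
            let s = s.clone();
            Box::new(move |_: &Row| s.clone())
        }
        Expr::Substring(inner, n) => {
            let inner = compile(inner);
            let n = *n;
            Box::new(move |row: &Row| inner(row).chars().take(n).collect::<String>())
        }
        Expr::Concat(a, b) => {
            let (a, b) = (compile(a), compile(b));
            Box::new(move |row: &Row| a(row) + &b(row))
        }
    }
}

fn main() {
    // concat(substring(foo, 3), "bar") as baz
    let expr = Expr::Concat(
        Box::new(Expr::Substring(Box::new(Expr::Field("foo".into())), 3)),
        Box::new(Expr::Lit("bar".into())),
    );
    let baz = compile(&expr);

    let mut row = Row::new();
    row.insert("foo".into(), "foobar".into());
    println!("baz = {}", baz(&row));
}
```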