László Szemere
@szemere

My assumption is that someone probably wants to keep sending messages via the other processes while one of them is blocked. (Not viable with round-robin balancing.)

(In the meantime: my earlier statement was incorrect, acknowledgement was not an issue in the case of program destinations, since syslog-ng considers the message acknowledged if the write to the pipe was successful.)

Fabien Wernli
@faxm0dem
That being said, it could probably make sense to add the round-robin functionality externally as part of a configuration keyword so that it could be used for multiple destinations:
log {
  load-balance(key('HOST'), forks(5), destination(d_myprogram));
};
would be neat, huh ?
László Szemere
@szemere
I like it! @alltilla what do you think?
Fabien Wernli
@faxm0dem
or maybe max-forks(5)
so that it spawns as many forks as there are HOST, with a limit of 5
maybe pre-fork(2)
you get the idea :-D
László Szemere
@szemere
Sorry for not answering, we are discussing your idea IRL with @alltilla . We have some interesting ideas. (Thinking about a more general solution, so not only program destination will benefit from it.)
László Szemere
@szemere
I opened a GitHub issue for the topic, where we can track the upcoming ideas.
Peter Czanik
@czanik
Yeah, sending logs over multiple TCP connections would also be cool
like the recent changes in the MongoDB destination brought significant speed-ups even without bulk mode
László Szemere
@szemere
The mentioned GitHub issue: syslog-ng/syslog-ng#3692 please feel free to comment on it, if I missed something.
Fabien Wernli
@faxm0dem
awesome, thanks !
Homesh
@Homeshjoshi_twitter
Hi @alltilla, do you have any update for me regarding my requirement for a "logstash read mode" kind of feature in syslog-ng? The workaround of using a script (I tested it) is not very efficient: for every file I have to check with syslog-ng-ctl, and for a file that has not yet been processed by syslog-ng (e.g. when the max-file limit is reached) syslog-ng-ctl returns no information, so I have to further check whether syslog-ng is running. This method is not efficient when you have thousands of logs to process. BTW, I am very happy with syslog-ng's performance compared to logstash: syslog-ng consumes almost nothing (40 MB of memory) compared to logstash's 1 GB, and since syslog-ng is written in C, CPU usage is normal.
Attila Szakacs
@alltilla
Hi @Homesh, thanks for reminding me! We had one meeting where we talked about this feature request, but we did not reach a conclusion. To make it more transparent, and to make sure that it does not get forgotten, I have created a GitHub issue for it: syslog-ng/syslog-ng#3695
Homesh
@Homeshjoshi_twitter
Thanks @alltilla
Homesh
@Homeshjoshi_twitter
I am working on another workaround. The idea is: since only syslog-ng will access the files, I am using

inotifywait -m -r -e close_write --format '%w%f' "${MONITORDIR}" | while read NEWFILE
do
    rm "${NEWFILE}"
done

This way, once a file has been read by syslog-ng, it gets deleted. Here I am assuming that the file has already been processed by syslog-ng (which is the case most of the time), and I see all the files reaching my Elasticsearch. However, I see entries like the following in the log:

[2021-06-08T11:32:14.259750] Follow-mode file source not found, deferring open; filename='/var/log/apache2/93200/20210608/20210608-1702/20210608-170214-YL9VPvA4UO9fAg2IkuODWQAAAAI'

Is there any issue due to this? I expect this to remove the file from monitoring (and hence keep the number of files well under the max-file count).
Second, I see empty directories (left behind by the above-mentioned script), e.g. /var/log/apache2/93200/20210608/20210608-1702. Will this cause an issue for syslog-ng (as it will still monitor these directories for possible new files)? I think I should also delete the empty directories immediately. Please suggest.
Homesh
@Homeshjoshi_twitter
Hi @alltilla can you please suggest.
László Várady
@MrAnno

Hi @Homeshjoshi_twitter,

If I understand it correctly, the inotifywait -m -r -e close_write script of yours will remove the file right after the process writing it closes its fd.

Fortunately, when syslog-ng opens a file for reading, it keeps that file open. This means that if the file is deleted before it is fully processed by syslog-ng, we can still finish reading it (the file will not really be deleted until the last FD is closed).

The only problematic scenario is when a file is deleted, but syslog-ng has not yet been able to open the file in that amount of time.
I think this is the case when you get the message you mentioned: Follow-mode file source not found

László Várady
@MrAnno
The wildcard-file() source monitors files continuously; we don't close those files after processing them. So I think you either have to add a few-second delay before removing the file (in the hope that syslog-ng can open it), or you should choose a different approach. A simple delay does not guarantee anything, but monitoring syslog-ng statistics (syslog-ng-ctl stats), or searching for internal debug logs that state we opened the file, might actually work.
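The few-second-delay idea mentioned above could be sketched as a small shell helper. This is only a sketch: the inotifywait wiring and the 5-second delay are illustrative assumptions, and (as noted) a delay guarantees nothing, it only narrows the race window.

```shell
# delete_later DELAY FILE -- remove FILE after DELAY seconds, in the
# background, giving syslog-ng a chance to open the file before it disappears
delete_later() {
    ( sleep "$1" && rm -f -- "$2" ) &
}

# Illustrative wiring with the inotifywait loop from the chat (not run here):
# inotifywait -m -r -e close_write --format '%w%f' "${MONITORDIR}" |
# while IFS= read -r NEWFILE; do
#     delete_later 5 "$NEWFILE"
# done
```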
Homesh
@Homeshjoshi_twitter
Hi @MrAnno, thanks for the reply. My logic is that once a file has been read by syslog-ng it should get deleted, since no new log is going to be written by the application to that file (due to concurrent logging). What I finally understand from your reply is that my script is not going to work. I cannot keep debug ON in production to check for files processed by syslog-ng. My only remaining option is to use Elasticsearch's Filebeat, as it can delete a file once it is processed, like logstash (Filebeat is lighter than logstash); Filebeat could then write the logs to a new single file, or forward them to syslog-ng on a syslog port (e.g. 514). I did not want to add a dependency on Filebeat or any other program, but now I have no option until syslog-ng offers the same feature as Filebeat or logstash. Logstash hangs after 7 to 8 days and consumes a lot of memory and CPU; that is why I want to replace logstash with syslog-ng. Thanks again for your reply.
László Várady
@MrAnno

@Homeshjoshi_twitter

What I finally understand from your reply is that my script is not going to work.

Adding a few-second delay might work, but it's an ugly hack, and nothing is guaranteed.

The better option (until we implement #3695) would be something that relies on the statistics of syslog-ng.
For example, it is possible to write a script that periodically checks the output of the sbin/syslog-ng-ctl query get 'src.file.*.processed' command and removes a file only when the processed counter of that file can be found in the list and is greater than 0.

This should work reliably only if you do NOT re-create the same file (path) after deleting it (syslog-ng file statistics are not reset after the file is removed).
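The stats-based approach described above might look something like the following shell sketch. The exact key layout of the syslog-ng-ctl query output is an assumption here; the parser only relies on the output being key=value lines that contain the file path and end in .processed=&lt;count&gt;, and MONITORDIR is a placeholder.

```shell
# processed_count STATS FILE -- print FILE's processed counter from the
# syslog-ng-ctl query output held in STATS (prints 0 if the file is not listed)
processed_count() {
    count=$(printf '%s\n' "$1" | grep -F -- "$2" | grep '\.processed=' |
            head -n 1 | sed 's/.*=//')
    printf '%s\n' "${count:-0}"
}

# Illustrative polling loop (not run here):
# while sleep 60; do
#     stats=$(sbin/syslog-ng-ctl query get 'src.file.*.processed')
#     find "$MONITORDIR" -type f | while IFS= read -r f; do
#         [ "$(processed_count "$stats" "$f")" -gt 0 ] && rm -f -- "$f"
#     done
# done
```

As noted above, this is only reliable if the same file path is never re-created after deletion, because the counters are not reset.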

Homesh
@Homeshjoshi_twitter
Interesting!! Let me try this and share my experience. Thanks a lot @MrAnno
László Várady
@MrAnno
My pleasure
arekm
@arekm:matrix.org [m]
Hi, I'm using log { source(s_sys); destination(log_net); }; to send all logs to a remote syslog server. But that makes "log { source(s_sys); destination(d_messages); flags(fallback); };" stop logging locally, due to the fallback flag. Is there a way to mark my log_net logging somehow, so it doesn't interfere with the fallback logging?
László Várady
@MrAnno

@arekm:matrix.org Hi,
The fallback flag is for processing messages that are not processed by any other normal log path. You cannot mark log_net as an "invisible" destination, but I'm pretty sure we can refactor/rephrase your configuration to achieve what you want; for example, using if-else blocks, embedded log paths, final flags, or just filters.

Can you share all of your log paths, where s_sys is used? Exactly what messages would you like to see arrive into d_messages?
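One hedged sketch of the embedded-log-path idea mentioned above (the names s_sys, log_net and d_messages come from the chat; the structure is an illustrative assumption, not a verified configuration):

```
log {
    source(s_sys);
    # embedded path: everything also goes to the remote server
    log { destination(log_net); };
    # normal local routing with its own filters would go here
    # fallback branch: catches messages no sibling branch matched
    log { destination(d_messages); flags(fallback); };
};
```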

arekm
@arekm:matrix.org [m]
The goal here is to use the default syslog-ng config and only put one file in syslog.d/ that will push all logs to a remote server, too. if/else/final won't play well with that assumption
(one file in... == one additional configuration file)
László Várady
@MrAnno
Oh I see. Is this your own custom default config? (I checked our default configurations (DEB, RPM, Arch Linux packaging) and could not find this fallback path.)
arekm
@arekm:matrix.org [m]
PLD distro config... which I could actually change in PLD itself, too. I'll see what the syslog-ng upstream config looks like
mshah618
@mshah618
Hi all,
I'm new here and want to explain an issue I'm facing with syslog-ng.
I'm communicating with a server through a client. The client is on syslog-ng v3.16.1 and the server is on syslog-ng v3.10.1.
Now I'm seeing incorrect count values for suppression messages in the remote logs.
Please note that the client is configured with suppress(600) and the server is configured with suppress(0).
Any suggestion is appreciated.
Balázs Barkó
@Barkodcz

Hi @mshah618 ,
what do you mean by incorrect count values for suppression messages?
Just to be clear: when you say remote logs, do you mean the server logs?

In my opinion, updating syslog-ng on your server may well solve the problem.

mshah618
@mshah618
Thanks, @Barkodcz for responding.
remote logs = server logs
when a message arrives at the client, it starts suppression and looks for repeated messages.
Once this streak of repeated messages is over, it logs "<MSG> repeated N times", and that log is also stored on the server.
Now the problem I'm facing is that the "repeated N times" value of N is sometimes different in the server logs.
I hope I explained the issue well.
So, is there any open issue for this currently? Do you know if this type of issue has occurred before and was fixed in a later release?
Balázs Barkó
@Barkodcz
Hi @mshah618 , can you send me your config please?
It could help for debugging and understanding the problem.
mshah618
@mshah618
Hi @Barkodcz , Can you share me your email id? I'll share those configs to you.
László Várady
@MrAnno

Hi @mshah618,

Now the problem I'm facing is that the "repeated N times" value of N is sometimes different in the server logs.

Could you elaborate on this a bit more? Did you find some inconsistencies between the client and server logs when using the suppress option?
Or do you expect a fixed amount of repetition within 600 seconds?

When a client is configured with the suppress(T) option, consecutive repetitive messages will be sent only once within that T timeframe. Please note that any non-identical message between 2 identical messages will reset the suppress counter.
After T seconds or when a non-identical message is found, the following summary will be sent to the server:
Last message 'msg' repeated N times, suppressed by syslog-ng on ...

Can you share me your email id? I'll share those configs to you.

Could you share the relevant parts of your configuration publicly? More people can help you that way. :) You can remove sensitive information from the config.
If that's not possible, you can, of course, send your config to any of us in a private message.

mshah618
@mshah618

Hi @mshah618,

Now the problem I'm facing is that the "repeated N times" value of N is sometimes different in the server logs.

Could you elaborate on this a bit more? Did you find some inconsistencies between the client and server logs when using the suppress option?
Or do you expect a fixed amount of repetition within 600 seconds?

I see inconsistencies between the client and server logs when using the suppress option.

László Várady
@MrAnno
What exactly do you see? When suppressing logs on the client side, the server receives the summary message itself (Last message 'msg' repeated N times), so messages are not recounted on the server side; you should see the exact same message there.
mshah618
@mshah618
yes, but I see different counts sometimes
not only on the server side; sometimes I see a wrong count on the client side and the correct count on the server side

Sometimes the server log contains the wrong number of repeated messages, sometimes the client log does.

Sometimes the client and server logs show the correct, equal value.

László Várady
@MrAnno
How do you validate which side has the "correct count"?
Do you have 2 different destinations (a local file and a network one?) configured with the suppress() option on the client side?
mshah618
@mshah618
Yes
László Várady
@MrAnno
Every destination has its own suppress timer and counter, so the two will not always produce the same output.
For example, if one of the 2 destinations receives messages from other sources as well, the suppress output will be completely different.