Balazs Scheidler
@bazsi
hope this helps.
arteta22000
@arteta22000
Thank you, it seems complex to me. I preferred the solution of aggregating cid/mid/dcid and then aggregating the whole.
Currently we do this with a Python script and Redis, but we have performance problems.
But can't I do it with grouping-by? With several subsequent grouping-by()s?
Balazs Scheidler
@bazsi
you are right. that could work.
I didn't think of that.
arteta22000
@arteta22000
How do I do this? I can't forward the aggregation from one grouping-by to another.
Balazs Scheidler
@bazsi
if you performed grouping-by()s separately: 1) one context for the CID stuff (and the Start CID/MID message), 2) one for the MID-based messages (plus the Start CID/MID message), then both contexts would have both MID/CID information. Then a 2nd layer of grouping-by() could correlate the aggregation of the 1st and the 2nd.
exactly.
good idea on using two. I was just too focused on doing it in one step.
arteta22000
@arteta22000
But how do I send the aggregation from one grouping-by to another? I have the feeling that grouping-by works on the MESSAGE macro by default?
example:

parser groupingby {
    grouping-by(
        key("${cid_id}")
        scope("HOST")
        aggregate(
            value("event.aggregate" "ok")
            value("MESSAGE" "Session completed >> ${attachment} cid_id:${cid_id} info1: ${info1}")
            inherit-mode("context")
        )
        timeout(10)
        inject-mode("pass-through")
    );
};

parser groupingby2 {
    grouping-by(
        key("${mid_id}")
        scope("HOST")
        trigger("${.classifier.rule_id}" eq "regex_finished_status")
        having("${finished_status}" eq "done")
        aggregate(
            value("event.aggregate" "ok")
            value("MESSAGE" "Session completed >> ${attachment} mid_id:${mid_id} cip_icid: ${cip_icid} ironport_interface_name: ${ironport_interface_name} cid_id:${cid_id} info1: ${info1}")
            inherit-mode("context")
        )
        timeout(10)
        inject-mode("pass-through")
    );
};

arteta22000
@arteta22000
With inject-mode("internal") in the first grouping-by?
Balazs Scheidler
@bazsi
inject-mode(internal) would generate the message as if it was coming from the internal() source.
I think inject-mode(pass-through) is better; that way the message is generated as if it was coming from grouping-by() itself.
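To illustrate the difference, a minimal sketch (the key and value names here are hypothetical, borrowed from the earlier example):

```
parser p_cid_aggr {
    grouping-by(
        key("${cid_id}")
        aggregate(value("event.aggregate" "ok"))
        timeout(10)
        # pass-through: the aggregate continues down this same log path,
        # so a later grouping-by() can see and correlate it
        inject-mode("pass-through")
        # internal: the aggregate would instead appear via the internal() source
        # inject-mode("internal")
    );
};
```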
arteta22000
@arteta22000

I'm going to do some tests but I'm a bit lost :)

My first grouping-by will generate a macro (that I configure in the value() parameter) with the aggregation (example: mid:xxx cid:xxx source_smtp:xxxx).

How do I forward this information (cid:xxx source_smtp:xxxx) to the next grouping-by so that it aggregates it with the central mid?
Balazs Scheidler
@bazsi
just connect the two grouping-by()s in a log path, e.g.
log { source(whatever); parser { grouping-by(FIRST...); grouping-by(SECOND...); }; ... };
you can also define it as a top-level parser block and then reference it:
parser p_ironport {
    grouping-by(FIRST...);
    grouping-by(SECOND...);
};

log { source(whatever); parser(p_ironport); ... };
any element can be chained after another one; each is processed in order.
NOTE: in the first example I used braces to indicate that the parsers are defined in-line, within the log statement. In the 2nd example, I used parentheses, which reference a parser that was declared earlier.
so parser { in-line-parser-expression }; or parser(namedparserblock);
arteta22000
@arteta22000
I tried, and I got 2 aggregated logs in the output. Do I need to put value("MESSAGE") in the first grouping-by block?
Will the second grouping-by see the aggregated message via the MESSAGE macro?
Balazs Scheidler
@bazsi
I don't really understand the question. I'd try something like this (not syntax checked, just off the top of my head):

log {
    source(whatever);
    parser { db-parser(file('ironport.xml')); };
    log {
        filter { <match only ironport logs...>; };

        if ('${CID}' ne '') {
            parser { grouping-by(<CID aggregation options> inject-mode(pass-through)); };
        };
        if ('${MID}' ne '') {
            parser { grouping-by(<MID aggregation options> inject-mode(pass-through)); };
        };
        if (<match aggregated message from either CID or MID based aggregation>) {
            parser { grouping-by(<MID + CID result aggregation> inject-mode(pass-through)); };
        };
        filter { <match only aggregated logs emitted by the last grouping-by()>; };
    };
    destination(whatever);
};
Balazs Scheidler
@bazsi
the messages flow through the sequence one by one.
1) The source() statement in the front pulls in any messages received by the specific source.
2) the db-parser() right next would extract ironport name-value pairs from the messages
3) the embedded log {} statement just encapsulates a set of operations
4) the first filter embedded in the log {} statement would drop anything but ironport messages (I am not sure if the db-parser() field extraction sets a specific field or not, but that's certainly doable based on some criteria)
5) the first if() checks if $CID is set, if it is, a grouping-by() parser is executed, which should in turn emit an aggregated result of all messages where CID was set.
6) the second if() checks if $MID is set, similarly to the first, it would need to aggregate the $MID related messages into a result
7) the last if() would match messages that either the first or the second grouping-by() emitted and runs a 3rd grouping-by()
8) this third grouping-by() is our end result
9) the filter as the last statement within our log {} would drop anything but the aggregated result
10) the destination only receives the output of the entire log {} statement, e.g. the result of the 3rd aggregation
Balazs Scheidler
@bazsi
now as I think of it, the sample can even be improved so that the MID-based grouping-by (i.e. the 2nd aggregation) does not receive the output of the first one. The two aggregations would each get a copy of the incoming data, and their combined output would be fed through the 3rd aggregation.
but I'll leave that to you; it is just a performance improvement, functionally it should work just as shown.
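That parallel layout could be sketched with a junction, where each channel gets its own copy of every incoming message (placeholders as in the earlier example, not syntax checked):

```
log {
    source(whatever);
    junction {
        # each channel receives a copy of every message
        channel { parser { grouping-by(<CID aggregation options> inject-mode(pass-through)); }; };
        channel { parser { grouping-by(<MID aggregation options> inject-mode(pass-through)); }; };
    };
    # correlate the aggregates emitted by either branch
    parser { grouping-by(<MID + CID result aggregation> inject-mode(pass-through)); };
    destination(whatever);
};
```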
Russell Fulton
@rful011

I am having problems getting shared libraries to load. I am running rsyslog (corporate standard, with a config I can't change) and syslog-ng (for my stuff) on an Ubuntu 20.04 system.
I installed syslog-ng from the "unofficial" packages into /usr/local and fiddled with the systemd config to get the libraries loaded, and it was all working.

now something has changed and I get the error:

Jan 18 08:01:45 secmgrprd01 syslog-ng[1831033]: Error parsing source statement, source plugin network not found in /etc/syslog-ng/conf.d/eset.conf:2:9-2:16:
Jan 18 08:01:45 secmgrprd01 syslog-ng[1831033]: 1       source s_eset {
Jan 18 08:01:45 secmgrprd01 syslog-ng[1831033]: 2----->         network(transport("tcp") port(5514) keep-alive(yes) max_connections(2));
Jan 18 08:01:45 secmgrprd01 syslog-ng[1831033]: 2----->         ^^^^^^^
Jan 18 08:01:45 secmgrprd01 syslog-ng[1831033]: 3           };

I assume the issue is that syslog-ng is not finding the library with the source network plugin.
I have these vars set in /etc/default/syslog-ng

SYSLOGNG_OPTS="--control /var/lib/syslog-sec/syslog-ng.ctl --module-path /usr/local/lib/syslog-ng/3.31 --persist-file /var/lib/syslog-sec/syslog-ng.persist --pidfile /var/lib/syslog-sec/syslog-ng.pid"
LD_LIBRARY_PATH="/usr/local/lib/syslog-ng"

Any thoughts on what might be wrong?

Balazs Scheidler
@bazsi
Does it run in an interactive shell? What does syslog-ng --module-registry tell you with that module-path argument?
Russell Fulton
@rful011
yes, done that and it produces lots of stuff -- what should I look for?
Ah! this may be it: Error opening plugin module; module='afsocket', error='libnet.so.1: cannot open shared object file: No such file or directory' The affile module provides file source & destination support for syslog-ng.
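One way to spot all such unresolved libraries at once is to run ldd over the module directory. A sketch, assuming the module path from the SYSLOGNG_OPTS above (MODDIR is an assumption; point it at whatever you pass via --module-path):

```shell
#!/bin/sh
# List syslog-ng modules whose shared-library dependencies cannot be resolved.
# MODDIR is an assumption: set it to the directory given to --module-path.
MODDIR="${MODDIR:-/usr/local/lib/syslog-ng/3.31}"
for so in "$MODDIR"/*.so; do
    [ -e "$so" ] || continue              # glob matched nothing, skip
    if ldd "$so" 2>/dev/null | grep -q 'not found'; then
        echo "unresolved deps in: $so"
        ldd "$so" | grep 'not found'
    fi
done
```

Each "not found" line names a library the dynamic loader cannot resolve, e.g. libnet.so.1 for the afsocket module here.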
Russell Fulton
@rful011
BTW I have figured out what "changed".
Russell Fulton
@rful011
The system was rebooted and immediately after that it started to complain that it could not find libivykis.so.0. I fixed that about a week ago (can't remember how, and history is not helpful). Then it started loading the config and barfing on the network plugin. That was a week ago and I have just got back to it.
Balazs Scheidler
@bazsi
libnet is needed for spoof-source support, it is not part of ivykis
Russell Fulton
@rful011
solved this one -- libnet was missing. The tricky bit was figuring out that the package is called libnet1.
I think what happened was that in my attempts to get syslog-ng and rsyslog to coexist on an Ubuntu system I had installed and uninstalled both at least twice. I ended up installing the package from openSUSE and moving stuff around in the install directories to stop Ubuntu deleting them.
Russell Fulton
@rful011
I got it working, but then our "auto patcher" ran and the system rebooted. I suspect something in that process spotted that libnet had been installed as a dependency of a package that was no longer installed, and removed it.
This morning I ended up going back to scratch and building a package from source that installed into /usr/local/bin (this time with systemd support, which is what my original one lacked). That runs fine as long as I include /usr/local/lib in LD_LIBRARY_PATH.
Russell Fulton
@rful011
Oh dear. My locally compiled version starts fine but fails to pass any data to a program destination. The program starts and reads STDIN but never gets anything. The syslog daemon eventually terminates it with a timeout error -- presumably on the write to the pipe?
Any ideas on debugging this?
Balazs Scheidler
@bazsi
Can you show the error message? I haven't seen anything like this, unless it's some kind of dependency issue. BTW, have you tried compiling it using our Docker-based build infrastructure? Or even getting the binary from GitHub? The packaging files can be customized if the ones we have don't match your use case. Or, if you want, I can help by just producing the binaries in a tarball.
Russell Fulton
@rful011
Thanks Balazs! I don't have any experience with docker :(
here is the log from syslog-ng:
Jan 22 08:29:04 secmgrprd01 syslog-ng[2218448]: syslog-ng starting up; version='3.35.1'
Jan 22 08:30:34 secmgrprd01 syslog-ng[2218448]: syslog-ng shutting down; version='3.35.1'
Jan 22 08:30:34 secmgrprd01 syslog-ng[2218448]: Child program exited, restarting; cmdline='/usr/local/tools/dev/siem_logging/bin/syslog-ng-es.rb -vv --debug source --user sensors --source loghost', status='15'
Russell Fulton
@rful011
I will come back to this with more info -- got to run right now
Ashish Tiwari
@Ashish-100-tiwari
hello! I am Ashish Tiwari, a computer science undergrad. I have just entered my second year of college. I am new to open-source contributions but I am familiar with C, C++, Python, JavaScript, HTML, CSS and SQL.
Can anyone guide me on how to get started?
Regards
Ashish
Balazs Scheidler
@bazsi
@Ashish-100-tiwari try cloning the source code from GitHub and prepare a working environment by building the code yourself. The easiest way is probably using dbld, a Docker-based build system.
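Roughly, those steps look like this (a sketch; the dbld targets may differ between releases, so check the dbld/ directory's README in your checkout):

```shell
# clone the sources and enter the dbld build container
git clone https://github.com/syslog-ng/syslog-ng.git
cd syslog-ng
./dbld/rules shell    # a container shell with the build dependencies installed
```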