syslog-ng is an enhanced log daemon, supporting a wide range of input and output methods: syslog, unstructured text, queueing, SQL & NoSQL.
{
  "took" : 3,
  "errors" : true,
  "items" : [
    {
      "index" : {
        "_index" : "syslog-2021.05.03",
        "_type" : "_doc",
        "_id" : "3640ac93@0000000041530c64",
        "status" : 400,
        "error" : {
          "type" : "validation_exception",
          "reason" : "Validation Failed: 1: this action would add [24] total shards, but this cluster currently has [2999]/[3000] maximum shards open;"
        }
      }
    }
  ]
}
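The error above is Elasticsearch rejecting the bulk request because the cluster-wide shard budget (cluster.max_shards_per_node times the number of data nodes) is exhausted. The usual fix is to use fewer or smaller indices, but the limit can also be raised. A minimal sketch, assuming Elasticsearch 7.x reachable at http://localhost:9200 without authentication and the third-party requests library:
# Sketch only: raise the cluster-wide shard budget so bulk indexing stops
# failing with "maximum shards open". Adjust ES_URL and the limit to taste.
import requests

ES_URL = "http://localhost:9200"   # assumption: local, unauthenticated cluster

resp = requests.put(
    f"{ES_URL}/_cluster/settings",
    json={"persistent": {"cluster.max_shards_per_node": 2000}},  # default is 1000
    timeout=10,
)
resp.raise_for_status()
print(resp.json())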
I am planning to replace Logstash with syslog-ng. I am looking for a syslog-ng feature similar to one I currently use in Logstash: the "file input" plugin has a mode called "read."
In this mode the plugin treats each file as if it is content complete, that is, a finite stream of lines, and now EOF is significant. A last delimiter is not needed because EOF means the accumulated characters can be emitted as a line. Further, EOF here means that the file can be closed and put in the "unwatched" state, which automatically frees up space in the active window. This mode also makes it possible to process compressed files, as they are content complete. Read mode also allows an action to take place after the file has been processed completely (e.g. delete); see https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#plugins-inputs-file-file_completed_action
I have gone through https://marc.info/?l=syslog-ng&m=154148918830151&w=2, where a similar topic is discussed and the config below is proposed. (I don't know whether the option exists, but I feel it would work for my use case and would also help syslog-ng keep its persist file small.)
source s_file_clearup { wildcard-file ( base-dir("/tmp/") filename-pattern("*") remove-on-EOF(yes) ); };
My requirement comes from the way ModSecurity writes its audit logs:
Concurrent Audit Log: Initially, ModSecurity supported only the serial audit logging format. Concurrent logging was introduced to address two issues:
Serial logging is only adequate for moderate use, because only one audit log entry can be written at any one time. Serial logging is fast (logs are written at the end of every transaction, all in one go) but it does not scale well. In the extreme, a web server performing full transaction logging practically processes only one request at any one time.
Real-time audit log centralization requires individual audit log entries to be deleted once they are dealt with, which is impossible to do when all alerts are stored in a single file.
Concurrent audit logging changes the operation of ModSecurity in two aspects. To observe the changes, switch to concurrent logging without activating mlogc by changing SecAuditLogType to Concurrent (don’t forget to restart Apache). First, as expected, each audit log entry will be stored in a separate file. The files will not be created directly in the folder specified by SecAuditLogStorageDir, but in an elaborate structure of subfolders whose names will be constructed from the current date and time:
./20090822
./20090822/20090822-1324
./20090822/20090822-1324/20090822-132420-SojdH8AAQEAAAugAQAAAAAA
./20090822/20090822-1324/20090822-132420-SojdH8AAQEAAAugAQEAAAAA
The purpose of the scheme is to prevent too many files from being created within one directory; many filesystems have limits that can be relatively quickly reached on a busy web server. The first two parts in each filename are based on time (YYYYMMDD and HHMMSS). The third parameter is the unique transaction ID.
I hope this is a valid requirement, and another vendor already provides a configuration option for it. I just wanted to know how we would handle this with syslog-ng.
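For reference, the audit-log file names in the scheme above encode the date, the time and the transaction ID; a short sketch of how such a SecAuditLogStorageDir tree could be enumerated and split apart (a hypothetical helper, not part of ModSecurity or syslog-ng):
# Illustrative sketch: walk a SecAuditLogStorageDir tree and split the
# concurrent audit-log file names (YYYYMMDD-HHMMSS-<transaction id>) into
# their timestamp and transaction-id parts.
import os
from datetime import datetime

def iter_audit_entries(storage_dir):
    for root, _dirs, files in os.walk(storage_dir):
        for name in files:
            parts = name.split("-", 2)
            if len(parts) != 3:
                continue                      # skip anything else in the tree
            date_part, time_part, transaction_id = parts
            try:
                ts = datetime.strptime(date_part + time_part, "%Y%m%d%H%M%S")
            except ValueError:
                continue
            yield os.path.join(root, name), ts, transaction_id

# for path, ts, tx_id in iter_audit_entries("/var/log/apache2/audit"):
#     print(ts.isoformat(), tx_id, path)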
Hi @Homeshjoshi_twitter !
syslog-ng does not have an option to add a hook that runs when a file is finished.
We only emit debug logs when reaching EOF:
[2021-05-04T13:08:34.183298] End of file, following file; follow_filename='/tmp/alltilla1'
or after the user deletes the file:
[2021-05-04T13:08:34.480939] Monitored file is deleted; filename='/tmp/alltilla1'
[2021-05-04T13:08:34.480993] File status changed; EOF='1', DELETED='1', Filename='/tmp/alltilla1'
[2021-05-04T13:08:34.481007] Stop following file, because of deleted and eof; filename='/tmp/alltilla1'
[2021-05-04T13:08:34.481080] Closing log transport fd; fd='13'
[2021-05-04T13:08:34.481262] File is removed from the file list; Filename='/tmp/alltilla1'
As mentioned in the email thread you linked, you can play with syslog-ng-ctl and a cronjob, or a clever script that checks whether syslog-ng is alive and, for example, sends it the logs through a pipe.
Something like exec-on-eof() could be implemented, where you could provide a shell command, e.g. "rm -f ${FILENAME}", but I am not sure it is the right way to solve this.
I will make a PoC for it, and discuss it with other developers Thursday.
Cheers,
Attila
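The cronjob/script idea above could look something like the following minimal sketch. Assumptions (not guarantees from syslog-ng): syslog-ng-ctl is on PATH and exits non-zero when the daemon is unreachable, and any audit file left untouched for GRACE_SECONDS has already been read to EOF by the wildcard-file() source.
# Cron-style cleanup sketch for the "delete after processing" requirement.
import os
import subprocess
import time

BASE_DIR = "/var/log/apache2/50/ssl"   # SecAuditLogStorageDir used in this thread
GRACE_SECONDS = 15 * 60                # assumption: "old enough" means fully read

def syslog_ng_alive():
    # syslog-ng-ctl stats talks to the control socket; failure means no daemon.
    return subprocess.call(
        ["syslog-ng-ctl", "stats"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ) == 0

def cleanup():
    if not syslog_ng_alive():
        return  # never delete anything while syslog-ng is not running
    now = time.time()
    for root, _dirs, files in os.walk(BASE_DIR):
        for name in files:
            path = os.path.join(root, name)
            if now - os.path.getmtime(path) > GRACE_SECONDS:
                os.remove(path)

if __name__ == "__main__":
    cleanup()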
My log file
{"transaction":{"time":"03/May/2021:00:39:27 +0530","transaction_id":"YI745wh9F2ssHXPZeiHwjQAAAAQ","remote_address":"34.78.120.99","remote_port":41522,"local_address":"1.1.1.1","local_port":443},"request":{"request_line":"GET / HTTP/1.1","headers":{"Host":"1.1.1.1","User-Agent":"python-requests/2.25.1","Accept-Encoding":"gzip, deflate","Accept":"/","Connection":"keep-alive","x-datadog-trace-id":"17055120344034474476","x-datadog-parent-id":"11345974897864292098","x-datadog-sampling-priority":"0"}},"response":{"protocol":"HTTP/1.1","status":403,"headers":{"X-Content-Type-Options":"nosniff","Status":"403 Forbidden","Connection":"close","Content-Length":"1344","Content-Type":"text/html; charset=UTF-8"}},"audit_data":{"messages":["Warning. Match of \"ipMatch 127.0.0.1,::1\" against \"REMOTE_ADDR\" required. [file \"/usr/share/sw/rules/rule.conf\"] [line \"89\"] [id \"1234\"] [rev \"4\"] [msg \"Others\"] [severity \"NOTICE\"] [tag \"Suspicious activity detected - Host header is a numeric IP address\"]","Access denied with code 403 (phase 2). Pattern match \"python-requests/\" at REQUEST_HEADERS:User-Agent. [file \"/usr/share/sw/rules/20_swuseragents.conf\"] [line \"218\"] [id \"1234\"] [rev \"4\"] [msg \"Suspicious User-Agent\"] [severity \"CRITICAL\"] [tag \"Suspicious Unusual User Agent (python-requests). Disable this rule if you use python-requests/. \"]"],"error_messages":["[file \"apache2_util.c\"] [line 273] [level 3] [client 34.78.120.99] ModSecurity: Warning. Match of \"ipMatch 127.0.0.1,::1\" against \"REMOTE_ADDR\" required. [file \"/usr/share/sw/rules/rule.conf\"] [line \"89\"] [id \"1234\"] [rev \"4\"] [msg \"Others\"] [severity \"NOTICE\"] [tag \"Suspicious activity detected - Host header is a numeric IP address\"] [hostname \"1.1.1.1\"] [uri \"/\"] [unique_id \"YI745wh9F2ssHXPZeiHwjQAAAAQ\"]","[file \"apache2_util.c\"] [line 273] [level 3] [client 34.78.120.99] ModSecurity: Access denied with code 403 (phase 2). Pattern match \"python-requests/\" at REQUEST_HEADERS:User-Agent. [file \"/usr/share/sw/rules/20_swuseragents.conf\"] [line \"218\"] [id \"1234\"] [rev \"4\"] [msg \"Suspicious User-Agent\"] [severity \"CRITICAL\"] [tag \"Suspicious Unusual User Agent (python-requests). Disable this rule if you use python-requests/. \"] [hostname \"1.1.1.1\"] [uri \"/\"] [unique_id \"YI745wh9F2ssHXPZeiHwjQAAAAQ\"]"],"action":{"intercepted":true,"phase":2,"message":"Pattern match \"python-requests/\" at REQUEST_HEADERS:User-Agent."},"handler":"application/x-httpd-php","stopwatch":{"p1":523514,"p2":1612,"p3":0,"p4":0,"p5":155,"sr":460,"sw":96,"l":0,"gc":0},"producer":["ModSecurity for Apache/2.9.2 (http://www.modsecurity.org/)","201903261539"],"server":"Apache/2.4.29 (Ubuntu)","engine_mode":"ENABLED"}}
@version: 3.29
@include "scl.conf"

source s_apache {
    wildcard-file(
        base-dir("/var/log/apache2/50/ssl/20210503/20210503-0039")
        filename-pattern("*")
        recursive(yes)
        follow-freq(1)
        flags(no-parse)
        log-fetch-limit(100)
    );
};

destination d_json {
    file(
        "/var/log/test.json"
        template("$(format-json --scope nv_pairs --pair uniqueId=\"${json.transaction.transaction_id}\")\n")
    );
};

parser p_json {
    json-parser(prefix("json."));
};

log {
    source(s_apache);
    parser(p_json);
    destination(d_json);
};
kv-parser cannot do this unfortunately, as it does not support " " for value-separator, but a python parser can make it work:
python {
import re

class ErrorMessageParser(object):
    def init(self, options):
        self.prefix = options["prefix"] if "prefix" in options.keys() else ""
        return True

    def parse(self, msg):
        i = 0
        while True:
            error_message = msg["json.audit_data.error_messages[{}]".format(i)]
            if not error_message:
                return True
            kv_pairs = re.findall(b"\[[^ ]+ [^\]]+\]", error_message)
            for kv_pair in kv_pairs:
                delim_index = kv_pair.index(b" ")
                key = kv_pair[1:delim_index].decode()
                value = kv_pair[delim_index+2:-2]
                msg["{}error_message[{}].{}".format(self.prefix, i, key)] = value
            i += 1
};
parser p_json {
    json-parser(prefix("json."));
    python(class("ErrorMessageParser") options("prefix", "python."));
};
Although I saw that there are multiple key-value pairs with the same key, like file or line; the python code should handle them somehow, but in its current form the last one wins.
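As noted above, keys such as file or line can repeat within one error message and the last occurrence overwrites the earlier ones. One possible tweak, a sketch only and not part of the parser above, is to append a per-key counter when a key repeats:
# Sketch: keep repeated keys (e.g. two [file "..."] tags) by suffixing a
# running counter instead of overwriting. Intended as a drop-in replacement
# for the direct msg[...] assignment in the parse() loop above; the names
# here are illustrative.
from collections import defaultdict

def store_kv_pairs(msg, prefix, i, pairs):
    """pairs: list of (key, value) tuples parsed from one error message."""
    seen = defaultdict(int)
    for key, value in pairs:
        seen[key] += 1
        name = key if seen[key] == 1 else "{}_{}".format(key, seen[key])
        msg["{}error_message[{}].{}".format(prefix, i, name)] = value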
@alltilla Thanks a lot, your code worked for me. I was trying with the config below and I got the error for the value separator.
@version: 3.29
@include "scl.conf"
source s_apache {
wildcard-file(
base-dir("/var/log/apache2/50/ssl/20210503/20210503-0039")
filename-pattern("*")
recursive(yes)
follow-freq(1)
flags(no-parse)
log-fetch-limit(100) );
};
destination d_json {
file(
"/var/log/test.json"
template("$(format-json --scope nv_pairs --pair uniqueId=\"${json.transaction.transaction_id}\")\n")
);
};
parser p_json {
json-parser (prefix("json."));
};
parser p_kv {
kv-parser (prefix("kv.") pair-separator("[]") value-separator(" ") );
template("${json.audit_data.error_messages[1]}");
};
log {
source(s_apache);
parser(p_json);
if (match(".*" value("json.audit_data.error_messages[1]")))
{
parser(p_kv);
};
destination(d_json);
};
Error parsing block argument, syntax error, unexpected end of file, expecting LL_BLOCK in /etc/syslog-ng/syslog-ng.conf:78:9-96:2:
73
74 destination d_elasticsearch_http {
75 elasticsearch-http(
76 index("fortigate-${YEAR}.${MONTH}.${DAY}")
77 type("fortigate")
78----> custom-id(\"${json.transaction.transaction_id}\")
78----> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
79----> url("http://10.139.196.225:9200/_bulk")
80----> template("$(format-json --scope rfc5424 --key geoip2 --scope dot-nv-pairs
81----> --rekey . --shift 1 --scope nv-pairs --exclude json.audit_data*
82----> --exclude DATE --key ISODATE @timestamp=${ISODATE})")
83----> );
Error parsing destination statement, syntax error, unexpected LL_ERROR, expecting '}' in /etc/syslog-ng/syslog-ng.conf:75:5-75:23:
70 );
71 };
72
73
74 destination d_elasticsearch_http {
75----> elasticsearch-http(
75----> ^^^^^^^^^^^^^^^^^^
76 index("fortigate-${YEAR}.${MONTH}.${DAY}")
77 type("fortigate")
78 custom-id(\"${json.transaction.transaction_id}\")
79 url("http://10.139.196.225:9200/_bulk")
80 template("$(format-json --scope rfc5424 --key geoip2* --scope dot-nv-pairs
16:02:39.337209 IP 130.216.191.116.59040 > 130.216.4.139.5514: Flags [P.], seq 0:437, ack 1, win 513, length 437
0x0000: 4500 01dd 251e 4000 7f06 0b4d 82d8 bf74 E...%.@....M...t
0x0010: 82d8 048b e6a0 158a 07aa 6dac 94be 7ade ..........m...z.
0x0020: 5018 0201 0c29 0000 3c31 343e 3120 3230 P....)..<14>1.20
0x0030: 3231 2d30 352d 3036 5430 343a 3032 3a33 21-05-06T04:02:3
0x0040: 382e 3939 345a 206e 6f64 6170 7070 7264 8.994Z.nodappprd
0x0050: 3031 2045 5241 5365 7276 6572 2033 3837 01.ERAServer.387
0x0060: 3220 2d20 2d20 efbb bf7b 2265 7665 6e74 2.-.-....{"event
0x0070: 5f74 7970 6522 3a22 4175 6469 745f 4576 _type":"Audit_Ev
0x0080: 656e 7422 2c22 6970 7634 223a 2231 3330 ent","ipv4":"130
0x0090: 2e32 3136 2e31 3931 2e31 3136 222c 2268 .216.191.116","h
0x00a0: 6f73 746e 616d 6522 3a22 6e6f 6461 7070 ostname":"nodapp
0x00b0: 7072 6430 3122 2c22 736f 7572 6365 5f75 prd01","source_u
0x00c0: 7569 6422 3a22 6466 3130 6366 3965 2d66 uid":"df10cf9e-f
0x00d0: 6233 642d 3461 3634 2d61 3031 362d 3266 b3d-4a64-a016-2f
0x00e0: 6561 6465 3537 3238 3661 222c 226f 6363 eade57286a","occ
0x00f0: 7572 6564 223a 2230 362d 4d61 792d 3230 ured":"06-May-20
0x0100: 3231 2030 343a 3032 3a33 3822 2c22 7365 21.04:02:38","se
0x0110: 7665 7269 7479 223a 2249 6e66 6f72 6d61 verity":"Informa
0x0120: 7469 6f6e 222c 2264 6f6d 6169 6e22 3a22 tion","domain":"
0x0130: 446f 6d61 696e 2067 726f 7570 222c 2261 Domain.group","a
0x0140: 6374 696f 6e22 3a22 4c6f 676f 7574 222c ction":"Logout",
0x0150: 2274 6172 6765 7422 3a22 6561 6230 3233 "target":"eab023
0x0160: 3161 2d37 6364 392d 3437 6334 2d61 6637 1a-7cd9-47c4-af7
0x0170: 332d 6162 3836 3765 6263 6230 3763 222c 3-ab867ebcb07c",
0x0180: 2264 6574 6169 6c22 3a22 4c6f 6767 696e "detail":"Loggin
0x0190: 6720 6f75 7420 646f 6d61 696e 2075 7365 g.out.domain.use
0x01a0: 7220 2775 6f61 5c5c 6767 6c65 3030 3427 r.'uoa\\ggle004'
0x01b0: 2e22 2c22 7573 6572 223a 2275 6f61 5c5c .","user":"uoa\\
0x01c0: 6767 6c65 3030 3422 2c22 7265 7375 6c74 ggle004","result
0x01d0: 223a 2253 7563 6365 7373 227d 0a ":"Success"}.
source s_eset {
    syslog( transport("tcp") port(5514) keep-alive(yes));
};
1620341573 2021 May 7 10:52:53 +12:00 nodappprd01.uoa.auckland.ac.nz: 1: 2021-05-06T22:52:53.626Z nodappprd01 ERAServer 3872 - - {"event_type":"Audit_Event","ipv4":"130.216.191.116","hostname":"nodappprd01","source_uuid":"df10cf9e-fb3d-4a64-a016-2feade57286a","occured":"06-May-2021 22:52:53","severity":"Information","domain":"Domain group","action":"Login attempt","target":"3c4cd80b-39a3-4f56-9d85-f971197e5b46","detail":"Authenticating domain user 'uoa\\rful011'.","user":"","result":"Success"}
Another query: I use the geoip parser but sometimes the IP is missing and this generates an error in the logs. Is there a way to make the parse conditional on there being something to process?
You can do something like this:
log {
    source(s_src);
    if ( "${YOUR_IP_MACRO}" ne "" ) {
        parser(p_geoip);
    };
    destination(d_dest);
};
@alltilla currently I am getting the output as below
"python": {
"error_message[1]": {
"uri": "/",
"unique_id": "YI745wh9F2ssHXPZeiHwjQAAAAQ",
"tag": "Suspicious Unusual User Agent (python-requests). Disable this rule if you use python-requests/. ",
"severity": "CRITICAL",
"rev": "4",
"msg": "Suspicious User-Agent",
"line": "218",
"level": "",
"id": "332039",
"hostname": “1.1.1.1”,
"file": "/usr/share/sw/rules/20_sw_useragents.conf",
"client": "4.78.120.9"
},
"error_message[0]": {
"uri": "/",
"unique_id": "YI745wh9F2ssHXPZeiHwjQAAAAQ",
"tag": "Suspicious activity detected - Host header is a numeric IP address",
"severity": "NOTICE",
"rev": "4",
"msg": "Others",
"line": "89",
"level": "",
"id": "331032",
"hostname": “1.1.1.1”,
"file": "/usr/share/sw/rules/00_sw_zz_strict.conf",
"client": "4.78.120.9"
}
},
Can you make it an array of objects, like below?
"python":
"error_message": [
{
"uri": "/",
"unique_id": "YI745wh9F2ssHXPZeiHwjQAAAAQ",
"tag": "Suspicious Unusual User Agent (python-requests). Disable this rule if you use python-requests/. ",
"severity": "CRITICAL",
"rev": "4",
"msg": "Suspicious User-Agent",
"line": "218",
"level": "",
"id": "332039",
"hostname": “1.1.1.1”,
"file": "/usr/share/sw/rules/20_sw_useragents.conf",
"client": "4.78.120.9"
},
{
"uri": "/",
"unique_id": "YI745wh9F2ssHXPZeiHwjQAAAAQ",
"tag": "Suspicious activity detected - Host header is a numeric IP address",
"severity": "NOTICE",
"rev": "4",
"msg": "Others",
"line": "89",
"level": "",
"id": "331032",
"hostname": “1.1.1.1”,
"file": "/usr/share/sw/rules/00_sw_zz_strict.conf",
"client": "4.78.120.9"
}
]
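syslog-ng's name-value pairs are flat, so $(format-json) cannot easily turn the error_message[N].* names above into a true nested array. One possible approach, a sketch only, under the assumption that a single field holding the serialized array is acceptable and with a made-up field name such as python.error_messages_json, is to collect the parsed messages into a Python list and store it once as a JSON string:
# Sketch: variant of the ErrorMessageParser above that gathers every error
# message's key-value pairs into a list of dicts and stores the whole list as
# one serialized JSON field. Value extraction mirrors the original parser
# (assumes quoted values), so its quirks are preserved.
import json
import re

class ErrorMessageArrayParser(object):
    def init(self, options):
        self.prefix = options.get("prefix", "")
        return True

    def parse(self, msg):
        messages = []
        i = 0
        while True:
            error_message = msg["json.audit_data.error_messages[{}]".format(i)]
            if not error_message:
                break
            entry = {}
            for kv_pair in re.findall(rb"\[[^ ]+ [^\]]+\]", error_message):
                delim_index = kv_pair.index(b" ")
                key = kv_pair[1:delim_index].decode()
                entry[key] = kv_pair[delim_index + 2:-2].decode(errors="replace")
            messages.append(entry)
            i += 1
        msg[self.prefix + "error_messages_json"] = json.dumps(messages)
        return True
The destination template could then reference ${python.error_messages_json} directly; note that $(format-json) would escape it as a plain string rather than nest it as an array.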
May 8 16:40:08 secmgrprd02 syslog-ng[3682]: json-parser(): failed to extract JSON members into name-value pairs. The parsed/extracted JSON payload was not an object; input='2021-05-08T04:40:08.374Z nodappprd01 ERAServer 3568 - - {"event_type":"FilteredWebsites_Event","ipv4":"172.24.45.167","hostname":"md378033.uoa.auckland.ac.nz","source_uuid":"a2ac336a-61a8-4a7d-8423-82756293da47","occured":"08-May-2021 04:24:26","severity":"Warning","event":"An attempt to connect to URL","target_address":"127.0.0.1","target_address_type":"IPv4","scanner_id":"HTTP filter","action_taken":"blocked","object_uri":"localhost.auckland.ac.nz","hash":"3A973AEF21BDDD57A32468471EFB577E15CDEB53","username":"UOA\\cnim002","processname":"C:\\Users\\cnim002\\AppData\\Local\\Mozilla Firefox\\firefox.exe","rule_id":"Website certificate revoked"}', extract_prefix='(null)'
Thanks Yash, your post prompted me to RTFM (again ;) and I see that it is supposed to parse the MESSAGE by default. So I am now doubly puzzled why this is failing.
hmmm... looking at the pcap (full message in previous message) we see
0x0060: 3220 2d20 2d20 efbb bf7b 2265 7665 6e74 2.-.-....{"event
I can't figure out what the non-ASCII efbbbf (less renders this as <U+FEFF>) is about, but I now suspect that it is the cause of the problem. This is being sent by the eset application??? There is an option for length framing but I have that disabled. This does not look like a length!
I am now using a program destination with a template that includes only the MESSAGE macro. It gets the whole record!
My guess is that this chunk of non-ASCII at the start of the message is breaking syslog-ng's parsing of the message.
Presumably this is a bug in eset. Does anyone use syslog from eset?
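For what it's worth, EF BB BF is the UTF-8 byte order mark (U+FEFF), which matches what less shows. If that is indeed what trips the parsing, one possible workaround is a tiny python parser placed ahead of json-parser() that strips those bytes from the front of MESSAGE; a sketch only, with a made-up class name:
# Sketch of a python parser that drops a leading EF BB BF (UTF-8 BOM) from
# MESSAGE before any JSON parsing runs. The class name is hypothetical; wire
# it in with python(class("BomStripParser")) ahead of json-parser().
class BomStripParser(object):
    def init(self, options):
        return True

    def parse(self, msg):
        message = msg["MESSAGE"]
        if message.startswith(b"\xef\xbb\xbf"):
            msg["MESSAGE"] = message[3:]
        return True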