I had a full queue issue:
[SplunkHandler] Log queue full; log data will be dropped.
so I investigated the cause, since I couldn't find anything excessive in my logging bandwidth. The problem became clear when I read this comment in the code:
# without looking at each item, estimate how many can fit in 50 MB
but the empty_queue function is draining at most 0.5 MB from the queue every 10 s, so the queue builds up continuously until it reaches the default maximum of 5000 elements, and then log data is dropped.
apprx_size_base is essentially the number of characters, i.e. bytes, per record, and 524288 is not 50 MB but only 0.5 MB:
count = min(max(int(524288 / apprx_size_base), 1), len(self.queue))
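To make the size mismatch concrete, here is a minimal sketch of that estimate, assuming an average record size of about 1 KB (my assumption, not a measured value):

# Hypothetical illustration of the batch-size estimate.
apprx_size_base = 1024                 # assumed average record size in bytes (~1 KB)
print(524288 / (1024 * 1024))          # 0.5 -> the budget is 0.5 MB, not 50 MB
print(int(524288 / apprx_size_base))   # 512 records drained per flush interval

So with ~1 KB records, each flush takes at most 512 records off the queue.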
So the correction is:
count = min(max(int(52428800 / apprx_size_base), 1), len(self.queue))
to be in line with the comment above and to ensure a reasonable average bandwidth of 50 MB/10 s instead of 0.5 MB/10 s (~50 kB/s of logs).
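As a rough model of why the queue fills, here is a small simulation; the 1 KB average record size, 100 records/s incoming rate, 10 s flush interval, and default max_queue_size of 5000 are all assumed numbers for illustration, not measurements:

# Rough simulation of queue growth under the 0.5 MB-per-flush budget.
avg_record_bytes = 1024      # assumed average record size
flush_interval_s = 10        # assumed flush interval
incoming_per_s = 100         # assumed incoming log rate
max_queue_size = 5000        # default queue bound

drained_per_flush = int(524288 / avg_record_bytes)      # 512 records per 10 s
incoming_per_flush = incoming_per_s * flush_interval_s  # 1000 records per 10 s

queue_len = 0
dropped = 0
for _ in range(20):  # 20 flush cycles, ~200 s
    queue_len += incoming_per_flush
    if queue_len > max_queue_size:
        dropped += queue_len - max_queue_size  # the "Log queue full" drops
        queue_len = max_queue_size
    queue_len -= min(drained_per_flush, queue_len)

print(queue_len, dropped)  # queue pinned near its cap, steadily dropping

With 52428800 in place of 524288, drained_per_flush becomes 51200 records per flush and the queue never accumulates at this rate.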
For reference, the line in question is splunk_handler/splunk_handler/__init__.py, line 314 in ebb4f5f:
count = min(max(int(524288 / apprx_size_base), 1), len(self.queue))