Dead Letter Queue is never cleared after processing. #20

Open
kreiger opened this issue Feb 13, 2019 · 4 comments

Comments

kreiger commented Feb 13, 2019

Steps to Reproduce:

  1. Logstash fails to write events to Elasticsearch for some reason. In our case, a mapping error.
  2. Logstash writes the events to the DLQ.
  3. A second pipeline reads the DLQ using logstash-input-dead_letter_queue and writes the events somewhere else, in our case Kafka (see the sketch below).
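
A minimal sketch of such a second pipeline, assuming the default DLQ location and a local Kafka broker (the path, broker address, and topic name are placeholders, not taken from our setup):

    # Second pipeline: replay DLQ events into Kafka
    input {
      dead_letter_queue {
        path => "/usr/share/logstash/data/dead_letter_queue"   # assumed default DLQ directory
        pipeline_id => "main"          # DLQ sub-directory written by the failing pipeline
        commit_offsets => true         # remember the read position between runs
      }
    }

    output {
      kafka {
        bootstrap_servers => "localhost:9092"   # placeholder broker
        topic_id => "logstash-dlq"              # placeholder topic
        codec => json
      }
    }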

Expected behavior:

DLQ is cleared after processing.

Actual behavior:

DLQ is not cleared after processing.
DLQ becomes full.
Logs are spammed with warnings about DLQ being full.
Log events are lost once DLQ is full.

@pdscopes

I have the same issue using the DLQ input. I think there should be a configuration option to enable/disable clearing of the DLQ after processing, defaulting to false.
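
For illustration only, the requested behaviour could look something like this on the input (clear_after_commit is a hypothetical option name used here just to sketch the idea):

    input {
      dead_letter_queue {
        path => "/usr/share/logstash/data/dead_letter_queue"
        commit_offsets => true
        # hypothetical option: delete DLQ segments once their events have been
        # read and their offsets committed
        clear_after_commit => true
      }
    }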

naimmalek commented Aug 5, 2020

Having the same issue. I agree with @pdscopes that there should be a configuration option to clear events from the DLQ.

Cr3idhne commented Aug 7, 2020

Will this be available soon in a future release? Facing the same issue; it would be great to have the functionality to perform this as mentioned by @pdscopes. Seen multiple discuss.elastic.co topics on this as well.

@willemveerman

I've got around this in Kubernetes by configuring an output plugin in Logstash which kills the container if the DLQ exceeds 80% of its pre-configured size.

The DLQ itself is at a path within the container, so when the pod is recycled it is cleared completely - as is Logstash's internal DLQ count.

The sincedb files, which track the log file reading position, are in a directory bind-mounted from the host, so they survive the pod recycle.

    input {
      dead_letter_queue {
        id => "kubernetes_dlq"
        path => "${PATH_DEAD_LETTER_QUEUE}"
        sincedb_path => {{ .Values.logstash.dead_letter_queue_sincedb_path | quote }}
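        # commit the read position to the sincedb so already-processed events are skipped after a restart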
        commit_offsets => true
        tags => ["dlq"]
      }
    }

    # If the DLQ is > 80% full kill the pod
    output {
      if "dlq" in [tags] {
        exec {
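          # compare the DLQ directory size in bytes with 80% of its configured maximum; if larger, kill PID 1 (Logstash) so the pod restarts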
          command => "if (( $(du -sb {{ .Values.logstash.path_dead_letter_queue }}/main/ | cut -f1) > $(( {{ .Values.logstash.dead_letter_queue_max_bytes }}*80/100 )) )); then kill 1; fi &"
        }
      }
    }

Also worth pointing out:

  1. The DLQ check works by sending a kill to the Logstash PID; kill sends a SIGTERM by default: https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html
  2. A SIGTERM causes Logstash to stop gracefully, completing in-flight events before shutting down: https://discuss.elastic.co/t/gracefully-shutdown-logstash/40132
