
input: add configurable retry limit for threaded input ring buffer #11393

@jinyongchoi

Description


Is your feature request related to a problem? Please describe.

When using threaded input plugins (e.g., in_tail with Threaded true), the ring buffer retry limit is hardcoded to 10 attempts with a 100ms sleep between retries (1 second total).

If the ring buffer remains full after these retries, data is dropped with only an error log message. In high-throughput environments with temporary backpressure, this hardcoded limit causes unacceptable data loss that cannot be recovered.

Currently, users have no way to adjust this behavior to match their specific requirements.

Relevant code: flb_input_chunk.c#L2995-L2999
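For reference, the current behavior follows the pattern sketched below. This is a simplified, self-contained illustration, not the literal code in flb_input_chunk.c; the names ring_buffer_write and enqueue_with_retries are hypothetical stand-ins. The point is the shape: a fixed number of attempts, a 100 ms sleep between them, and a drop with an error log once the limit is exhausted.

/* Simplified sketch of the current hardcoded behavior; symbol names are
 * illustrative, not the actual Fluent Bit functions. */
#include <stdio.h>
#include <unistd.h>

#define RB_RETRY_LIMIT  10        /* hardcoded number of attempts     */
#define RB_RETRY_DELAY  100000    /* 100 ms between attempts (usleep) */

/* Stand-in for the real enqueue call: 0 on success, -1 if the buffer is full */
extern int ring_buffer_write(void *rb, void *records, size_t size);

static int enqueue_with_retries(void *rb, void *records, size_t size)
{
    int i;

    for (i = 0; i < RB_RETRY_LIMIT; i++) {
        if (ring_buffer_write(rb, records, size) == 0) {
            return 0;                    /* enqueued successfully */
        }
        usleep(RB_RETRY_DELAY);          /* wait 100 ms and retry */
    }

    /* after ~1 second of retries the records are dropped */
    fprintf(stderr, "could not enqueue records into the ring buffer\n");
    return -1;
}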

Describe the solution you'd like

Add a new configuration option thread.ring_buffer.retry_limit that allows users to configure the maximum number of retry attempts before dropping data.

Example usage:

[INPUT]
    Name tail
    Path /var/log/*.log
    Threaded true
    thread.ring_buffer.retry_limit 1000

  • Default value: 10 (maintains backward compatibility)
  • The option should be available for all threaded input plugins
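
A minimal sketch of how the proposed option could be parsed and applied is below. The property key matches the proposal above; everything else (input_instance, input_set_property) is hypothetical rather than Fluent Bit's actual API, and is only meant to show that the change amounts to replacing the hardcoded constant with a configured bound.

/* Hypothetical sketch: parse the proposed property and use it as the
 * retry bound; names other than the property key are illustrative. */
#include <stdlib.h>
#include <strings.h>

#define RB_RETRY_LIMIT_DEFAULT 10    /* unset -> today's behavior */

struct input_instance {
    int rb_retry_limit;              /* starts at RB_RETRY_LIMIT_DEFAULT */
};

/* Called while applying the key/value pairs of an [INPUT] section */
static int input_set_property(struct input_instance *ins,
                              const char *key, const char *val)
{
    if (strcasecmp(key, "thread.ring_buffer.retry_limit") == 0) {
        long limit = strtol(val, NULL, 10);
        if (limit <= 0) {
            return -1;               /* reject zero/negative values */
        }
        ins->rb_retry_limit = (int) limit;
        return 0;
    }
    return -1;                       /* not a property handled here */
}

/* The enqueue loop would then iterate ins->rb_retry_limit times instead
 * of the hardcoded 10, leaving the 100 ms delay unchanged. */

Note the trade-off: with the 100 ms delay unchanged, a retry limit of 1000 as in the example above means a full ring buffer can block the input thread for up to roughly 100 seconds before records are dropped.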

Describe alternatives you've considered

  1. Increase ring buffer size (thread.ring_buffer option) - Helps but doesn't solve the fundamental issue when backpressure is sustained. Memory consumption increases significantly.

  2. Use filesystem buffering (storage.type filesystem) - Adds disk I/O overhead. Not suitable for all use cases (e.g., ephemeral containers).

  3. Reduce input rate - Not always possible in production environments.

Additional context

We encountered this issue when using in_tail with out_kafka configured with gzip compression. During high-throughput periods, gzip compression became a bottleneck, causing the output to slow down. This backpressure propagated to the input's ring buffer, which filled up and started dropping data after just 1 second of retries.

[INPUT]
    Name tail
    Path /var/log/app/*.log
    Threaded true

[OUTPUT]
    Name kafka
    Brokers kafka:9092
    Topics logs
    compression.codec gzip

[2026/01/23 20:44:51.320209474] [error] [input:tail:input_log] could not enqueue records into the ring buffer

Use cases affected:

  • High-volume log collection (>100k events/sec)
  • Output plugins with CPU-intensive operations (compression, serialization)
  • Environments with intermittent network issues
  • Systems where data loss is unacceptable (audit logs, financial transactions)
