Opened this on the Logstash GitHub, but later found that the Kafka plugin has its own issue tracker, so moving the issue there:
Describe the enhancement:
Right now the Logstash Kafka input reads from all Kafka nodes and partitions, even those in a different zone.
With a new option, the Kafka input should look at its own zone, find the Kafka brokers in the same zone, and consume from them, falling back to brokers in other zones only if the local ones are unavailable or nonexistent, or if the local Kafka partitions are already fully processed, in which case it would jump to unconsumed remote-zone partitions.
This will save costs: having each zone's Logstash talk to the Kafka nodes in its own zone reduces cross-zone traffic. Kafka will rebalance its partitions and replicas between zones, and Filebeat could use a similar feature to further reduce cross-zone traffic costs. A rough sketch of how such an option could look is below.
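A minimal sketch of what this might look like in a pipeline config, assuming the plugin gained an option that forwards Kafka's `client.rack` consumer property (KIP-392 follower fetching). The option name, broker addresses, and zone value here are illustrative, not an existing setting of the plugin:

```
# Illustrative sketch only: a zone-aware Kafka input, assuming a new option
# that maps to the Kafka consumer's `client.rack` property (KIP-392).
input {
  kafka {
    bootstrap_servers => "kafka-az-a.internal:9092,kafka-az-b.internal:9092"
    topics            => ["logs"]
    group_id          => "logstash"
    # Prefer fetching from replicas in this consumer's own zone; fall back
    # to replicas in other zones when no local replica is available.
    client_rack       => "us-east-1a"
  }
}
```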
Check this example:
https://amplitude.engineering/reducing-costs-with-az-awareness-efc92bc7113a