[Bug]: use_control_plane_lb Error: IP not available #1608

JWDobken opened this issue Jan 8, 2025 · 0 comments
Labels: bug

JWDobken commented Jan 8, 2025

Description

I have been running my Kubernetes cluster with this provider for 6 months now, with 3 control planes but with use_control_plane_lb=false. I now want to set it to true, because I need that kind of HA access to the Kube API.

control_plane_nodepools = [
  {
    name        = "control-plane-fsn1",
    server_type = "cpx31",
    location    = "fsn1",
    labels      = [],
    taints      = [],
    count       = 3
  },
]

use_control_plane_lb=true

Terraform plan:


Terraform will perform the following actions:

  # module.kube-hetzner.hcloud_load_balancer.control_plane[0] will be created
  + resource "hcloud_load_balancer" "control_plane" {
      + delete_protection  = false
      + id                 = (known after apply)
      + ipv4               = (known after apply)
      + ipv6               = (known after apply)
      + labels             = {
          + "cluster"     = "***-dev"
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
          + "role"        = "control_plane_lb"
        }
      + load_balancer_type = "lb11"
      + location           = "fsn1"
      + name               = "***-dev-control-plane"
      + network_id         = (known after apply)
      + network_ip         = (known after apply)
      + network_zone       = (known after apply)

      + algorithm (known after apply)

      + target (known after apply)
    }

  # module.kube-hetzner.hcloud_load_balancer_network.control_plane[0] will be created
  + resource "hcloud_load_balancer_network" "control_plane" {
      + enable_public_interface = true
      + id                      = (known after apply)
      + ip                      = "10.255.0.1"
      + load_balancer_id        = (known after apply)
      + subnet_id               = "*****-10.255.0.0/16"
    }

  # module.kube-hetzner.hcloud_load_balancer_service.control_plane[0] will be created
  + resource "hcloud_load_balancer_service" "control_plane" {
      + destination_port = 6443
      + id               = (known after apply)
      + listen_port      = 6443
      + load_balancer_id = (known after apply)
      + protocol         = "tcp"
      + proxyprotocol    = (known after apply)

      + health_check (known after apply)

      + http (known after apply)
    }

  # module.kube-hetzner.hcloud_load_balancer_target.control_plane[0] will be created
  + resource "hcloud_load_balancer_target" "control_plane" {
      + id               = (known after apply)
      + label_selector   = "cluster=***-dev,engine=k3s,provisioner=terraform,role=control_plane_node"
      + load_balancer_id = (known after apply)
      + type             = "label_selector"
      + use_private_ip   = true
    }

Gives me:

module.kube-hetzner.hcloud_load_balancer_service.control_plane[0]: Creation complete after 0s [id=2260954__6443]
╷
│ Error: IP not available (ip_not_available, 2ece748d2c871013594fbb070d04a8b8)
│ 
│   with module.kube-hetzner.hcloud_load_balancer_network.control_plane[0],
│   on .terraform/modules/kube-hetzner/control_planes.tf line 57, in resource "hcloud_load_balancer_network" "control_plane":
│   57: resource "hcloud_load_balancer_network" "control_plane" {
│ 
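
For context, the failing step boils down to something like the sketch below (a minimal standalone version of what the plan above shows; the resource references are illustrative, not taken from my real state). The API reports ip_not_available when the requested fixed address cannot be assigned in that subnet, for example because another resource already holds it or the address is reserved within the network.

# Minimal sketch of the attachment the module attempts (illustrative references)
resource "hcloud_load_balancer_network" "control_plane" {
  load_balancer_id        = hcloud_load_balancer.control_plane.id
  subnet_id               = hcloud_network_subnet.control_plane.id # illustrative
  ip                      = "10.255.0.1" # the fixed private IP requested by the module
  enable_public_interface = true
}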

Kube.tf file

module "kube-hetzner" {
  providers = {
    hcloud = hcloud
  }
  hcloud_token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token

  source = "kube-hetzner/kube-hetzner/hcloud"
  ssh_public_key = file("~/.ssh/default.pub")
  ssh_private_key = null
  network_region = "eu-central"

  control_plane_nodepools = var.control_plane_nodepools

  agent_nodepools = var.agent_nodepools

  load_balancer_type     = "lb11"
  load_balancer_location = "fsn1"
  enable_csi_driver_smb = true
  ingress_controller = "nginx"
  allow_scheduling_on_control_plane = var.allow_scheduling_on_control_plane
  system_upgrade_use_drain = true
  cluster_name = var.cluster_name
  firewall_ssh_source = var.firewall_ssh_source
  extra_firewall_rules = [
    {
      description     = "SMB Protocol IN"
      direction       = "in"
      protocol        = "tcp"
      port            = "445"
      source_ips      = ["0.0.0.0/0", "::/0"]
      destination_ips = [] # Won't be used for this rule
    },
    {
      description     = "SMB Protocol OUT"
      direction       = "out"
      protocol        = "tcp"
      port            = "445"
      source_ips      = [] # Won't be used for this rule
      destination_ips = ["0.0.0.0/0", "::/0"]
    },
    {
      description     = "SMTP Protocol OUT (Google SMTP)"
      direction       = "out"
      protocol        = "tcp"
      port            = "587" # Google SMTP (TLS/STARTTLS)
      source_ips      = []    # Won't be used for this rule
      destination_ips = ["0.0.0.0/0", "::/0"]
    },
    {
      description     = "APIFY PROXY IN"
      direction       = "in"
      protocol        = "tcp"
      port            = "8000"
      source_ips      = ["0.0.0.0/0", "::/0"]
      destination_ips = [] # Won't be used for this rule
    },
    {
      description     = "APIFY PROXY OUT"
      direction       = "out"
      protocol        = "tcp"
      port            = "8000"
      source_ips      = [] # Won't be used for this rule
      destination_ips = ["0.0.0.0/0", "::/0"]
    },
    {
      description     = "Teleport 3023 researchable IN"
      direction       = "in"
      protocol        = "tcp"
      port            = "3023"
      source_ips      = ["0.0.0.0/0", "::/0"]
      destination_ips = [] # Won't be used for this rule
    },
    {
      description     = "Teleport 3023 researchable OUT"
      direction       = "out"
      protocol        = "tcp"
      port            = "3023"
      source_ips      = [] # Won't be used for this rule
      destination_ips = ["0.0.0.0/0", "::/0"]
    },
    {
      description     = "Teleport 3024 researchable IN"
      direction       = "in"
      protocol        = "tcp"
      port            = "3024"
      source_ips      = ["0.0.0.0/0", "::/0"]
      destination_ips = [] # Won't be used for this rule
    },
    {
      description     = "Teleport 3024 researchable OUT"
      direction       = "out"
      protocol        = "tcp"
      port            = "3024"
      source_ips      = [] # Won't be used for this rule
      destination_ips = ["0.0.0.0/0", "::/0"]
    },
    {
      description     = "Teleport 3026 researchable IN"
      direction       = "in"
      protocol        = "tcp"
      port            = "3026"
      source_ips      = ["0.0.0.0/0", "::/0"]
      destination_ips = [] # Won't be used for this rule
    },
    {
      description     = "Teleport 3026 researchable OUT"
      direction       = "out"
      protocol        = "tcp"
      port            = "3026"
      source_ips      = [] # Won't be used for this rule
      destination_ips = ["0.0.0.0/0", "::/0"]
    }
  ]

  enable_cert_manager = true
  dns_servers = [
    "1.1.1.1",
    "8.8.8.8",
    "2606:4700:4700::1111",
  ]
  use_control_plane_lb = var.use_control_plane_lb
  lb_hostname = var.lb_host_name
}

provider "hcloud" {
  token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
}

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.49.1"
    }
  }
}

output "kubeconfig" {
  value     = module.kube-hetzner.kubeconfig
  sensitive = true
}

variable "hcloud_token" {
  sensitive = true
  default   = ""
}

Screenshots

A second load balancer is created but has no Private IP and no targets
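
My guess (a sketch, not verified): the label-selector target uses use_private_ip = true, and such a target can only resolve once the load balancer is attached to the private network, so the failed hcloud_load_balancer_network above would also explain the missing private IP and targets. Roughly:

# Illustrative only: the target from the plan above, which depends on the
# network attachment that failed with ip_not_available
resource "hcloud_load_balancer_target" "control_plane" {
  type             = "label_selector"
  load_balancer_id = hcloud_load_balancer.control_plane.id
  label_selector   = "cluster=***-dev,engine=k3s,provisioner=terraform,role=control_plane_node"
  use_private_ip   = true

  # use_private_ip needs the network attachment to exist first
  depends_on = [hcloud_load_balancer_network.control_plane]
}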


Platform

Mac
