[Bug]: Not able to use dedicated servers FSN1 #1613

Open
bartvanvliet opened this issue Jan 10, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@bartvanvliet

Description

CCX13 servers unavailable in FSN1 despite UI showing availability

CCX13 servers in the FSN1 location are shown as available in the Hetzner Cloud UI (see screenshot), but when attempting to deploy via kube-hetzner, the API returns a resource-unavailability error. Deployment succeeds when using the HEL1 location instead.

Config that fails:

control_plane_nodepools = [
  {
    name        = "elastic-control-plane-fsn1",
    server_type = "ccx13",
    location    = "fsn1",
    count       = 3
  }
]

Error received:

Error: we are unable to provision servers for this location, try with a different location or try later (resource_unavailable)

Working solution:
Changing the location to hel1 for the nodepools and the load balancer works with the same CCX13 server type; see the sketch below.
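
For reference, a minimal sketch of the working variant, assuming the rest of the module configuration stays the same (the nodepool name here is illustrative):

control_plane_nodepools = [
  {
    name        = "elastic-control-plane-hel1",  # illustrative rename
    server_type = "ccx13",
    location    = "hel1",  # was "fsn1"
    count       = 3
  }
]

load_balancer_location = "hel1"  # was "fsn1"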

Kube.tf file

locals {
  hcloud_token = "REDACTED_API_TOKEN"
}

module "kube-hetzner" {
  providers = {
    hcloud = hcloud
  }
  hcloud_token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token

  source = "kube-hetzner/kube-hetzner/hcloud"

  ssh_public_key = file("~/.ssh/id_ed25519.pub")
  ssh_private_key = file("~/.ssh/id_ed25519")

  network_region = "eu-central"

  control_plane_nodepools = [
    {
      name        = "elastic-control-plane-fsn1",
      server_type = "ccx13",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 3
    },
  ]

  agent_nodepools = [
    {
      name        = "fsn1-elastic-agent-large",
      server_type = "ccx13",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 3
    },
  ]

  load_balancer_type     = "lb11"
  load_balancer_location = "fsn1"

  # firewall_kube_api_source = ["REDACTED_IP_RANGE/24"]

  dns_servers = [
    "1.1.1.1",
    "8.8.8.8", 
    "2606:4700:4700::1111",
  ]

  use_control_plane_lb = true
}

provider "hcloud" {
  token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
}

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.49.1"
    }
  }
}

output "kubeconfig" {
  value     = module.kube-hetzner.kubeconfig
  sensitive = true
}

variable "hcloud_token" {
  sensitive = true
  default   = ""
}

Screenshots

(Screenshot: Hetzner Cloud console showing CCX13 as available in FSN1.)

Platform

Mac

bartvanvliet added the bug label on Jan 10, 2025
@air3ijai

What result do you get via the hcloud CLI?

# Authenticate
export HCLOUD_TOKEN="<API Token>"

# List Datacenters
hcloud datacenter list

# List Server Types
hcloud server-type list

# List images
hcloud image list

# Create a server
hcloud server create --datacenter fsn1-dc14 --type ccx33 --image ubuntu-24.04 --name fsn1-ccx33-ubuntu
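
If the test server is created successfully, it can be deleted again once the check is done (using the name from the command above):

# Remove the test server after the availability check
hcloud server delete fsn1-ccx33-ubuntu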

I just did an experiment, and it shows that the availability displayed in the Hetzner console does not necessarily reflect actual instance availability.

Server types ccx23 and ccx33 are shown as unavailable, yet I was able to create servers with them; ccx43, however, failed with the following error:

hcloud: error during placement (resource_unavailable, 8cf3b34cd58bf6ab)


@air3ijai

We can also use the hcloud_server resource to create a server directly:

resource "hcloud_server" "ubuntu" {
  name        = "fsn1-ccx43-ubuntu"
  datacenter  = "fsn1-dc14"
  image       = "ubuntu-24.04"
  server_type = "ccx43"
}

terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
    }
  }
}
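
To apply it, a minimal workflow, assuming the API token is exported as HCLOUD_TOKEN (the hcloud provider reads this environment variable when no token argument is configured):

export HCLOUD_TOKEN="<API Token>"
terraform init
terraform apply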
