Add KubeVirt instructions to tilt.md
Signed-off-by: Johanan Liebermann <[email protected]>
johananl committed Jan 17, 2025
1 parent 2f5d70a commit 776ca83
Showing 1 changed file with 167 additions and 10 deletions: docs/book/src/developer/core/tilt.md

…workflow that offers easy deployments and rapid iterative builds.
## Prerequisites

1. [Docker](https://docs.docker.com/install/): v19.03 or newer (on macOS e.g. via [Lima](https://github.com/lima-vm/lima))
1. [kind](https://kind.sigs.k8s.io): v0.25.0 or newer
1. [Tilt](https://docs.tilt.dev/install.html): v0.30.8 or newer
1. [kustomize](https://github.com/kubernetes-sigs/kustomize): provided via `make kustomize`
1. [envsubst](https://github.com/drone/envsubst): provided via `make envsubst`
1. [helm](https://github.com/helm/helm): v3.7.1 or newer
1. [virtctl](https://kubevirt.io/user-guide/user_workloads/virtctl_client_tool/): v1.4.0 or newer (required for KubeVirt only)
1. [ctlptl](https://github.com/tilt-dev/ctlptl): v0.8.37 or newer (required for KubeVirt only)
1. Clone the [Cluster API](https://github.com/kubernetes-sigs/cluster-api) repository
locally
1. Clone the provider(s) you want to deploy locally as well
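
After installing the tools above, you can optionally confirm they are on your `PATH` (this sanity check is not part of the official setup; `kustomize` and `envsubst` are provided via `make` and omitted here, and `virtctl`/`ctlptl` are only needed for KubeVirt):

```bash
# Optional sanity check: report which prerequisite tools are installed.
for tool in docker kind tilt helm virtctl ctlptl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```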

## Getting started

### Create a kind cluster

The following CAPI infrastructure providers are suitable for local development:

- [CAPD](https://github.com/kubernetes-sigs/cluster-api/blob/main/test/infrastructure/docker/README.md) - uses Docker containers as workload cluster nodes
- [CAPK](https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt) - uses KubeVirt VMs as workload cluster nodes

CAPD is the default because it is more lightweight and requires less setup. CAPK is useful when
Docker containers are not suitable as workload cluster nodes.

{{#tabs name:"tab-management-cluster-creation" tabs:"Docker,KubeVirt"}}
{{#tab Docker}}

A script to create a kind cluster along with a local Docker registry and the correct mounts to run CAPD is included in the `hack/` folder.

To create a pre-configured cluster run:

```bash
./hack/kind-install-for-capd.sh
```
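
If the script succeeds, you should end up with a kind cluster plus a local registry container. A quick way to confirm (the `capi-test` cluster name and the `kind-registry` container name are the script's conventions at the time of writing and may differ in your checkout):

```bash
# The script creates a kind cluster named capi-test.
kind get clusters

# The local registry runs as a plain Docker container; its name (assumed
# here to be kind-registry) is an implementation detail of the script.
docker ps --filter "name=kind-registry"
```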

{{#/tab }}
{{#tab KubeVirt}}

Create a ctlptl configuration for a kind cluster with a local container registry:

```bash
# Docker Hub credentials are required to avoid rate limiting
export DOCKER_CONFIG_FILE="$HOME/.docker/config.json"

cat <<EOF > kind-kubevirt.yaml
apiVersion: ctlptl.dev/v1alpha1
kind: Registry
name: ctlptl-registry
port: 5000
---
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
registry: ctlptl-registry
kindV1Alpha4Cluster:
name: capi-test
nodes:
- role: control-plane
extraMounts:
- containerPath: /var/lib/kubelet/config.json
hostPath: $DOCKER_CONFIG_FILE
networking:
disableDefaultCNI: true
EOF
```
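
Note that the heredoc delimiter above is unquoted, so your shell expands `$DOCKER_CONFIG_FILE` before the file is written; the generated YAML contains the resolved path, not the variable name. A minimal illustration of the mechanism:

```bash
export DOCKER_CONFIG_FILE="$HOME/.docker/config.json"

# Unquoted delimiter: the variable is substituted when the heredoc is read.
cat <<EOF
hostPath: $DOCKER_CONFIG_FILE
EOF

# A quoted delimiter (<<'EOF') would write the text literally instead.
```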

<aside class="note">

The default kind CNI doesn't work with KubeVirt. Therefore, we disable it in the config above and
use Calico instead (see below).

</aside>

Create the kind cluster and local registry:

```bash
ctlptl apply -f kind-kubevirt.yaml
```

Deploy Calico as the CNI:

```bash
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/calico.yaml
```
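
Calico can take a minute or two to become ready. Before continuing, you can optionally wait for the Calico pods (the `k8s-app=calico-node` label matches the upstream manifest linked above, but verify it against the version you deployed):

```bash
# Wait for the Calico node agents to report Ready.
kubectl wait pods -n kube-system -l k8s-app=calico-node --for=condition=Ready --timeout=10m
```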

<aside class="note">

We need to be able to expose the API servers of workload clusters using Kubernetes services of type
`LoadBalancer`. In cloud environments these services are usually handled by a cloud controller
manager, which creates cloud load balancers and maps them to the workload cluster nodes. In this
guide we use MetalLB, a software-based load balancer, for the same purpose. Alternative
load-balancing solutions may work as well.

</aside>

Install MetalLB as a load balancing solution:

```bash
METALLB_VER=$(curl "https://api.github.com/repos/metallb/metallb/releases/latest" | jq -r ".tag_name")
kubectl apply -f "https://raw.githubusercontent.com/metallb/metallb/${METALLB_VER}/config/manifests/metallb-native.yaml"
kubectl wait pods -n metallb-system -l app=metallb,component=controller --for=condition=Ready --timeout=10m
kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for=condition=Ready --timeout=2m
```

Apply a custom MetalLB configuration to expose Kubernetes services using ARP:

```bash
GW_IP=$(docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' kind)
NET_IP=$(echo ${GW_IP} | sed -E 's|^([0-9]+\.[0-9]+)\..*$|\1|g')
cat <<EOF | sed -E "s|172.19|${NET_IP}|g" | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: capi-ip-pool
namespace: metallb-system
spec:
addresses:
- 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: empty
namespace: metallb-system
EOF
```
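
The `sed` pipeline above rewrites the example `172.19` prefix so the address pool lands inside the kind Docker network, keeping only the first two octets of the network's gateway address. A self-contained sketch of the substitution, using a made-up gateway address:

```bash
# Example only: on a real host, GW_IP comes from `docker network inspect`.
GW_IP="172.18.0.1"

# Keep the first two octets of the gateway address.
NET_IP=$(echo "${GW_IP}" | sed -E 's|^([0-9]+\.[0-9]+)\..*$|\1|g')

echo "${NET_IP}"                              # 172.18
echo "${NET_IP}.255.200-${NET_IP}.255.250"    # 172.18.255.200-172.18.255.250
```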

<aside class="note">

Note that the CAPI KubeVirt infrastructure provider (CAPK) and KubeVirt itself are separate
projects: KubeVirt can be used on a Kubernetes cluster independently of CAPI.

KubeVirt must be deployed to the kind cluster before CAPK can work, since CAPK instructs
KubeVirt to create and destroy VMs.

</aside>

Deploy KubeVirt to the kind cluster:

```bash
KV_VER=$(curl "https://api.github.com/repos/kubevirt/kubevirt/releases/latest" | jq -r ".tag_name")
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-operator.yaml"
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-cr.yaml"
kubectl wait -n kubevirt kv kubevirt --for=condition=Available --timeout=10m
```
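
Once KubeVirt reports `Available`, you can optionally smoke-test it with a throwaway VM before wiring up CAPK. The manifest below is an illustrative example, not part of the official setup; the `cirros-container-disk-demo` image is a small demo disk commonly used in KubeVirt examples:

```bash
# Create a minimal throwaway VM to verify KubeVirt can boot VMs.
cat <<EOF | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-smoke-test
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 128Mi
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
EOF

# Wait for the VMI to boot, then attach to its serial console (exit with Ctrl+]).
kubectl wait vmi vmi-smoke-test --for=condition=Ready --timeout=5m
virtctl console vmi-smoke-test

# Clean up.
kubectl delete vmi vmi-smoke-test
```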

{{#/tab }}
{{#/tabs }}

You can see the status of the cluster with:

```bash
kubectl cluster-info --context kind-capi-test
```

### Create a tilt-settings file

Next, create a `tilt-settings.yaml` file and place it in your local copy of `cluster-api`.

{{#tabs name:"tab-tilt-settings" tabs:"Docker,KubeVirt"}}
{{#tab Docker}}

```yaml
default_registry: gcr.io/your-project-name-here
enable_providers:
- docker
- kubeadm-bootstrap
- kubeadm-control-plane
```
{{#/tab }}
{{#tab KubeVirt}}
```yaml
enable_providers:
- kubevirt
- kubeadm-bootstrap
- kubeadm-control-plane
provider_repos:
# Path to a local clone of CAPK (replace with actual path)
- ../cluster-api-provider-kubevirt
kustomize_substitutions:
# CAPK needs access to the containerd socket (replace with actual path)
CRI_PATH: "/var/run/containerd/containerd.sock"
# An example - replace with an appropriate container disk image for the desired k8s version
NODE_VM_IMAGE_TEMPLATE: "quay.io/capk/ubuntu-2204-container-disk:v1.30.1"
KUBERNETES_VERSION: "v1.30.1"
# Allow deploying CAPK workload clusters from the Tilt UI
template_dirs:
kubevirt:
- ../cluster-api-provider-kubevirt/templates
```
{{#/tab }}
{{#/tabs }}
Other infrastructure providers may be added to the cluster using local clones and a configuration similar to the following:
```yaml
default_registry: gcr.io/your-project-name-here
provider_repos:
- ../cluster-api-provider-aws
enable_providers:
- aws
- kubeadm-bootstrap
- kubeadm-control-plane
```
