- Set up a Kubernetes cluster (OCI)
- 1. Create an OCI API key
- 2. Add that API key to the OCI console
- 3. Collect OCI info
- 4. Add an SSH public key to the OCI VMs
- 5. Go to the VMs' subnet and add a rule to the security list to allow communication between the VMs
- 6. Attach a shared volume to the VMs
- 7. Go to the k8s folder
- 8. Update terraform.tfvars with the OCI and VM info
- 9. Apply k8s
- 10. [Optional] Destroy k8s
- Access the Kubernetes cluster
- 1. Add ingress rules to the security list of the controller VM so that kubectl can reach the cluster
- 2. Open ports in the firewall of the controller VM; fix ethtool cilium_vxlan and iptables
- 3. Get the kubeconfig file `~/.kube/oci_config`
- 4. Tunnel to a control-plane node
- 5. Now you can use `kubectl` to access the cluster
Prerequisites:
- `openssl` installed
- list of VMs in OCI already created
- list of Network Load Balancers (NLB) in OCI already created, with their backend sets pointing to the VMs
```bash
# generate an RSA private key for OCI API access
openssl genrsa -out oci-api-key.pem 2048
chmod 600 oci-api-key.pem
# derive the public key to upload to the OCI console
openssl rsa -pubout -in oci-api-key.pem -out oci-api-key.pub.pem
```
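The console shows the key's fingerprint after you upload the public key; you can also compute it locally with the standard OpenSSL pipeline from Oracle's docs (the two values should match):

```bash
# compute the API key fingerprint locally (same value the OCI console displays)
openssl rsa -pubout -outform DER -in oci-api-key.pem | openssl md5 -c
```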
Copy the configuration snippet that the OCI console generates to `~/.oci/config`, and replace the placeholder line `key_file=<path to your private keyfile> # TODO` with the path to your private key file `oci-api-key.pem`.
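A minimal sketch of the resulting file; every value below is a placeholder, so use the ones from your console preview:

```ini
# ~/.oci/config (placeholder values)
[DEFAULT]
user=ocid1.user.oc1..<your-user-ocid>
fingerprint=12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0
tenancy=ocid1.tenancy.oc1..<your-tenancy-ocid>
region=ap-singapore-1
key_file=/path/to/oci-api-key.pem
```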
```bash
# ssh to each VM and append your public key to authorized_keys
echo "some-ssh-public-key" >> ~/.ssh/authorized_keys
# check that the key was added
cat ~/.ssh/authorized_keys
```
- Create a shared block volume in OCI
- Attach the volume to the VMs
- Follow the instructions at https://blogs.oracle.com/cloud-infrastructure/post/using-the-multiple-instance-attach-block-volume-feature-to-create-a-shared-file-system-on-oracle-cloud-infrastructure
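If the volume is attached via iSCSI, the OCI console shows per-VM attach commands in the attachment details; they follow this shape (the IQN and IP below are placeholders; copy the exact commands from the console):

```bash
# placeholder IQN/IP; use the exact commands from the OCI console attachment details
sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:<volume-iqn> -p 169.254.2.2:3260
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:<volume-iqn> -n node.startup -v automatic
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:<volume-iqn> -p 169.254.2.2:3260 -l
lsblk  # the shared volume should now appear as a new block device on every VM
```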
```bash
cd k8s
```
Get the values from the OCI console and `~/.oci/config`. `ssh_private_key` is the private key matching the public key added to the VMs in step 4 (Add an SSH public key to the OCI VMs).
```hcl
# terraform.tfvars
ocis = [
  {
    name             = "name"
    user             = "ocid-of-user"       # from ~/.oci/config
    fingerprint      = "fingerprint-of-oci" # from ~/.oci/config
    tenancy          = "ocid-of-tenancy"    # from ~/.oci/config
    region           = "ap-singapore-1"
    api_key_path     = "/path/to/oci-api-key.pem"
    api_pub_key_path = "/path/to/oci-api-key.pub.pem"
    instances = [
      {
        id               = "ocid1"
        name             = "node-1"
        is_control_plane = true
      },
      {
        id               = "ocid2"
        name             = "node-2"
        is_control_plane = false
      }
    ],
    nlbs = [{
      id   = "ocid-of-nlb"
      name = "nlb-1"
    }]
  }
]

ssh_private_key   = "/path/to/ssh-private-key" # default is ~/.ssh/id_rsa
registry_htpasswd = "your-password-for-registry"
```
```bash
# inside the k8s/ folder
terraform init -reconfigure -upgrade
terraform apply
```
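To preview what will change before applying, a dry run is available:

```bash
terraform plan  # shows the planned changes; nothing is created or modified
```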
```bash
# inside the k8s/ folder
terraform destroy
```
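For step 1 of accessing the cluster (the ingress rules), you can use the OCI console or the OCI CLI. A minimal sketch with the CLI, assuming `<security-list-ocid>` is the security list of the controller VM's subnet and `<your-ip>` is your workstation's address; note that this call replaces the entire ingress rule list, so include the existing rules in the JSON (or simply add the rule in the console):

```bash
# hypothetical OCID/CIDR; opens 6443/tcp so kubectl can reach the API server
oci network security-list update \
  --security-list-id <security-list-ocid> \
  --ingress-security-rules '[
    {"protocol": "6", "source": "<your-ip>/32",
     "tcpOptions": {"destinationPortRange": {"min": 6443, "max": 6443}}}
  ]'
```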
For step 2, open the required ports on the controller VM:

```bash
# Oracle Linux 8, ref: https://linuxconfig.org/redhat-8-open-and-close-ports
# 6443 API server, 2379-2380 etcd, 10250 kubelet, 10256 kube-proxy,
# 10257 controller-manager, 10259 scheduler, 30000-32767 NodePort range,
# 51820/udp WireGuard (Cilium encryption)
sudo firewall-cmd --permanent --zone=public --add-service=http --add-service=https
sudo firewall-cmd --permanent --zone=public \
  --add-port 80/tcp \
  --add-port 443/tcp \
  --add-port 51820/udp \
  --add-port 6443/tcp \
  --add-port 2379-2380/tcp \
  --add-port 9254/tcp \
  --add-port 10250/tcp \
  --add-port 10256/tcp \
  --add-port 10257/tcp \
  --add-port 10259/tcp \
  --add-port 30000-32767/tcp
sudo firewall-cmd --permanent --zone=trusted --add-source=10.0.0.0/8
sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-all  # check
sudo firewall-cmd --zone=trusted --list-all # check
```
```bash
# fix ethtool cilium_vxlan: disable tx checksum offload on the Cilium VXLAN interface
sudo ethtool --offload cilium_vxlan tx-checksum-ip-generic off

# fix iptables: set the default policies to ACCEPT and flush existing rules
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F
```
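Both fixes are runtime-only: they do not survive a reboot (and the ethtool setting is lost if the cilium_vxlan interface is recreated), so re-run them after restarting a node. To verify the result:

```bash
sudo iptables -S  # the default policies should now show as ACCEPT
```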
Use `rsync` to copy the file from a control-plane node:

```bash
rsync -chavzP --stats --rsync-path="sudo rsync" opc@control-plane-node-public-ip:/etc/kubernetes/admin.conf ~/.kube/oci_config
```
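The copied admin.conf points at the control plane's in-cluster address; since access here goes through the SSH tunnel below, you will likely need to repoint it at the tunnel. A sketch using the standard kubectl config commands, assuming kubeadm's default cluster name `kubernetes` (check yours first):

```bash
# verify the cluster name, then point it at the tunnel endpoint;
# --tls-server-name keeps certificate validation working (needs a recent kubectl)
kubectl --kubeconfig ~/.kube/oci_config config get-clusters
kubectl --kubeconfig ~/.kube/oci_config config set-cluster kubernetes \
  --server=https://localhost:6443 --tls-server-name=kubernetes
```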
Open the tunnel and keep the session running:

```bash
ssh -L 6443:localhost:6443 opc@control-plane-node-public-ip
```
```bash
export KUBECONFIG=~/.kube/oci_config
kubectl get nodes
kubectl get pods --all-namespaces
```