- Linux or macOS (Windows isn't supported at the moment)
- A set of AWS credentials sufficient to bootstrap the cluster (see bootstrapping-aws-identity-and-access-management-with-cloudformation).
- An AWS IAM role to give to the Cluster API control plane.
- Minikube version v0.30.0 or later
- kubectl
- kustomize
- make
- gettext (with `envsubst` in your PATH)
- bazel
Get the latest release of `clusterctl` and `clusterawsadm` and place them in your path. If a release isn't available, or if you prefer to build the latest version from master, you can run `go get sigs.k8s.io/cluster-api-provider-aws/...`; the trailing `...` asks for both `clusterctl` and `clusterawsadm` to be built.
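Before going further, it can help to confirm that the required binaries are actually on your PATH. A minimal sketch; the `require_bins` helper is ours, not part of the project:

```shell
# Hypothetical helper: report any missing binaries on stderr and
# return non-zero if at least one is not found on the PATH.
require_bins() {
  local missing=0 bin
  for bin in "$@"; do
    if ! command -v "$bin" >/dev/null 2>&1; then
      echo "$bin not found in PATH" >&2
      missing=1
    fi
  done
  return $missing
}

# Check the tools this guide relies on:
# require_bins clusterctl clusterawsadm kubectl kustomize minikube
```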
Before launching `clusterctl`, you need to define a few environment variables (`AWS_REGION`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`). You thus need an AWS user with sufficient permissions:
- You can create that user and assign the permissions manually.
- Or you can use the `clusterawsadm` tool.
`clusterawsadm` is a helper utility for quickly setting up these prerequisites. It can be installed as described in the previous section, by either downloading a release or using `go get` to build it.
NOTE: The `clusterawsadm` command requires a working AWS environment.
NOTE: Your credentials must let you make changes in AWS Identity and Access Management (IAM), and use CloudFormation.
```shell
export AWS_REGION=us-east-1
clusterawsadm alpha bootstrap create-stack
```
You will need to specify the name of an existing SSH key pair within the region you plan on using. If you don't have one yet, create one using one of the following methods.
Bash:

```shell
# Save the output to a secure location
aws ec2 create-key-pair --key-name cluster-api-provider-aws.sigs.k8s.io | jq .KeyMaterial -r
-----BEGIN RSA PRIVATE KEY-----
[... contents omitted ...]
-----END RSA PRIVATE KEY-----
```
PowerShell:

```powershell
(New-EC2KeyPair -KeyName cluster-api-provider-aws.sigs.k8s.io).KeyMaterial
-----BEGIN RSA PRIVATE KEY-----
[... contents omitted ...]
-----END RSA PRIVATE KEY-----
```
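Whichever command you use, the key material is only shown once, so it is worth piping it straight into a file with owner-only permissions. A sketch; the `save_ssh_key` helper name is ours, not part of any tool:

```shell
# Hypothetical helper: write key material from stdin to a file that
# only the owner can read, as SSH requires for private keys.
save_ssh_key() {
  # Restrictive umask so the file is created 0600 from the start.
  ( umask 077; cat > "$1" )
  chmod 600 "$1"   # enforce owner-only perms even if the file pre-existed
}

# Example usage (assumes the AWS CLI and jq are installed and configured):
# aws ec2 create-key-pair --key-name cluster-api-provider-aws.sigs.k8s.io \
#   | jq .KeyMaterial -r | save_ssh_key ~/.ssh/cluster-api-provider-aws.pem
```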
If you want to save the private key directly into AWS Systems Manager Parameter Store with KMS encryption for security, you can use the following command:
Bash:

```shell
aws ssm put-parameter --name "/sigs.k8s.io/cluster-api-provider-aws/ssh-key" \
  --type SecureString \
  --value "$(aws ec2 create-key-pair --key-name cluster-api-provider-aws.sigs.k8s.io | jq .KeyMaterial -r)"
{
    "Version": 1
}
```
PowerShell:

```powershell
Write-SSMParameter -Name "/sigs.k8s.io/cluster-api-provider-aws/ssh-key" `
  -Type SecureString `
  -Value (New-EC2KeyPair -KeyName cluster-api-provider-aws.sigs.k8s.io).KeyMaterial
1
```
Bash:

```shell
# Replace with your own public key
aws ec2 import-key-pair --key-name cluster-api-provider-aws.sigs.k8s.io \
  --public-key-material "$(cat ~/.ssh/id_rsa.pub)"
```
PowerShell:

```powershell
$publicKey = [System.Convert]::ToBase64String( `
  [System.Text.Encoding]::UTF8.GetBytes((Get-Content ~/.ssh/id_rsa.pub)))
Import-EC2KeyPair -KeyName cluster-api-provider-aws.sigs.k8s.io -PublicKeyMaterial $publicKey
```
NOTE: Only RSA keys are supported by AWS.
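Because EC2 rejects non-RSA keys, a quick pre-check before importing can save a confusing error. A sketch; the helper name is ours:

```shell
# Hypothetical helper: return success only if the given OpenSSH public
# key file starts with the "ssh-rsa" key type.
is_rsa_pubkey() {
  [ "$(awk '{print $1; exit}' "$1")" = "ssh-rsa" ]
}

# Example usage:
# is_rsa_pubkey ~/.ssh/id_rsa.pub && echo "OK to import into EC2"
```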
Minikube needs to be installed on your local machine, as this is what will be used by the Cluster API to bootstrap your cluster in AWS.
Instructions for setting up minikube are available on the Kubernetes website.
NOTE: `minikube start` is NOT idempotent, and running it twice will likely damage your minikube. Since `clusterctl` runs `minikube start`, it is important to run `minikube delete` prior to `clusterctl create`. See "troubleshooting" below for more on how to recover from running `clusterctl create` with an already running minikube.
At present, the Cluster API provider runs minikube to create a new instance, but requires Kubernetes 1.12 and the kubeadm bootstrap method to work properly, so we configure Minikube as follows:
```shell
minikube config set kubernetes-version v1.12.1
minikube config set bootstrapper kubeadm
```
If you already had a running `minikube`, be sure to remove it:

```shell
minikube delete
```
The current iteration of the Cluster API Provider AWS relies on credentials being present in your environment. These then get written into the cluster manifests for use by the controllers.
Bash:

```shell
# Region used to deploy the cluster in.
export AWS_REGION=us-east-1

# User access credentials.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# SSH key to be used to run instances.
export SSH_KEY_NAME="cluster-api-provider-aws.sigs.k8s.io"
```
PowerShell:

```powershell
$ENV:AWS_REGION = "us-east-1"
$ENV:AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"
$ENV:AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
$ENV:SSH_KEY_NAME = "cluster-api-provider-aws.sigs.k8s.io"
```
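Since the controllers read these variables from your environment, a quick sanity check before launching `clusterctl` can catch a forgotten export. A bash sketch; the `check_cluster_env` helper name is ours:

```shell
# Hypothetical helper: fail fast if any variable the provider expects
# is unset or empty, listing the missing ones on stderr.
check_cluster_env() {
  local missing=0 v
  for v in AWS_REGION AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY SSH_KEY_NAME; do
    if [ -z "${!v}" ]; then      # bash indirect expansion
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return $missing
}

# Example usage:
# check_cluster_env || exit 1
```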
If you applied the CloudFormation template above, an IAM user was created for you:
Bash:

```shell
export AWS_CREDENTIALS=$(aws iam create-access-key \
  --user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io)
export AWS_ACCESS_KEY_ID=$(echo $AWS_CREDENTIALS | jq .AccessKey.AccessKeyId -r)
export AWS_SECRET_ACCESS_KEY=$(echo $AWS_CREDENTIALS | jq .AccessKey.SecretAccessKey -r)
```
PowerShell:

```powershell
$awsCredentials = New-IAMAccessKey -UserName bootstrapper.cluster-api-provider-aws.sigs.k8s.io
$ENV:AWS_ACCESS_KEY_ID = $awsCredentials.AccessKeyId
$ENV:AWS_SECRET_ACCESS_KEY = $awsCredentials.SecretAccessKey
```
NOTE: To save credentials securely in your environment, aws-vault uses the OS keystore as permanent storage, and offers shell features to securely expose and set up local AWS environments.
There is a make target `manifests` that can be used to generate the cluster manifests:

```shell
make manifests
```
Then edit `cmd/clusterctl/examples/aws/out/cluster.yaml` and `cmd/clusterctl/examples/aws/out/machines.yaml`. Ensure that `keyName` is set to the `cluster-api-provider-aws.sigs.k8s.io` key pair we set up above. This is also an opportunity to edit the AWS region and apply any other customisations you want to make.
Note: The generated manifests may refer to a key pair named `default`, which differs from the key pair created in this guide. That can be overridden by setting the `SSH_KEY_NAME` env var before running `make manifests`.
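After editing, you can grep the generated machine manifest to confirm the intended key pair took effect. This sketch assumes the manifest stores the key under a `keyName:` field as described above; the helper name is ours:

```shell
# Hypothetical helper: succeed only if the manifest references the
# expected key pair name in a keyName field.
check_key_name() {
  local manifest="$1" expected="$2"
  grep -q "keyName: ${expected}" "$manifest"
}

# Example usage:
# check_key_name cmd/clusterctl/examples/aws/out/machines.yaml \
#   cluster-api-provider-aws.sigs.k8s.io
```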
If you haven't already, set up your environment in the terminal session you're working in.
You can now start the Cluster API controllers and deploy a new cluster in AWS:
Bash:

```shell
clusterctl create cluster -v2 --provider aws \
  -m ./cmd/clusterctl/examples/aws/out/machines.yaml \
  -c ./cmd/clusterctl/examples/aws/out/cluster.yaml \
  -p ./cmd/clusterctl/examples/aws/out/provider-components.yaml \
  -a ./cmd/clusterctl/examples/aws/out/addons.yaml
I1018 01:21:12.079384   16367 clusterdeployer.go:94] Creating bootstrap cluster
I1018 01:21:12.106882   16367 clusterdeployer.go:111] Applying Cluster API stack to bootstrap cluster
I1018 01:21:12.106901   16367 clusterdeployer.go:300] Applying Cluster API Provider Components
I1018 01:21:12.106909   16367 clusterclient.go:505] Waiting for kubectl apply...
I1018 01:21:12.460755   16367 clusterclient.go:533] Waiting for Cluster v1alpha resources to become available...
I1018 01:21:12.464840   16367 clusterclient.go:546] Waiting for Cluster v1alpha resources to be listable...
I1018 01:21:12.517706   16367 clusterdeployer.go:116] Provisioning target cluster via bootstrap cluster
I1018 01:21:12.517722   16367 clusterdeployer.go:118] Creating cluster object aws-provider-test1 on bootstrap cluster in namespace "aws-provider-system"
I1018 01:21:12.524912   16367 clusterdeployer.go:123] Creating master in namespace "aws-provider-system"
```
PowerShell:

```powershell
clusterctl create cluster -v2 --provider aws `
  -m ./cmd/clusterctl/examples/aws/out/machines.yaml `
  -c ./cmd/clusterctl/examples/aws/out/cluster.yaml `
  -p ./cmd/clusterctl/examples/aws/out/provider-components.yaml `
  -a ./cmd/clusterctl/examples/aws/out/addons.yaml
I1018 01:21:12.079384   16367 clusterdeployer.go:94] Creating bootstrap cluster
I1018 01:21:12.106882   16367 clusterdeployer.go:111] Applying Cluster API stack to bootstrap cluster
I1018 01:21:12.106901   16367 clusterdeployer.go:300] Applying Cluster API Provider Components
...
```
The minikube cluster that gets created is ephemeral and should be deleted automatically once cluster creation succeeds. During cluster creation, the minikube configuration is written to `minikube.kubeconfig` in the directory from which you launched the `clusterctl` command.
For a more in-depth look into what `clusterctl` is doing during this `create` step, please see the clusterctl document.
`minikube logs -f` will tail the logs of the bootstrap cluster as it comes up. If you see a message like the following:
```
Oct 30 16:52:13 minikube kubelet[3055]: E1030 16:52:13.023286 3055 pod_workers.go:186] Error syncing pod 9037f4a5-dc63-11e8-9de5-0800270170d7 ("kube-proxy-qj7x5_kube-system(9037f4a5-dc63-11e8-9de5-0800270170d7)"), skipping: failed to "StartContainer" for "kube-proxy" with ErrImagePull: "rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exit status 1): operation not supported"
```
Then it will be necessary to run these commands to recover:

```shell
minikube delete
sudo rm -rf ~/.minikube
```
Be sure to re-configure minikube as described in the Customizing for Cluster API section.
Controller logs can be tailed using kubectl
:
Bash:

```shell
export KUBECONFIG=./minikube.kubeconfig
kubectl get po -o name -n aws-provider-system | grep aws-provider-controller-manager | xargs kubectl logs -n aws-provider-system -c manager -f
```
PowerShell:

```powershell
$ENV:KUBECONFIG = "minikube.kubeconfig"
kubectl logs -n aws-provider-system -c manager -f `
  $(kubectl get po -o name -n aws-provider-system | Select-String -Pattern "aws-provider-controller-manager")
```