S3 file asset repository URL validation #16760
/kind bug
**1. What `kops` version are you running? The command `kops version`, will display this information.**

Client version: 1.29.2 (git-v1.29.2)
**2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a `kops` flag.**

Server Version: v1.29.7
**3. What cloud provider are you using?**

AWS
**4. What commands did you run? What is the simplest way to reproduce this issue?**

We are configuring a local file asset repository; however, we are running into an issue when trying to update the cluster. We tried to work around #16759 by specifying `fileRepository` as an S3 URL (even though the docs suggest this should not work), and to my surprise kOps accepted it and allowed us to apply it to the cluster. However, upon rolling the first control-plane node, it did not come online and the update failed (somewhat expected).

To reproduce:

1. Set `fileRepository` in the Cluster spec (using an S3 URL as shown below)
2. `kops get assets --copy`
3. `kops update cluster`
4. `kops rolling-update cluster`
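For reference, a minimal sketch of the Cluster spec change from step 1; the cluster name, bucket name, and path here are hypothetical placeholders:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: my.example.com
spec:
  assets:
    # S3 URL: accepted by `kops update cluster`, but new nodes then fail
    # to fetch file assets from it at boot
    fileRepository: s3://example-asset-bucket/kops/
```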
**5. What happened after the commands executed?**

The new node fails to join the cluster and cluster validation fails. Upon SSH'ing into the new node and checking the logs via `journalctl -u cloud-final.service`, we see:

**6. What did you expect to happen?**
I expected the validation of an S3 URL in `fileRepository` to fail before we could apply the changes to the cluster, rather than having new nodes fail to start.

**7. Please provide your cluster manifest. Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.**
**8. Please run the commands with most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or in a gist and provide the gist link here.**
**9. Anything else do we need to know?**