Try to reproduce #19179 #19191
base: main
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: serathius. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 9cc87ab to 9e95d30 (Compare)
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files: see 26 files with indirect coverage changes.

@@            Coverage Diff             @@
##             main   #19191      +/-   ##
==========================================
+ Coverage   68.77%   68.78%   +0.01%
==========================================
  Files         420      420
  Lines       35641    35629      -12
==========================================
- Hits        24511    24507       -4
+ Misses       9707     9700       -7
+ Partials     1423     1422       -1

Continue to review full report in Codecov by Sentry.
Force-pushed from 9e95d30 to 5065b46 (Compare)
Hi @serathius, I can't reproduce it locally. But looking at https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/directory/pull-etcd-robustness-amd64/1877585036438409216, there were only two kinds of requests before the compaction panic: one is creation and the other is deletion. I am still investigating why there is no update before the compaction panic. Hope that information helps.
Force-pushed from 5065b46 to 1c2d615 (Compare)
Signed-off-by: Marek Siarkowicz <[email protected]>
Force-pushed from 1c2d615 to 54813cf (Compare)
@serathius: The following test failed:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
I think I can guess the reason. In Kubernetes traffic, operations are based on local state that is fed from watch. It tries to balance the number of objects within the storage to keep the average. If the watch was very delayed, there would be a long period where the traffic would execute only creates, as it would not be aware of any objects. When the watch caught up, the traffic would skip random operations and immediately go for deletions, as there were too many objects in storage. A rough sketch of that dynamic is below. Still no progress on reproduction.
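A minimal sketch of that hypothesis in Go, assuming a simplified traffic generator; names such as trafficSim, pickOperation, and averageObjects are hypothetical and this is not the actual robustness-test code:

package main

import "fmt"

// Hypothetical sketch, not the real etcd robustness-test traffic code.
// It illustrates the dynamic described above: the generator decides between
// create and delete based on a local object count that is only updated by
// watch events, so a lagging watch skews the decision.

const averageObjects = 10 // target average number of objects in storage (assumed)

type trafficSim struct {
	knownObjects int // object count as seen through the (possibly delayed) watch
}

// pickOperation mirrors the described balancing logic: create while below
// the target average, delete once above it.
func (t *trafficSim) pickOperation() string {
	if t.knownObjects < averageObjects {
		return "create"
	}
	return "delete"
}

func main() {
	sim := &trafficSim{}

	// Phase 1: the watch is stalled, so knownObjects stays at 0 and the
	// generator issues only creates, even though storage is filling up.
	for i := 0; i < 5; i++ {
		fmt.Println("watch delayed:", sim.pickOperation())
	}

	// Phase 2: the watch catches up and delivers the buffered create events
	// at once; knownObjects jumps above the average and the generator
	// immediately switches to deletions.
	sim.knownObjects = 25
	for i := 0; i < 5; i++ {
		fmt.Println("watch caught up:", sim.pickOperation())
	}
}

Under these assumptions the trace would show a long run of creates followed by a burst of deletes with no updates in between, which matches the requests observed before the compaction panic.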
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Please read https://github.com/etcd-io/etcd/blob/main/CONTRIBUTING.md#contribution-flow.
#19179