
Investigate worker (Celery?) issues #2697

Open
lbarcziova opened this issue Jan 15, 2025 · 7 comments
Labels
area/general: Related to whole service, not a specific part/integration.
complexity/single-task: Regular task, should be done within days.
gain/high: This brings a lot of value to (not strictly a lot of) users.
impact/high: This issue impacts multiple/lot of users.
kind/bug: Something isn't working.

Comments

@lbarcziova
Member

lbarcziova commented Jan 15, 2025

After last week's redeployment (6th January), we have started hitting issues with job processing, causing tasks not to be processed for some time and resulting in delays:

  • we could see restarts of the worker pods caused by hitting CPU limits; we tried to mitigate this by increasing the CPU limits (Increase cpu limit to handle spikes better deployment#631), and the limit was also increased for postgres (Increase cpu limits for postgres deployment#636), where metrics also showed it going above the limit
  • sometimes, tasks are not processed in the workers at all, without any task blocking them
  • we could see in logs messages like:
    • Substantial drift from celery@packit-worker-long-running-0 may mean clocks are out of sync. Current drift is 1799 seconds. [orig: 2025-01-14 14:51:59.656603 recv: 2025-01-14 14:22:00.484181]
    • consumer: Connection to broker lost. Trying to re-establish the connection... followed by a restart (see the configuration sketch below)
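
As a starting point for the investigation, here is a minimal sketch of the broker-related knobs that could be relevant to the dropped connections. This is not our actual configuration: the option names come from Celery's/Kombu's Redis transport and would need to be verified against the celery/kombu versions in our images, and the values are only illustrative.

```python
# Hypothetical sketch, not our real setup: the knobs to double-check for the
# "Connection to broker lost" symptom with a Redis/Valkey broker.
from celery import Celery

app = Celery("packit-service", broker="redis://valkey:6379/0")  # broker URL as seen in the worker logs

app.conf.broker_transport_options = {
    # How long a reserved (unacknowledged) task may sit before being redelivered.
    "visibility_timeout": 3600,
    # Keep the TCP connection alive while the worker idles on BRPOP.
    "socket_keepalive": True,
    # How often redis-py sends a PING health check on an otherwise idle connection.
    "health_check_interval": 25,
    # Retry a command once after a socket timeout instead of failing outright.
    "retry_on_timeout": True,
}
# Reconnecting after a lost broker connection is governed by these settings.
app.conf.broker_connection_retry = True
app.conf.broker_connection_max_retries = None  # None means keep retrying
```
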
@lbarcziova added the kind/bug, complexity/single-task, impact/high, area/general and gain/high labels on Jan 15, 2025
@nforro
Member

nforro commented Jan 15, 2025

AFAICT it only affects short-running workers.

  • Substantial drift from celery@packit-worker-long-running-0 may mean clocks are out of sync. Current drift is 1799 seconds. [orig: 2025-01-14 14:51:59.656603 recv: 2025-01-14 14:22:00.484181]

This matches the period of time when no tasks were being processed on that particular worker. There was no such message on the other worker yesterday, but the "idle time" was very similar, and 1799 seconds is almost exactly 30 minutes. It could be just a coincidence, but it seems a bit suspicious to me (though I suppose it could be the interval at which Celery checks for non-responsive workers).
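
Just to make the 30-minute observation concrete: the drift reported in that message appears to be simply the difference between the orig and recv timestamps it prints (at least the numbers match), e.g.:

```python
from datetime import datetime

# Timestamps copied verbatim from the drift warning quoted above.
orig = datetime.fromisoformat("2025-01-14 14:51:59.656603")
recv = datetime.fromisoformat("2025-01-14 14:22:00.484181")

print((orig - recv).total_seconds())  # ~1799.2 s, i.e. just under 30 minutes
```
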

@lbarcziova moved this from new to refined in Packit Kanban Board on Jan 15, 2025
@nforro
Member

nforro commented Jan 15, 2025

After last week's redeployment (6th January)

I checked the images and there were no updates of python3-celery, python3-gevent, or eventlet (is there anything else that could affect this?).

@majamassarini
Member

majamassarini commented Jan 20, 2025

I was releasing Packit and I had to tag both ogr and specfile into side tags.
I tagged them using the staging instance.

Ogr was quick:

1/20/25 11:01:03.432 AM
{
   kubernetes: {
     container_id: cri-o://3e9d68e5cabf75270f05afb43d9f7bff523c45690fbb083b9646ab0119b56132
     container_image: quay.io/packit/packit-worker@sha256:efdd08d40650696a907bbfa0ed2350c238d3a3b5e7d935406b32c24dc720698e
     container_image_id: quay.io/packit/packit-worker@sha256:efdd08d40650696a907bbfa0ed2350c238d3a3b5e7d935406b32c24dc720698e
     container_name: packit-worker
       labels: {
       component: packit-worker-short-running
       controller-revision-hash: packit-worker-short-running-6d4b46669
       paas_redhat_com_appcode: PCKT-002
       statefulset_kubernetes_io_pod-name: packit-worker-short-running-0
     }
     namespace_name: packit--stg
     pod_id: 8e80213d-6467-4004-932b-195c23c7bd29
     pod_ip: 172.20.212.235
     pod_name: packit-worker-short-running-0
     pod_owner: StatefulSet/packit-worker-short-running
   }
   level: info
   log_type: application
   message: [2025-01-20 11:01:03,432: INFO/MainProcess] Task task.steve_jobs.process_message[3f191fbc-b618-40e1-99b1-b93955f38d1e] received
   openshift: {
     cluster_id: d760ecc9-5098-480a-a88f-8913d8ab0ee7
      labels: { … }
     sequence: 1737370863921404400
   }
}
1/20/25 11:01:03.430 AM
{
   kubernetes: {
     container_id: cri-o://ef7f4ffe934db49e3176a1771af0b836b3904bd38f7406e23068db17f9609ed6
     container_image: quay.io/packit/packit-service-fedmsg@sha256:f349e711067c3d24e6dcfbbce783a4ea91af1971ef4e3d076ba36506459f31f7
     container_image_id: quay.io/packit/packit-service-fedmsg@sha256:f349e711067c3d24e6dcfbbce783a4ea91af1971ef4e3d076ba36506459f31f7
     container_name: packit-service-fedmsg
       labels: {
       component: packit-service-fedmsg
       paas_redhat_com_appcode: PCKT-002
     }
     namespace_name: packit--stg
     pod_id: fe77a366-0323-4a96-a125-b1c6c3d1be66
     pod_ip: 172.20.20.37
     pod_name: packit-service-fedmsg-748f798f9c-j6svq
     pod_owner: ReplicaSet/packit-service-fedmsg-748f798f9c
   }
   level: debug
   log_type: application
   message: [2025-01-20 11:01:03,430 DEBUG packit_service_fedmsg.consumer] Task UUID=3f191fbc-b618-40e1-99b1-b93955f38d1e sent to Celery
   openshift: {
     cluster_id: d760ecc9-5098-480a-a88f-8913d8ab0ee7
      labels: { … }
     sequence: 1737370863432772400
   }
}

Specfile took almost 30 minutes to react:

1/20/25 11:31:23.439 AM
{
   kubernetes: {
     container_id: cri-o://3e9d68e5cabf75270f05afb43d9f7bff523c45690fbb083b9646ab0119b56132
     container_image: quay.io/packit/packit-worker@sha256:efdd08d40650696a907bbfa0ed2350c238d3a3b5e7d935406b32c24dc720698e
     container_image_id: quay.io/packit/packit-worker@sha256:efdd08d40650696a907bbfa0ed2350c238d3a3b5e7d935406b32c24dc720698e
     container_name: packit-worker
       labels: {
       component: packit-worker-short-running
       controller-revision-hash: packit-worker-short-running-6d4b46669
       paas_redhat_com_appcode: PCKT-002
       statefulset_kubernetes_io_pod-name: packit-worker-short-running-0
     }
     namespace_name: packit--stg
     pod_id: 8e80213d-6467-4004-932b-195c23c7bd29
     pod_ip: 172.20.212.235
     pod_name: packit-worker-short-running-0
     pod_owner: StatefulSet/packit-worker-short-running
   }
   level: info
   log_type: application
   message: [2025-01-20 11:31:23,439: INFO/MainProcess] Task task.steve_jobs.process_message[faf4f880-1437-470f-9121-9516c1162e43] received
   openshift: {
     cluster_id: d760ecc9-5098-480a-a88f-8913d8ab0ee7
      labels: {
       cluster_id: preprod-spoke-aws-us-east-1
       paas_cluster_appcode: itos-008
       paas_cluster_cloud_provider: aws
       paas_cluster_flavor: mpp
       paas_cluster_id: preprod-spoke-aws-us-east-1
       paas_cluster_name: spoke
       paas_cluster_region: us-east-1
       paas_cluster_service_phase: preprod
     }
     sequence: 1737372683842993200
   }
}
1/20/25 11:05:13.704 AM
{
   kubernetes: {
     container_id: cri-o://ef7f4ffe934db49e3176a1771af0b836b3904bd38f7406e23068db17f9609ed6
     container_image: quay.io/packit/packit-service-fedmsg@sha256:f349e711067c3d24e6dcfbbce783a4ea91af1971ef4e3d076ba36506459f31f7
     container_image_id: quay.io/packit/packit-service-fedmsg@sha256:f349e711067c3d24e6dcfbbce783a4ea91af1971ef4e3d076ba36506459f31f7
     container_name: packit-service-fedmsg
       labels: {
       component: packit-service-fedmsg
       paas_redhat_com_appcode: PCKT-002
     }
     namespace_name: packit--stg
     pod_id: fe77a366-0323-4a96-a125-b1c6c3d1be66
     pod_ip: 172.20.20.37
     pod_name: packit-service-fedmsg-748f798f9c-j6svq
     pod_owner: ReplicaSet/packit-service-fedmsg-748f798f9c
   }
   level: debug
   log_type: application
   message: [2025-01-20 11:05:13,704 DEBUG packit_service_fedmsg.consumer] Task UUID=faf4f880-1437-470f-9121-9516c1162e43 sent to Celery
   openshift: {
     cluster_id: d760ecc9-5098-480a-a88f-8913d8ab0ee7
      labels: {
       cluster_id: preprod-spoke-aws-us-east-1
       paas_cluster_appcode: itos-008
       paas_cluster_cloud_provider: aws
       paas_cluster_flavor: mpp
       paas_cluster_id: preprod-spoke-aws-us-east-1
       paas_cluster_name: spoke
       paas_cluster_region: us-east-1
       paas_cluster_service_phase: preprod
     }
     sequence: 1737371114150410800
   }
}

@nforro
Member

nforro commented Jan 20, 2025

So we are hitting this on stage as well?

@majamassarini
Member

So we are hitting this on stage as well?

yes, exactly!

@majamassarini
Member

majamassarini commented Jan 20, 2025

These are the most suspicious messages I found in the logs for the short-running worker that didn't process any task for 30 minutes.

{
   kubernetes: { … }
   level: info
   log_type: application
   message: [2025-01-20 11:01:10,750: INFO/MainProcess] missed heartbeat from celery@packit-worker-long-running-0
   openshift: { … }
}
"[2025-01-20 11:31:08,202: CRITICAL/MainProcess] Couldn't ack '47024a1a-4c14-41b4-912c-1fd82ea41028', reason:ConnectionError('Connection closed by server.')"
"  File ""/usr/lib/python3.13/site-packages/kombu/message.py"", line 126, in ack"
"  File ""/usr/lib/python3.13/site-packages/kombu/transport/virtual/base.py"", line 670, in basic_ack"
"  File ""/usr/lib/python3.13/site-packages/kombu/transport/redis.py"", line 380, in ack"
"  File ""/usr/lib/python3.13/site-packages/sentry_sdk/integrations/redis/_sync_common.py"", line 54, in sentry_patched_execute"
"  File ""/usr/lib/python3.13/site-packages/redis/client.py"", line 1530, in execute"
"  File ""/usr/lib/python3.13/site-packages/redis/retry.py"", line 65, in call_with_retry"
"  File ""/usr/lib/python3.13/site-packages/redis/client.py"", line 1532, in <lambda>"
"  File ""/usr/lib/python3.13/site-packages/redis/client.py"", line 1508, in _disconnect_raise_reset"
"  File ""/usr/lib/python3.13/site-packages/redis/retry.py"", line 62, in call_with_retry"
"  File ""/usr/lib/python3.13/site-packages/redis/client.py"", line 1531, in <lambda>"
"  File ""/usr/lib/python3.13/site-packages/redis/client.py"", line 1375, in _execute_transaction"
"  File ""/usr/lib/python3.13/site-packages/redis/client.py"", line 1462, in parse_response"
"  File ""/usr/lib/python3.13/site-packages/redis/client.py"", line 584, in parse_response"
"  File ""/usr/lib/python3.13/site-packages/redis/connection.py"", line 592, in read_response"
"  File ""/usr/lib/python3.13/site-packages/redis/_parsers/resp2.py"", line 15, in read_response"
"  File ""/usr/lib/python3.13/site-packages/redis/_parsers/resp2.py"", line 25, in _read_response"
"  File ""/usr/lib/python3.13/site-packages/redis/_parsers/socket.py"", line 115, in readline"
"  File ""/usr/lib/python3.13/site-packages/redis/_parsers/socket.py"", line 68, in _read_from_socket"
"redis.exceptions.ConnectionError: Connection closed by server."
"[2025-01-20 11:31:08,258: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection..."
"  File ""/usr/lib/python3.13/site-packages/celery/bootsteps.py"", line 116, in start"
"  File ""/usr/lib/python3.13/site-packages/celery/worker/consumer/consumer.py"", line 742, in start"
"  File ""/usr/lib/python3.13/site-packages/celery/worker/loops.py"", line 130, in synloop"
"  File ""/usr/lib/python3.13/site-packages/kombu/connection.py"", line 341, in drain_events"
"  File ""/usr/lib/python3.13/site-packages/kombu/transport/virtual/base.py"", line 997, in drain_events"
"  File ""/usr/lib/python3.13/site-packages/kombu/transport/redis.py"", line 584, in get"
"  File ""/usr/lib/python3.13/site-packages/kombu/transport/redis.py"", line 525, in _register_BRPOP"
"  File ""/usr/lib/python3.13/site-packages/kombu/transport/redis.py"", line 957, in _brpop_start"
"  File ""/usr/lib/python3.13/site-packages/redis/connection.py"", line 556, in send_command"
"  File ""/usr/lib/python3.13/site-packages/redis/connection.py"", line 529, in send_packed_command"
"  File ""/usr/lib/python3.13/site-packages/redis/connection.py"", line 521, in check_health"
"  File ""/usr/lib/python3.13/site-packages/redis/retry.py"", line 67, in call_with_retry"
"  File ""/usr/lib/python3.13/site-packages/redis/retry.py"", line 62, in call_with_retry"
"  File ""/usr/lib/python3.13/site-packages/redis/connection.py"", line 512, in _send_ping"
"redis.exceptions.ConnectionError: Bad response from PING health check"
"[2025-01-20 11:31:08,264: WARNING/MainProcess] Restoring 1 unacknowledged message(s)"
"[2025-01-20 11:31:08,280: INFO/MainProcess] Temporarily reducing the prefetch count to 1 to avoid over-fetching since 0 tasks are currently being processed."
"The prefetch count will be gradually restored to 0 as the tasks complete processing."
"[2025-01-20 11:31:08,280: DEBUG/MainProcess] | Consumer: Restarting event loop..."
"[2025-01-20 11:31:08,280: DEBUG/MainProcess] | Consumer: Restarting Heart..."
"[2025-01-20 11:31:08,282: DEBUG/MainProcess] | Consumer: Restarting Control..."
"[2025-01-20 11:31:08,282: DEBUG/MainProcess] Waiting for broadcast thread to shutdown..."
"[2025-01-20 11:31:08,293: DEBUG/MainProcess] | Consumer: Restarting Tasks..."
"[2025-01-20 11:31:08,293: DEBUG/MainProcess] Canceling task consumer..."
"[2025-01-20 11:31:08,293: DEBUG/MainProcess] | Consumer: Restarting Gossip..."
"[2025-01-20 11:31:08,293: DEBUG/MainProcess] | Consumer: Restarting Mingle..."
"[2025-01-20 11:31:08,294: DEBUG/MainProcess] | Consumer: Restarting Events..."
"[2025-01-20 11:31:08,294: DEBUG/MainProcess] | Consumer: Restarting Connection..."
"[2025-01-20 11:31:08,294: DEBUG/MainProcess] | Consumer: Starting Connection"
"[2025-01-20 11:31:08,307: INFO/MainProcess] Connected to redis://valkey:6379/0"
"[2025-01-20 11:31:08,307: DEBUG/MainProcess] ^-- substep ok"
"[2025-01-20 11:31:08,307: DEBUG/MainProcess] | Consumer: Starting Events"
"[2025-01-20 11:31:08,310: DEBUG/MainProcess] ^-- substep ok"
"[2025-01-20 11:31:08,310: DEBUG/MainProcess] | Consumer: Starting Mingle"
"[2025-01-20 11:31:08,310: INFO/MainProcess] mingle: searching for neighbors"
"[2025-01-20 11:32:02,715: INFO/MainProcess] task.steve_jobs.process_message[8c91f702-86d2-4496-bad4-964d18c4897a] Pagure PR comment event, topic: org.fedoraproject.prod.pagure.pull-request.comment.added"

And this is the strange log from the long-running worker that did not respond to the heartbeat check.

"[2025-01-20 10:59:02,478: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] Removing contents of the PV."
"[2025-01-20 10:59:02,478: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] Running handler <packit_service.worker.handlers.distgit.SyncFromDownstream object at 0x7ff0bb29b7a0> for sync_from_downstream"
"[2025-01-20 10:59:02,479: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] Instantiation of the repository cache at /repository-cache. New projects will not be added."
"[2025-01-20 10:59:02,479: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] Attributes requested: repo_name, namespace, git_repo, full_name"
"[2025-01-20 10:59:02,479: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] Attributes requested not to be calculated: "
"[2025-01-20 10:59:02,479: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] Transitive dependencies: repo_name => set(), namespace => set(), git_repo => set(), full_name => set(), git_project => set(), git_url => set(), working_dir => set(), git_service => set()"
"[2025-01-20 10:59:02,480: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] To-calculate set: {'full_name', 'git_url', 'git_repo', 'git_service', 'repo_name', 'namespace'}"
"[2025-01-20 10:59:02,698: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] Parsed remote url 'https://src.fedoraproject.org/rpms/packit.git' from the project PagureProject(namespace=""rpms"", repo=""packit"")."
"[2025-01-20 10:59:02,699: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] `working_dir` is set and `git_repo` is not: let's discover..."
"[2025-01-20 10:59:02,699: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] Cloning repo https://src.fedoraproject.org/rpms/packit.git -> /tmp/sandcastle/local-project using repository cache at /repository-cache"
"[2025-01-20 10:59:02,699: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] Repositories in the cache (0 project(s)):"
""
"[2025-01-20 10:59:02,700: INFO/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] git clone -v --tags -- https://src.fedoraproject.org/rpms/packit.git /tmp/sandcastle/local-project"
"[2025-01-20 10:59:02,700: DEBUG/ForkPoolWorker-1] TaskName.sync_from_downstream[39a489f0-701e-4fcf-92b3-9af344b7be5a] Popen(['git', 'clone', '-v', '--tags', '--', 'https://src.fedoraproject.org/rpms/packit.git', '/tmp/sandcastle/local-project'], cwd=/src, stdin=None, shell=False, universal_newlines=True)"
"[2025-01-20 11:32:42,453: INFO/ForkPoolWorker-1] [10814] 1737372762.453006: ccselect module realm chose cache DIR::/home/packit/kerberos/tkt with client principal [email protected] for server principal HTTP/[email protected]"
"[2025-01-20 11:32:42,453: INFO/ForkPoolWorker-1] [10814] 1737372762.453007: Getting credentials [email protected] -> HTTP/koji.fedoraproject.org@ using ccache DIR::/home/packit/kerberos/tkt"
"[2025-01-20 11:32:42,453: INFO/ForkPoolWorker-1] [10814] 1737372762.453008: Retrieving [email protected] -> krb5_ccache_conf_data/start_realm@X-CACHECONF: from DIR::/home/packit/kerberos/tkt with result: -1765328243/Matching credential not found (filename: /home/packit/kerberos/tkt)"
"[2025-01-20 11:32:42,453: INFO/ForkPoolWorker-1] [10814] 1737372762.453009: Retrieving [email protected] -> HTTP/koji.fedoraproject.org@ from DIR::/home/packit/kerberos/tkt with result: -1765328243/Matching credential not found (filename: /home/packit/kerberos/tkt)"
"[2025-01-20 11:32:42,453: INFO/ForkPoolWorker-1] [10814] 1737372762.453010: Retrying [email protected] -> HTTP/[email protected] with result: -1765328243/Matching credential not found (filename: /home/packit/kerberos/tkt)"
"[2025-01-20 11:32:42,453: INFO/ForkPoolWorker-1] [10814] 1737372762.453011: Retrieving [email protected] -> krbtgt/[email protected] from DIR::/home/packit/kerberos/tkt with result: 0/Success"
"[2025-01-20 11:32:42,453: INFO/ForkPoolWorker-1] [10814] 1737372762.453012: Starting with TGT for client realm: [email protected] -> krbtgt/[email protected]"
"[2025-01-20 11:32:42,453: INFO/ForkPoolWorker-1] [10814] 1737372762.453013: Requesting tickets for HTTP/[email protected], referrals on"
"[2025-01-20 11:32:42,454: INFO/ForkPoolWorker-1] [10814] 1737372762.453014: Generated subkey for TGS request: aes256-cts/E7EA"
"[2025-01-20 11:32:42,454: INFO/ForkPoolWorker-1] [10814] 1737372762.453015: etypes requested in TGS request: aes256-sha2, aes128-sha2, aes256-cts, aes128-cts, camellia256-cts, camellia128-cts"
"[2025-01-20 11:32:42,454: INFO/ForkPoolWorker-1] [10814] 1737372762.453017: Encoding request body and padata into FAST request"
"[2025-01-20 11:32:42,454: INFO/ForkPoolWorker-1] [10814] 1737372762.453018: Sending request (1848 bytes) to FEDORAPROJECT.ORG"
"[2025-01-20 11:32:42,454: INFO/ForkPoolWorker-1] [10814] 1737372762.453019: Resolving hostname id.fedoraproject.org"
"[2025-01-20 11:32:42,476: INFO/ForkPoolWorker-1] [10814] 1737372762.453020: TLS certificate name matched ""id.fedoraproject.org"""
"[2025-01-20 11:32:42,476: INFO/ForkPoolWorker-1] [10814] 1737372762.453021: Sending HTTPS request to https 38.145.60.21:443"
"[2025-01-20 11:32:42,653: INFO/ForkPoolWorker-1] [10814] 1737372762.453022: Received answer (1891 bytes) from https 38.145.60.21:443"
"[2025-01-20 11:32:42,653: INFO/ForkPoolWorker-1] [10814] 1737372762.453023: Terminating TCP connection to https 38.145.60.21:443"
"[2025-01-20 11:32:42,653: INFO/ForkPoolWorker-1] [10814] 1737372762.453024: Sending DNS URI query for _kerberos.FEDORAPROJECT.ORG."
"[2025-01-20 11:32:42,654: INFO/ForkPoolWorker-1] [10814] 1737372762.453025: URI answer: 10 1 ""krb5srv:m:kkdcp:https://id.fedoraproject.org/KdcProxy/"""
"[2025-01-20 11:32:42,654: INFO/ForkPoolWorker-1] [10814] 1737372762.453026: Response was from primary KDC"
"[2025-01-20 11:32:42,654: INFO/ForkPoolWorker-1] [10814] 1737372762.453027: Decoding FAST response"
"[2025-01-20 11:32:42,654: INFO/ForkPoolWorker-1] [10814] 1737372762.453028: FAST reply key: aes256-cts/61AD"
"[2025-01-20 11:32:42,654: INFO/ForkPoolWorker-1] [10814] 1737372762.453029: TGS reply is for [email protected] -> HTTP/[email protected] with session key aes256-cts/5E5B"
"[2025-01-20 11:32:42,654: INFO/ForkPoolWorker-1] [10814] 1737372762.453030: TGS request result: 0/Success"
"[2025-01-20 11:32:42,654: INFO/ForkPoolWorker-1] [10814] 1737372762.453031: Received creds for desired service HTTP/[email protected]"
"[2025-01-20 11:32:42,655: INFO/ForkPoolWorker-1] [10814] 1737372762.453032: Storing [email protected] -> HTTP/koji.fedoraproject.org@ in DIR::/home/packit/kerberos/tkt"
"[2025-01-20 11:32:42,655: INFO/ForkPoolWorker-1] [10814] 1737372762.453033: Creating authenticator for [email protected] -> HTTP/koji.fedoraproject.org@, seqnum 61862160, subkey aes256-cts/4F03, session key aes256-cts/5E5B"
"[2025-01-20 11:32:43,265: INFO/ForkPoolWorker-1] Using packit.spec"
"[2025-01-20 11:32:43,295: INFO/ForkPoolWorker-1] Building packit-1.0.0-1.fc42 for f42-build-side-103953"
"[2025-01-20 11:32:43,307: INFO/ForkPoolWorker-1] Created task: 128206838"
"[2025-01-20 11:32:43,307: INFO/ForkPoolWorker-1] Task info: https://koji.fedoraproject.org/koji/taskinfo?taskID=128206838"
"[2025-01-20 11:32:43,450: INFO/ForkPoolWorker-1] TaskName.downstream_koji_build[03695bb7-019d-4a18-a65d-cc23f6eb6b39] Cleaning up the mess."

Shouldn't sync_from_downstream be disabled?

@nforro
Member

nforro commented Jan 20, 2025

Shouldn't sync_from_downstream be disabled?

I thought we got rid of it in the code, but apparently not. It's still enabled in packit and ogr config though.

@lbarcziova self-assigned this on Jan 20, 2025
@lbarcziova moved this from refined to in-progress in Packit Kanban Board on Jan 20, 2025