
[SPARK-48334][CORE] Release RpcEnv when SparkContext Initialization Fails #49292

Draft
wants to merge 1 commit into base: master

Conversation

soumasish

What changes were proposed in this pull request?

If SparkContext initialization fails (for example, due to invalid Spark configs), the driver's RpcEnv may already have started but never get stopped: SparkContext's _env field is still null at that point, so _env.stop() is not invoked in the catch block. As a result, the RPC server port remains bound indefinitely in that process. This PR ensures that if the RPC server was partially created before a SparkContext constructor error, the RpcEnv is properly shut down rather than left open.

Why are the changes needed?

  • Fixes a resource leak where an RPC port remains bound if SparkContext’s initialization fails.
  • Ensures consistent cleanup in error scenarios, preventing indefinite port binding for a failed SparkContext.

Does this PR introduce any user-facing change?

No. Internal fix only; no visible API changes.

How was this patch tested?

  • WIP

Was this patch authored or co-authored using generative AI tooling?

No

@github-actions github-actions bot added the CORE label Dec 25, 2024
@soumasish soumasish marked this pull request as draft December 25, 2024 21:14
@@ -721,6 +723,15 @@ class SparkContext(config: SparkConf) extends Logging {
} catch {
case NonFatal(e) =>
logError("Error initializing SparkContext.", e)
if (_env == null && envCreated != null) {
Member

How can this happen? I think the issue has to be understood thoroughly

Author

@HyukjinKwon I looked through the SparkContext creation path again; here are my thoughts. There is a window in which SparkEnv can be partially initialized (and the RPC server started) but not yet assigned to _env, so the usual shutdown logic's _env.stop() call never runs.
By stopping envCreated directly when _env is still null on exception, we ensure the RPC server (and other env resources) are shut down, which prevents the leftover port from staying bound.
Please correct me if I'm wrong.
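The window described above can be sketched without any Spark dependency. This is a minimal, illustrative model: FakeRpcEnv, lastEnv, and init are hypothetical stand-ins for RpcEnv and the SparkContext constructor, not Spark's actual API.

```scala
object InitDemo {
  // Stand-in for RpcEnv: "stopping" it models releasing the bound RPC port.
  class FakeRpcEnv {
    var stopped = false
    def stop(): Unit = { stopped = true }
  }

  // Exposed only so a caller can observe whether cleanup ran.
  var lastEnv: FakeRpcEnv = null

  // Mirrors the initialization order at issue: the env is created (port
  // bound) before it is assigned to _env, leaving a window where a failure
  // would otherwise leak the env.
  def init(failAfterEnv: Boolean): FakeRpcEnv = {
    var _env: FakeRpcEnv = null
    var envCreated: FakeRpcEnv = null
    try {
      envCreated = new FakeRpcEnv  // RPC server started, port bound
      lastEnv = envCreated
      if (failAfterEnv) throw new IllegalArgumentException("invalid conf")
      _env = envCreated            // only reached on success
      _env
    } catch {
      case e: Exception =>
        // _env is still null here, so the normal _env.stop() path cannot
        // run; stop the partially created env directly to release the port.
        if (_env == null && envCreated != null) envCreated.stop()
        throw e
    }
  }
}
```

With this sketch, InitDemo.init(failAfterEnv = true) throws, but InitDemo.lastEnv.stopped is then true, i.e. the partially created env was released; on a successful init it stays running.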
