Hi team,
I'm able to build successfully with this command:
bazel run //markdown/demo:demo_local_runner -- --training_type=batch
but I ran into a gRPC-related error like the following. Has this happened to you before? Any ideas on a solution?
INFO:tensorflow:loss = 1.1790854, step = 1952
I1108 21:10:20.894386 140460748154688 basic_session_run_hooks.py:262] loss = 1.1790854, step = 1952
INFO:tensorflow:loss = 1.2298307, step = 2152 (18.186 sec)
I1108 21:10:39.080899 140460748154688 basic_session_run_hooks.py:260] loss = 1.2298307, step = 2152 (18.186 sec)
I1108 21:10:47.662103 140675923150656 cpu_training.py:374] MetricsHeartBeat thread stopped
I1108 21:10:47.664155 140675923150656 cpu_training.py:1712] Try to shutdown ps 0
I1108 21:10:47.677361 140269666805568 cpu_training.py:1776] Ps 0 shutdown successfully!
I1108 21:10:47.677551 140675923150656 cpu_training.py:1718] Shutdown ps 0 successfully!
I1108 21:10:47.677928 140675923150656 cpu_training.py:1712] Try to shutdown ps 1
I1108 21:10:47.678347 140269666805568 cpu_training.py:2158] Finished ps 0.
I1108 21:10:47.678776 140269666805568 runner_utils.py:396] exit monolith_discovery!
I1108 21:10:47.684976 140603018831680 cpu_training.py:1776] Ps 1 shutdown successfully!
I1108 21:10:47.685158 140675923150656 cpu_training.py:1718] Shutdown ps 1 successfully!
I1108 21:10:47.685652 140603018831680 cpu_training.py:2158] Finished ps 1.
I1108 21:10:47.686046 140603018831680 runner_utils.py:396] exit monolith_discovery!
I1108 21:10:47.693424 140675923150656 cpu_training.py:2155] Worker End 1699477847.693356, Cost: 30.059291124343872(s)
I1108 21:10:47.693858 140675923150656 cpu_training.py:2158] Finished worker 0.
I1108 21:10:47.694137 140675923150656 runner_utils.py:396] exit monolith_discovery!
2023-11-08 21:10:48.412364: W external/org_tensorflow/tensorflow/core/distributed_runtime/rpc/grpc_worker_service.cc:514] RecvTensor cancelled for 128048405063079430
2023-11-08 21:10:48.412458: W external/org_tensorflow/tensorflow/core/distributed_runtime/rpc/grpc_worker_service.cc:514] RecvTensor cancelled for 128048405063079430
2023-11-08 21:10:48.412479: W external/org_tensorflow/tensorflow/core/distributed_runtime/rpc/grpc_worker_service.cc:514] RecvTensor cancelled for 128048405063079430
2023-11-08 21:10:48.412496: W external/org_tensorflow/tensorflow/core/distributed_runtime/rpc/grpc_worker_service.cc:514] RecvTensor cancelled for 128048405063079430
2023-11-08 21:10:48.412575: I external/org_tensorflow/tensorflow/core/distributed_runtime/worker.cc:207] Cancellation requested for RunGraph.
2023-11-08 21:10:48.412993: W external/org_tensorflow/tensorflow/core/distributed_runtime/rpc/grpc_worker_service.cc:514] RecvTensor cancelled for 128048405063079430
INFO:tensorflow:An error was raised. This may be due to a preemption in a connected worker or parameter server. The current session will be closed and a new session will be created. This error may also occur due to a gRPC failure caused by high memory or network bandwidth usage in the parameter servers. If this error occurs repeatedly, try increasing the number of parameter servers assigned to the job. Error: From /job:ps/replica:0/task:1:
Socket closed
Additional GRPC error information from remote target /job:ps/replica:0/task:1:
:{"created":"@1699477848.412335053","description":"Error received from peer ipv4:10.128.0.74:34391","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}
I1108 21:10:48.415307 140460748154688 monitored_session.py:1285] An error was raised. This may be due to a preemption in a connected worker or parameter server. The current session will be closed and a new session will be created. This error may also occur due to a gRPC failure caused by high memory or network bandwidth usage in the parameter servers. If this error occurs repeatedly, try increasing the number of parameter servers assigned to the job. Error: From /job:ps/replica:0/task:1:
Socket closed
Additional GRPC error information from remote target /job:ps/replica:0/task:1:
:{"created":"@1699477848.412335053","description":"Error received from peer ipv4:10.128.0.74:34391","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}
Related specs: