
assert unachieved_thresholds + duplicate_thresholds + len(thresh_metrics) == self.num_thresholds #212

Open
osmosishk opened this issue Nov 7, 2024 · 0 comments

Traceback (most recent call last):
  File "./tools/test.py", line 261, in <module>
    main()
  File "./tools/test.py", line 257, in main
    print(dataset.evaluate(outputs, **eval_kwargs))
  File "/UniAD/projects/mmdet3d_plugin/datasets/nuscenes_e2e_dataset.py", line 1060, in evaluate
    results_dict = self._evaluate_single(
  File "/UniAD/projects/mmdet3d_plugin/datasets/nuscenes_e2e_dataset.py", line 1180, in _evaluate_single
    self.nusc_eval_track.main()
  File "/usr/local/lib/python3.8/dist-packages/nuscenes/eval/tracking/evaluate.py", line 205, in main
    metrics, metric_data_list = self.evaluate()
  File "/usr/local/lib/python3.8/dist-packages/nuscenes/eval/tracking/evaluate.py", line 135, in evaluate
    accumulate_class(class_name)
  File "/usr/local/lib/python3.8/dist-packages/nuscenes/eval/tracking/evaluate.py", line 131, in accumulate_class
    curr_md = curr_ev.accumulate()
  File "/usr/local/lib/python3.8/dist-packages/nuscenes/eval/tracking/algo.py", line 161, in accumulate
    assert unachieved_thresholds + duplicate_thresholds + len(thresh_metrics) == self.num_thresholds
AssertionError
/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torch.distributed.run.
Note that --use_env is set by default in torch.distributed.run.
If your script expects --local_rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  warnings.warn(
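(Aside from the main error: the FutureWarning above is about the launcher, not the assertion. As the warning itself says, scripts started via torch.distributed.run should read the local rank from the environment instead of expecting a --local_rank argument. A minimal sketch of that change, not UniAD's actual code:)

```python
import os

# torch.distributed.run sets LOCAL_RANK in the environment (--use_env is
# its default), so read the rank from os.environ instead of parsing a
# --local_rank command-line argument.
local_rank = int(os.environ.get("LOCAL_RANK", 0))  # falls back to 0 outside a distributed launch
print(f"running as local rank {local_rank}")
```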
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 633) of binary: /usr/bin/python
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 689, in run
    elastic_launch(
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================
./tools/test.py FAILED
=======================================
Root Cause:
[0]:
  time: 2024-11-07_15:01:27
  rank: 0 (local_rank: 0)
  exitcode: 1 (pid: 633)
  error_file: <N/A>
  msg: "Process failed with exitcode 1"

Other Failures:
  <NO_OTHER_FAILURES>
What could be causing this error? It occurs when I run the command:

./tools/uniad_dist_eval.sh ./projects/configs/stage1_track_map/base_track_map.py ./ckpts/uniad_base_track_map.pth 1
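For context, the assertion that fires in nuscenes/eval/tracking/algo.py is an accounting check over the configured recall thresholds: for each class, every threshold must end up either unachieved, a duplicate of another threshold's score, or actually evaluated. If those three counts do not add up (which can happen when a class has no or very few predictions, e.g. from a partial or malformed results file), the AssertionError above is raised. A toy illustration of the identity with hypothetical numbers, not the devkit's real values:

```python
# Hypothetical counts mirroring the names used in the failing assertion in
# nuscenes/eval/tracking/algo.py (the real values come from the devkit).
num_thresholds = 40               # hypothetical: total recall thresholds configured
unachieved_thresholds = 12        # thresholds the predictions never reached
duplicate_thresholds = 3          # thresholds that collapsed onto the same confidence score
thresh_metrics = [object()] * 25  # stand-in for the per-threshold metric records

# The accounting identity the devkit asserts per class: every threshold is
# accounted for exactly once.
assert unachieved_thresholds + duplicate_thresholds + len(thresh_metrics) == num_thresholds
print("threshold accounting is consistent")
```

When the evaluation input is incomplete, the three terms on the left no longer sum to num_thresholds, which matches the failure seen in the traceback.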
