But the test PSNR seems normal.
Hi, did you get the training code running? I get the following error:

```
ModuleNotFoundError: No module named 'basicsr'
```

`basicsr` is the project folder, so why does importing it fail? It is followed by:

```
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 28610) of binary: /home/xtzg/anaconda3/envs/pytorch1.1
```

My settings: in the YAML config file I also changed shuffle to false:

```yaml
# data loader
use_shuffle: false     # true
num_worker_per_gpu: 0  # 8
batch_size_per_gpu: 1  # 8
```

And in train.sh:

```shell
python -m torch.distributed.run --nproc_per_node=1 --master_port=4321 basicsr/train.py -opt $CONFIG --launcher pytorch
```
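A `ModuleNotFoundError: No module named 'basicsr'` at launch usually means the repository root (the directory containing the `basicsr` package folder) is not on Python's module search path when `basicsr/train.py` is run as a script. This is not a fix from the maintainers, just a common workaround sketch: the helper name `ensure_repo_on_path` and its argument are hypothetical.

```python
import os
import sys

def ensure_repo_on_path(repo_root: str) -> None:
    """Prepend repo_root to sys.path so `import basicsr` resolves to the
    project folder instead of raising ModuleNotFoundError.

    Note: repo_root should be the directory that *contains* the
    `basicsr` folder, not the `basicsr` folder itself.
    """
    repo_root = os.path.abspath(repo_root)
    if repo_root not in sys.path:
        sys.path.insert(0, repo_root)

# Example: called at the top of a training entry script before any
# `import basicsr` statement, with the path adjusted to your checkout.
ensure_repo_on_path(".")
```

Equivalently, exporting `PYTHONPATH` to the repository root before running the `torch.distributed.run` command in train.sh should have the same effect.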