A random bug #9
Comments
Replacing the …
I met this problem too. I modified int to np.int, but this error still happens.
Did you solve this problem? @liumarcus70s
Hi @JackLongKing, could you print the value of …
The information flow is as follows: …
These values seem inconsistent with utils/imutils.py#L94-L95.
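For context, here is a minimal sketch of the usual Gaussian-window bookkeeping that this kind of label-drawing utility follows. The function name draw_gaussian_patch and the exact bounds are assumptions based on the common stacked-hourglass recipe, not the repository's actual code; the point is that the destination slice of img and the source slice of g must end up with identical shapes, which only holds when pt is a finite, ordinary pixel coordinate.

```python
import numpy as np

def draw_gaussian_patch(img, pt, sigma):
    """Sketch only: paste a (6*sigma+1)^2 Gaussian centred at pt into img."""
    # Top-left / bottom-right corners of the Gaussian window, in image coords.
    ul = [int(pt[0] - 3 * sigma), int(pt[1] - 3 * sigma)]
    br = [int(pt[0] + 3 * sigma + 1), int(pt[1] + 3 * sigma + 1)]

    # If the window is completely outside the image, skip this point.
    if ul[0] >= img.shape[1] or ul[1] >= img.shape[0] or br[0] < 0 or br[1] < 0:
        return img

    # Build the Gaussian patch.
    size = 6 * sigma + 1
    x = np.arange(0, size, 1, float)
    y = x[:, np.newaxis]
    x0 = y0 = size // 2
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

    # Clip the window against the image borders; both the source and the
    # destination must be clipped by the same amount, otherwise the two
    # slices end up with different shapes.
    g_x = max(0, -ul[0]), min(br[0], img.shape[1]) - ul[0]
    g_y = max(0, -ul[1]), min(br[1], img.shape[0]) - ul[1]
    img_x = max(0, ul[0]), min(br[0], img.shape[1])
    img_y = max(0, ul[1]), min(br[1], img.shape[0])

    img[img_y[0]:img_y[1], img_x[0]:img_x[1]] = g[g_y[0]:g_y[1], g_x[0]:g_x[1]]
    return img
```

If pt arrives as a non-finite or wildly out-of-range value, the two clipped windows can end up with different widths, which would be consistent with the reported (7,7) vs (7,8) mismatch.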
Print output as follows (truncated):
pt: tensor([ 49. …
ValueError: Traceback (most recent call last): …
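One hypothetical way to capture such values right before the failure (the variable names mirror the traceback; the guard itself is only an illustration, not code from the repository):

```python
# Hypothetical debug print just before the patch assignment in draw_labelvolume:
# only report when the clipped destination and source windows disagree.
dst_shape = (img_y[1] - img_y[0], img_x[1] - img_x[0])
src_shape = (g_y[1] - g_y[0], g_x[1] - g_x[0])
if dst_shape != src_shape:
    print('pt:', pt, 'dst window:', dst_shape, 'src window:', src_shape)
```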
These values are so weird. Given these values, both …
Yes, try...except was used in utils/imutils.py, and then I met another problem, out of memory, which needs another try. My device is a Titan X (12 GB). My log is as follows, and thank you for your help! @HongwenZhang
Epoch: 1 | LR: 0.00025000 |
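For reference, a guard like the following is one way such a try...except could look; the slice names mirror the traceback, but this exact wrapper is only an illustration, not the change actually made in utils/imutils.py:

```python
# Hypothetical guard around the patch assignment (illustration only).
try:
    # Paste the Gaussian patch into the label map.
    img[img_y[0]:img_y[1], img_x[0]:img_x[1]] = g[g_y[0]:g_y[1], g_x[0]:g_x[1]]
except ValueError:
    # The clipped windows disagreed for this landmark; skip it instead of
    # crashing the whole DataLoader worker.
    print('skipping landmark with inconsistent window:', pt)
```

Note that this only skips the offending sample; it does not address why the window bounds disagree in the first place.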
The 'out of memory' error is out of the scope of this issue.
Hi everyone,
When I train the net, I get a random bug: an error occurs at a random batch.
Processing |########################## | (50860/61225) Data: 2.597300s | Batch: 3.278s | Total: 0:56:45 |
Processing |########################## | (50880/61225) Data: 0.000299s | Batch: 0.681s | Total: 0:56:46 |
Processing |########################## | (50900/61225) Data: 0.000489s | Batch: 0.691s | Total: 0:56:47 |
Processing |########################## | (50920/61225) Data: 0.000502s | Batch: 0.683s | Total: 0:56:47 |
Processing |########################## | (50940/61225) Data: 2.483688s | Batch: 3.165s | Total: 0:56:50 | ETA: 0:10:09 | LOSS vox: 0.0337; coord: 0.0034 | NME: 0.3116
Traceback (most recent call last):
File "train.py", line 281, in
main(parser.parse_args())
File "train.py", line 90, in main
run(model, train_loader, mode, criterion_vox, criterion_coord, optimizer_G, optimizer_P)
File "train.py", line 144, in run
for i, (inputs, target, meta) in enumerate(data_loader):
File "/home/jliu9/anaconda3/envs/jvcr/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 623, in next
return self._process_next_batch(batch)
File "/home/jliu9/anaconda3/envs/jvcr/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 658, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
ValueError: Traceback (most recent call last):
File "/home/jliu9/anaconda3/envs/jvcr/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/jliu9/Codes/JVCR-3Dlandmark/datasets/fa68pt3D.py", line 151, in getitem
target_j = draw_labelvolume(target_j, tpts[j] - 1, self.sigma, type=self.label_type)
File "/home/jliu9/Codes/JVCR-3Dlandmark/utils/imutils.py", line 123, in draw_labelvolume
img[img_y[0]:img_y[1], img_x[0]:img_x[1]] = g[g_y[0]:g_y[1], g_x[0]:g_x[1]]
ValueError: could not broadcast input array from shape (7,7) into shape (7,8)
So, what's the problem?
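For what it's worth, the broadcast error itself can be reproduced in isolation with plain NumPy; this tiny snippet is independent of the training code and only shows that assigning a 7x7 patch into a 7x8 destination slice produces exactly the reported message:

```python
import numpy as np

heatmap = np.zeros((64, 64))
g = np.zeros((7, 7))          # 7x7 Gaussian patch

# Destination window is 7 rows x 8 columns, source patch is 7x7:
# NumPy refuses the assignment with the same error as in the traceback.
heatmap[10:17, 20:28] = g
# ValueError: could not broadcast input array from shape (7,7) into shape (7,8)
```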