The way I do it in EVA or MMSegmentation is that I use 4 classes (1 background + 3 foreground), set reduce_zero_label=False, and pass ignore_index=0 to every loss function (CE and Dice in this case). With these steps my training works in other libraries, so is this approach generally wrong, given that I cannot train properly in ViT-Adapter with it? I see no errors, just extremely bad predictions on a very easy task. A sketch of the setup is below.
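For reference, here is a minimal sketch of the kind of MMSegmentation-style config fragment described above, assuming a standard decode head with a list of losses. The dataset type and omitted fields are placeholders, not the actual config:

```python
# Hypothetical config fragment reproducing the described setup:
# 4 classes (index 0 = background), labels kept as-is, and the
# background index excluded from every loss.
data = dict(
    train=dict(
        type='CustomDataset',        # placeholder dataset type
        reduce_zero_label=False,     # keep label 0; do not shift labels down
        # ... other dataset fields omitted
    ))

model = dict(
    decode_head=dict(
        num_classes=4,               # 1 background + 3 foreground classes
        ignore_index=0,              # background excluded from the loss
        loss_decode=[
            dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            dict(type='DiceLoss', loss_weight=1.0),
        ]))
```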
I am also confused about how stuff + things work: when I set 1 stuff class and 3 thing classes, num_classes becomes 5 instead of 4?
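One possible explanation (an assumption, not confirmed from this repo's code): Mask2Former-style heads typically derive num_classes as things + stuff and then add one extra "no object" logit to the classification branch, which would produce 5 output channels for 4 semantic classes. A sketch with illustrative values:

```python
# Illustrative Mask2Former-style head config. The "+1 no-object logit"
# behavior is an assumption about DETR-family heads, not a statement
# about ViT-Adapter specifically.
num_things_classes = 3
num_stuff_classes = 1
num_classes = num_things_classes + num_stuff_classes  # 4 semantic classes

model = dict(
    decode_head=dict(
        type='Mask2FormerHead',
        num_things_classes=num_things_classes,
        num_stuff_classes=num_stuff_classes,
        num_classes=num_classes,
        # The classification branch usually outputs num_classes + 1 logits,
        # the extra one being the "no object" / void class -- so seeing
        # 5 channels for 4 classes may be expected behavior.
    ))
```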
Have you solved this problem? I hit a similar issue with semantic segmentation: training works correctly on DeiT, but with BEiT + Mask2Former the predictions for every category are close to 0.
@czczup Please help