
How to use the semantic ground-truth image segmentation results in HM3D datasets? #2125

Open
zhai-create opened this issue Dec 29, 2024 · 3 comments


@zhai-create

Habitat-Lab and Habitat-Sim versions

Habitat-Lab: v0.3.0
Habitat-Sim: v0.3.0

When I retrieve the ground-truth semantic segmentation with semantic_img = observations["semantic"], the output is an all-zero matrix, which visualizes as a completely dark image. How can I fix this?

❓ Questions and Help
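A quick way to narrow this down is to inspect the observation itself before suspecting the renderer. The sketch below is a minimal, NumPy-only diagnostic, assuming the semantic observation is a 2-D integer array of per-pixel object IDs (the usual shape of Habitat's semantic sensor output); one common cause of an all-zero result is loading scenes from a dataset download that lacks semantic annotations, so confirming whether any non-zero IDs exist at all is a useful first step. The function names here (check_semantic_obs, colorize_ids) are hypothetical helpers, not part of Habitat's API.

```python
import numpy as np

def check_semantic_obs(semantic_img: np.ndarray) -> dict:
    """Summarize a semantic observation to diagnose all-zero output.

    Returns the number of distinct IDs, whether the frame is entirely
    zero, and a per-ID pixel count histogram.
    """
    ids, counts = np.unique(semantic_img, return_counts=True)
    return {
        "num_ids": int(len(ids)),
        "all_zero": bool(len(ids) == 1 and ids[0] == 0),
        "id_histogram": dict(zip(ids.tolist(), counts.tolist())),
    }

def colorize_ids(semantic_img: np.ndarray, seed: int = 0) -> np.ndarray:
    """Map integer object IDs to random RGB colors for visualization.

    ID 0 (typically unlabeled/background) stays black, so a frame that
    renders fully black after this step really does contain only zeros.
    """
    rng = np.random.default_rng(seed)
    max_id = int(semantic_img.max()) + 1
    palette = rng.integers(0, 256, size=(max_id, 3), dtype=np.uint8)
    palette[0] = 0  # keep ID 0 black
    return palette[semantic_img]  # shape (H, W) -> (H, W, 3)
```

If check_semantic_obs reports all_zero, the problem is upstream of visualization: the scene was likely loaded without semantic annotations, so it is worth verifying that the annotated variant of the HM3D scenes (and its matching scene-dataset config) is the one being loaded.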

@zhai-create
Author

Has anyone encountered a similar problem? I would really appreciate any help, thank you.

@zhai-create
Author

It seems that in simulator.py, after tgt.read_frame_object_id(self.view) is called, self.view is filled entirely with zeros. Why does this happen?

@zhai-create
Author

I still haven't figured this out and would appreciate any help.
