wasm2js aborts when allocating pages. #7126
It looks similar to #6628, but since that reporter says downgrading resolved their issue, I suspect mine is distinct.
The minimal testcase here looks like it should have worked in the past - it is so minimal that, given things used to work in general, it must have. Given that, what I would do is bisect. You should be able to bisect separately over wasm-bindgen and over wasm-opt, and only one should be needed (though you might need to try both if the first finds nothing). I see you mentioned you tried wasm-opt 105, which is from two years ago, so I would go even further back, until you find a version where it works. Both going back and bisecting take logarithmic time, so it should be practical.
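As a rough illustration of such a bisection (not part of the original comment), here is a minimal Node sketch. It assumes prebuilt Binaryen binaries downloaded side by side as ./wasm2js-&lt;version&gt;, hypothetical file names, and that the oldest version in the list is good and the newest is bad:

```js
// Hypothetical bisection driver: find the first Binaryen release whose wasm2js
// output breaks the repro. "versions" is ordered oldest..newest.
import { execSync } from "node:child_process";

const versions = ["105", "108", "110", "112", "114", "116", "118", "119"];

function isBad(version) {
  // Assumed file names; substitute the actual wasm-bindgen output.
  execSync(`./wasm2js-${version} pkg/app_bg.wasm -o pkg/app_bg.wasm.js`);
  try {
    // The repro from this issue; adapt this check if the corruption shows up
    // as wrong output rather than a nonzero exit code.
    execSync("node index.js", { stdio: "pipe" });
    return false;
  } catch {
    return true;
  }
}

let lo = 0, hi = versions.length - 1; // invariant: versions[lo] good, versions[hi] bad
while (hi - lo > 1) {
  const mid = (lo + hi) >> 1;
  if (isBad(versions[mid])) hi = mid; else lo = mid;
}
console.log(`first bad version: ${versions[hi]}`);
```

The same loop works for bisecting wasm-bindgen versions instead, by swapping the build step inside isBad.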
@mtb0x1 Edit:

```js
async function run() {
  try {
    console.log(greet("no dice"))
    for (let i = 0; i < 10; i++) {
      const inputStr = "A".repeat(i + 1).repeat(20000)
      console.log("iteration: ", i, "input length: ", inputStr.length)
      console.log("output length", greet(inputStr).length)
    }
  } catch (err) {
    console.error("Error calling wasm function:", err);
  }
}
```

And then the output is (I removed the page-allocation log lines):
It's very clear that the output doesn't actually change, so something is very wrong. If I make the input string twice as long like so

Which is just nonsense, indicating memory corruption I think.
Sounds like a memory limit/growth issue; if you test with smaller values it works. As advised by kripken, can you bisect? That would help us pinpoint the issue.
If bisection doesn't work, another option is to debug this in depth using Binaryen's instrumentation, comparing the wasm and wasm2js builds. Specifically, we can instrument the builds so that every single read and write to memory, locals, etc. is logged out. Assuming everything is deterministic, and that the wasm build works while the wasm2js build errors, comparing the logs will find the first divergence and pinpoint the bug. To do this, the process is something like:
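As a rough illustration only (not the exact steps from that comment), assuming Binaryen's --instrument-memory and --instrument-locals passes and hypothetical file names, the wasm side of such a comparison might be wired up like this; the wasm2js build would need equivalent logging stubs in its generated JS:

```js
// Hypothetical harness: wrap an import object so that every hook added by
// wasm-opt's --instrument-memory / --instrument-locals passes logs its
// arguments before passing the value through. The "env" module name and the
// pass-through convention (hooks return their last argument) are assumptions
// to verify against the instrumented module's import section.
import { readFileSync, appendFileSync } from "node:fs";

// e.g. produced by: wasm-opt --instrument-memory --instrument-locals in.wasm -o instrumented.wasm
const module = new WebAssembly.Module(readFileSync("./pkg/instrumented.wasm"));

function withLoggingHooks(imports, logFile) {
  const env = { ...(imports.env || {}) };
  for (const imp of WebAssembly.Module.imports(module)) {
    if (imp.module === "env" && imp.kind === "function" && !(imp.name in env)) {
      env[imp.name] = (...args) => {
        appendFileSync(logFile, `${imp.name} ${args.join(" ")}\n`);
        return args[args.length - 1]; // assumed pass-through; check the real hook signatures
      };
    }
  }
  return { ...imports, env };
}

// "imports" would normally be the wasm-bindgen glue's import object; this
// standalone sketch only shows how the instrumentation hooks get logged.
const instance = new WebAssembly.Instance(module, withLoggingHooks({}, "wasm-trace.log"));
```

Running the same repro against the instrumented wasm build and an equivalently instrumented wasm2js build, then diffing the two trace files, should expose the first read or write where they diverge.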
I'm running into some pretty serious issues using wasm2js. It looks to me like the entire wasm memory is getting corrupted. Sometimes they result in clean-ish aborts like this:
```
➜ node index.js
allocating pages: 4
allocating pages: 4
allocating pages: 7
first done
allocating pages: 13
Error calling wasm function: Error: abort
    at wasm2js_trap (file:///home/rmstorm/Documents/rust/wasm2js-memory-problem/pkg/wasm2js_memory_problem_bg.wasm.js:25:33)
    at __rust_start_panic (file:///home/rmstorm/Documents/rust/wasm2js-memory-problem/pkg/wasm2js_memory_problem_bg.wasm.js:4319:3)
    at rust_panic (file:///home/rmstorm/Documents/rust/wasm2js-memory-problem/pkg/wasm2js_memory_problem_bg.wasm.js:4218:3)
    at std__panicking__rust_panic_with_hook__he5c089ac7305193e
```
But sometimes they result in just complete corruption and I get output like:
```
I%@�<#�f�[email protected]�n~I%@��r�f�M@q��$~I%@�><�f�M@�a�}
```
I've made a repo with a minimal reproducible example. The wasm file that I'm using is generated using wasm-bindgen; I have tested with versions 0.2.90, 0.2.91 and 0.2.92. I have also tested with wasm2js version 105 (the one I originally had installed) and the latest version (119). It occurs in all situations!