May I ask how many predictions the code needs to extract one black-box ANN (artificial neural network) or MLP in your paper? Suppose there are 200 neurons per layer; the total number of parameters is a lot.
Looking forward to your reply!
Thanks a lot!
This really depends on the total number of parameters in the network, and on whether you assume access to the exact class probabilities or just the predicted class labels. In some of our experiments, we achieved high extraction accuracy (>98%) with ten times fewer predictions than there were parameters in the network.
My guess is that with a much smaller number of predictions you can still get a non-trivial extraction accuracy (maybe around 80%), but we haven't tried that on deep networks.
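To make the query-budget discussion concrete, here is a minimal sketch of label-only extraction: query a black box, collect (input, predicted label) pairs, and fit a substitute model on them. The target below is a hypothetical stand-in (a fixed linear classifier), not the paper's actual networks or code, and the query count is illustrative.

```python
# Hypothetical sketch: extract a black-box classifier from predicted
# labels only, using fewer queries than the target has parameters.
import numpy as np

rng = np.random.default_rng(0)

def target_predict(x):
    # Stand-in black box: a fixed linear classifier over 20 features.
    w = np.linspace(-1.0, 1.0, 20)
    return (x @ w > 0).astype(int)

# Query budget: issue a limited number of predictions to the target.
n_queries = 500
X = rng.normal(size=(n_queries, 20))
y = target_predict(X)  # labels only, no class probabilities

# Substitute model: logistic regression fit by plain gradient descent.
w_hat = np.zeros(20)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w_hat)))
    w_hat -= 0.1 * X.T @ (p - y) / n_queries

# Extraction accuracy = agreement with the target on fresh inputs.
X_test = rng.normal(size=(5000, 20))
agree = np.mean((X_test @ w_hat > 0).astype(int) == target_predict(X_test))
print(f"agreement: {agree:.3f}")
```

The same recipe (query, label, fit) carries over to MLP targets and substitutes; only the model family and the required query budget change.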
Thanks, Florian! Actually, 90%+ extraction accuracy without any training data is surprisingly good.
Just want to make sure that I understand you correctly :D,
What's the maximum number of hidden layers in the ANNs you tested? 3, 4?
As far as I know, to make training easier, ANNs usually use local connections with shared weights across adjacent inputs (convolutional networks), so the number of unknown parameters is actually not that large. But if the extracted model (obtained by reverse engineering) is not exactly the same as the original one, could it behave differently in circumstances that were not tested?