Consider spans in output #35
I am not sure how controversial this would be, but it would definitely eliminate the need to merge tokens afterwards, as the algorithm would extract a start and an end for each component, in a QA fashion.
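A minimal sketch of what "QA fashion" span extraction could look like: instead of labelling every token, the model scores each position as a possible start or end of a component, and we pick the best valid pair. The function name and the toy scores are purely illustrative, not anything from this repo.

```python
# Hypothetical sketch of QA-style span extraction: the model emits a
# start score and an end score per token, and the span is the best
# (start, end) pair with start <= end.

def best_span(start_scores, end_scores):
    """Return (start, end) maximising start_scores[s] + end_scores[e] with s <= e."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_score in enumerate(start_scores):
        for e in range(s, len(end_scores)):
            score = s_score + end_scores[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Toy scores over five tokens: the component plausibly spans tokens 1..3.
start_scores = [0.1, 0.9, 0.2, 0.1, 0.0]
end_scores = [0.0, 0.1, 0.3, 0.8, 0.2]
print(best_span(start_scores, end_scores))  # (1, 3)
```

Because the output is already an index pair, no post-hoc token merging step is needed.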
I thought of these outputs as placeholders. All those scripts are not suitable for production, because they would instantiate the model every time they made a prediction, so their utility is somewhat limited. That said, I think I implemented an …
@ivyleavedtoadflax ok, that makes sense re the outputs. In terms of the instantiation of the model, is it not true that … instantiates the model, and then you could do … as many times as you wanted without having to re-instantiate the model?
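The pattern being described can be sketched as follows. The class name, the `_load_model` helper, and the `predict` method are illustrative stand-ins, not the repo's actual API: the point is only that the expensive load happens once, in the constructor, and each subsequent call reuses it.

```python
# Hedged sketch of "instantiate once, predict many times": the costly
# model load runs in __init__, and predict() only does cheap per-call work.

class SplitParser:
    def __init__(self):
        # Expensive work (loading weights, building the graph) happens once here.
        self.model = self._load_model()

    def _load_model(self):
        # Stand-in for the real model load; here just whitespace tokenisation.
        return lambda text: text.split()

    def predict(self, text):
        # Reuse the already-instantiated model on every call.
        return self.model(text)

parser = SplitParser()          # model instantiated once
for doc in ["one ref", "two refs here"]:
    print(parser.predict(doc))  # no re-instantiation per call
```

This is the usual fix for scripts that reload the model per prediction: hold the loaded model on a long-lived object and call into it repeatedly.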
Even though it is unrelated to this issue, I am almost 100% sure you are right. @ivyleavedtoadflax can confirm.
Yup, exactly right @lizgzil. That's not how I had done it in the …
In the output of `split_parser`, `split`, and `parser` we have an output of tokens and predictions. It may be worth considering a different type of output, with the spans of each reference/token rather than the tokens themselves.
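As an illustration of the proposed output (this is a sketch of the idea, not the repo's actual format): instead of returning `(token, label)` pairs, return `(start, end, label)` character spans into the original text, merging consecutive tokens that share a label so downstream code needs no token merging at all.

```python
# Illustrative conversion from per-token predictions to character spans.
# Function name and label values are hypothetical.

def tokens_to_spans(text, tokens, labels):
    """Map per-token labels to (start, end, label) character spans in `text`.

    Consecutive tokens with the same label are merged into one span.
    """
    spans = []
    cursor = 0
    for token, label in zip(tokens, labels):
        start = text.index(token, cursor)  # locate token in the raw text
        end = start + len(token)
        cursor = end
        if spans and spans[-1][2] == label:
            # Same label as the previous span: extend it instead of appending.
            spans[-1] = (spans[-1][0], end, label)
        else:
            spans.append((start, end, label))
    return spans

text = "Smith J. 2001"
tokens = ["Smith", "J.", "2001"]
labels = ["author", "author", "year"]
print(tokens_to_spans(text, tokens, labels))
# [(0, 8, 'author'), (9, 13, 'year')]
```

A span-based output like this also survives re-tokenisation, since the offsets index into the original string rather than into a particular token sequence.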