
AssertionError: XTTS can only generate text with a maximum of 400 tokens. #151

Closed
citadella opened this issue Dec 29, 2024 · 1 comment

@citadella

When processing a book (English, standard voice, CPU-only) on an Intel NUC via Docker, I got the following error:

```
Processing 23.99%: : 889/3701
5;
Antonio de Herrera y Tordesillas, Historia General, 2: 35; Charles Gibson, Spain in America, 141–142;

6;
Joseph de Acosta, The Natural and Moral History of the Indies,
Sentence: 1: 160;
For specific references on depopulation see Antonio Vazquez de Espinosa, Compendium and Description of the West Indies, paragraphs 98, 102, 115, 271, 279, 334, 339, 695, 699, 934,
Traceback (most recent call last):
File "/home/user/app/lib/functions.py", line 596, in convert_sentence_to_audio
params['tts'].tts_to_file(
File "/usr/local/lib/python3.10/site-packages/TTS/api.py", line 366, in tts_to_file
wav = self.tts(
File "/usr/local/lib/python3.10/site-packages/TTS/api.py", line 312, in tts
wav = self.synthesizer.tts(
File "/usr/local/lib/python3.10/site-packages/TTS/utils/synthesizer.py", line 406, in tts
outputs = self.tts_model.synthesize(
File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 410, in synthesize
return self.full_inference(text, speaker_wav, language, **settings)
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 479, in full_inference
return self.inference(
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 528, in inference
text_tokens.shape[-1] < self.args.gpt_max_text_tokens
AssertionError: ❗ XTTS can only generate text with a maximum of 400 tokens.
Caught DependencyError: ❗ XTTS can only generate text with a maximum of 400 tokens.
Processing 24.02%: : 890/3701
Traceback (most recent call last):
File "/home/user/app/lib/functions.py", line 596, in convert_sentence_to_audio
params['tts'].tts_to_file(
File "/usr/local/lib/python3.10/site-packages/TTS/api.py", line 366, in tts_to_file
wav = self.tts(
File "/usr/local/lib/python3.10/site-packages/TTS/api.py", line 312, in tts
wav = self.synthesizer.tts(
File "/usr/local/lib/python3.10/site-packages/TTS/utils/synthesizer.py", line 406, in tts
outputs = self.tts_model.synthesize(
File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 410, in synthesize
return self.full_inference(text, speaker_wav, language, **settings)
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 479, in full_inference
return self.inference(
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 528, in inference
text_tokens.shape[-1] < self.args.gpt_max_text_tokens
AssertionError: ❗ XTTS can only generate text with a maximum of 400 tokens.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/user/app/lib/functions.py", line 542, in convert_chapters_to_audio
if convert_sentence_to_audio(params, session):
File "/home/user/app/lib/functions.py", line 615, in convert_sentence_to_audio
raise DependencyError(e)
lib.functions.DependencyError: ❗ XTTS can only generate text with a maximum of 400 tokens.
Caught DependencyError: ❗ XTTS can only generate text with a maximum of 400 tokens.
convert_ebook() Exception: ❗ XTTS can only generate text with a maximum of 400 tokens.
```

Processing stopped and the book conversion was lost...
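
For anyone hitting this before a fix lands, here is a minimal workaround sketch (my own, not part of this project): split an over-long sentence into fragments that each stay under the 400-token XTTS limit before calling `tts_to_file`. The helper name `split_for_xtts`, the `MAX_TOKENS` constant, and the tokenizer access path `tts.synthesizer.tts_model.tokenizer` are assumptions based on the Coqui TTS code paths visible in the traceback above.

```python
import re

from TTS.api import TTS

# Assumption: gpt_max_text_tokens defaults to 400, per the assertion in the traceback.
MAX_TOKENS = 400


def split_for_xtts(text, tokenizer, language="en", max_tokens=MAX_TOKENS):
    """Greedily pack punctuation-separated fragments into chunks whose
    XTTS token count stays below max_tokens.

    Note: a single fragment that is itself over the limit would still
    trip the assert; this sketch does not hard-split such fragments.
    """
    fragments = re.split(r"(?<=[;,.])\s+", text)
    chunks, current = [], ""
    for frag in fragments:
        candidate = (current + " " + frag).strip()
        if len(tokenizer.encode(candidate, lang=language)) < max_tokens:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = frag
    if current:
        chunks.append(current)
    return chunks


# Usage sketch (file paths and voice sample are placeholders):
long_sentence = "Antonio de Herrera y Tordesillas, Historia General, 2: 35; ..."
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tokenizer = tts.synthesizer.tts_model.tokenizer  # VoiceBpeTokenizer (assumed access path)
for i, chunk in enumerate(split_for_xtts(long_sentence, tokenizer)):
    tts.tts_to_file(text=chunk, speaker_wav="voice.wav", language="en",
                    file_path=f"part_{i}.wav")
```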

@ROBERT-MCDOWELL
Collaborator

ROBERT-MCDOWELL commented Dec 29, 2024

Duplicate of #140.
It is fixed in the next git update and in release v2.1.0.
