While processing a book (English, standard voice, CPU-only) on an Intel NUC via Docker, I got the following error:
```
Processing 23.99%: : 889/3701
5;
Antonio de Herrera y Tordesillas, Historia General, 2: 35; Charles Gibson, Spain in America, 141–142;
6;
Joseph de Acosta, The Natural and Moral History of the Indies,
Sentence: 1: 160;
For specific references on depopulation see Antonio Vazquez de Espinosa, Compendium and Description of the West Indies, paragraphs 98, 102, 115, 271, 279, 334, 339, 695, 699, 934,
Traceback (most recent call last):
  File "/home/user/app/lib/functions.py", line 596, in convert_sentence_to_audio
    params['tts'].tts_to_file(
  File "/usr/local/lib/python3.10/site-packages/TTS/api.py", line 366, in tts_to_file
    wav = self.tts(
  File "/usr/local/lib/python3.10/site-packages/TTS/api.py", line 312, in tts
    wav = self.synthesizer.tts(
  File "/usr/local/lib/python3.10/site-packages/TTS/utils/synthesizer.py", line 406, in tts
    outputs = self.tts_model.synthesize(
  File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 410, in synthesize
    return self.full_inference(text, speaker_wav, language, **settings)
  File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 479, in full_inference
    return self.inference(
  File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 528, in inference
    text_tokens.shape[-1] < self.args.gpt_max_text_tokens
AssertionError: ❗ XTTS can only generate text with a maximum of 400 tokens.
Caught DependencyError: ❗ XTTS can only generate text with a maximum of 400 tokens.
Processing 24.02%: : 890/3701
Traceback (most recent call last):
  File "/home/user/app/lib/functions.py", line 596, in convert_sentence_to_audio
    params['tts'].tts_to_file(
  File "/usr/local/lib/python3.10/site-packages/TTS/api.py", line 366, in tts_to_file
    wav = self.tts(
  File "/usr/local/lib/python3.10/site-packages/TTS/api.py", line 312, in tts
    wav = self.synthesizer.tts(
  File "/usr/local/lib/python3.10/site-packages/TTS/utils/synthesizer.py", line 406, in tts
    outputs = self.tts_model.synthesize(
  File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 410, in synthesize
    return self.full_inference(text, speaker_wav, language, **settings)
  File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 479, in full_inference
    return self.inference(
  File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 528, in inference
    text_tokens.shape[-1] < self.args.gpt_max_text_tokens
AssertionError: ❗ XTTS can only generate text with a maximum of 400 tokens.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/app/lib/functions.py", line 542, in convert_chapters_to_audio
    if convert_sentence_to_audio(params, session):
  File "/home/user/app/lib/functions.py", line 615, in convert_sentence_to_audio
    raise DependencyError(e)
lib.functions.DependencyError: ❗ XTTS can only generate text with a maximum of 400 tokens.
Caught DependencyError: ❗ XTTS can only generate text with a maximum of 400 tokens.
convert_ebook() Exception: ❗ XTTS can only generate text with a maximum of 400 tokens.
```
Processing stopped and I lost the book...
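
From the log, the failing "sentence" is a long citation block, so a single call receives more text than XTTS's 400-token GPT limit. As a rough illustration of a possible workaround (not this app's actual code), a pre-chunking pass like the sketch below would keep each `tts_to_file` call under the limit; the 200-character cap is a crude stand-in for a real token count, and the model name, `voice.wav` path, and output file names are placeholders.

```python
from TTS.api import TTS

MAX_CHARS = 200  # heuristic; XTTS enforces 400 *tokens*, not characters

def split_sentence(text, max_chars=MAX_CHARS):
    """Greedily split text on whitespace into chunks no longer than max_chars."""
    words, chunks, current = text.split(), [], ""
    for word in words:
        candidate = f"{current} {word}".strip()
        if len(candidate) > max_chars and current:
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
long_sentence = "For specific references on depopulation see Antonio Vazquez de Espinosa, ..."
for i, chunk in enumerate(split_sentence(long_sentence)):
    # one file per chunk; a caller would concatenate the chunks afterwards
    tts.tts_to_file(text=chunk, speaker_wav="voice.wav", language="en",
                    file_path=f"sentence_chunk_{i}.wav")
```

A character-based split is only an approximation; ideally the app's own sentence splitter would count tokens (or simply skip and log over-long fragments) rather than aborting the whole conversion.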