
Making intuitive output display with pause functionality #150

Open
Hyeyoung346 opened this issue Oct 30, 2024 · 2 comments
@Hyeyoung346

Describe the feature you'd like to request

Currently, after entering input, the screen changes and the output isn't immediately visible. In ChatGPT or Claude, I can read the response as it generates and pause it; that way, if the response isn't what I was looking for, I can quickly adjust my input and try again.

Image

Describe the solution you'd like

The output process could be made more intuitive. It would be ideal if the output appeared directly on the same screen, with a pause feature added to control the response as it displays.

Image

Describe alternatives you've considered

:)

@Hyeyoung346 Hyeyoung346 added the enhancement New feature or request label Oct 30, 2024
@Hyeyoung346 Hyeyoung346 self-assigned this Oct 30, 2024
@Hyeyoung346 Hyeyoung346 moved this to 📐 At design in 🖍 Design team Oct 30, 2024
@Hyeyoung346 Hyeyoung346 moved this from 📐 At design to 🧭 Planning evaluation / ideas in 🖍 Design team Oct 30, 2024
@jancborchardt
Member

@julien-nc what do you think about feasibility here? It would certainly improve the UX to make it more responsive, but it probably depends on how long the model takes to answer?

@julien-nc
Member

Hey. This would be nice indeed. We have already thought about it but:

  • Many models don't support streamed output.
  • Nextcloud can't stream a network response, so the UI would have to poll the backend for response chunks, which is more expensive (see the sketch after this list).
  • Our task processing API is not implemented in a way that it can return intermediate results; the Assistant only becomes aware of a task's result once it has finished.
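
As a rough illustration of the polling point above, here is a minimal client-side sketch. The route, response shape, and `renderPartialOutput` callback are all assumptions for illustration, not part of the current Assistant or task processing API:

```ts
// Hypothetical sketch: poll the backend for partial task output.
// The endpoint, response shape and "finished" flag are assumptions,
// not part of the existing task processing API.
interface ChunkResponse {
    text: string      // full text generated so far
    finished: boolean // whether the task has completed
}

async function pollTaskOutput(
    taskId: number,
    onChunk: (text: string) => void,
    intervalMs = 1000,
): Promise<string> {
    // Each poll is a full request/response round trip, which is why this
    // is more expensive than a single streamed response.
    while (true) {
        const res = await fetch(`/apps/assistant/task/${taskId}/output`) // hypothetical route
        const data: ChunkResponse = await res.json()
        onChunk(data.text)
        if (data.finished) {
            return data.text
        }
        await new Promise((resolve) => setTimeout(resolve, intervalMs))
    }
}

// Usage (renderPartialOutput is a hypothetical UI callback):
// pollTaskOutput(42, (text) => renderPartialOutput(text))
```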

In short, the flexibility we have in terms of model/provider selection and assigning models to multiple task types limits the flexibility we have in how a task is run.

We can think about it, but at first glance it is difficult to readjust that tradeoff.
