
[FIX] Agents stop outputting "Generating a well-informed response" #1035

Open
12 tasks
MartinMayday opened this issue Jan 7, 2025 · 3 comments
Labels
fix Fix something that isn't working as expected

Comments

@MartinMayday

MartinMayday commented Jan 7, 2025

Describe the bug

I want to continue a conversation from a few days ago, but the agent never outputs "Generating a well-informed response".
I've tried:

  • reloading the browser,
  • using another browser,
  • running /notes please summarise last session

To Reproduce

Steps to reproduce the behavior:
Find the conversation in the sidebar
Continue the conversation (user)
The assistant *thinks*, but never outputs "Generating a well-informed response"

Screenshots

Screenshot 2025-01-07 at 12 46 37

Platform

  • Server:
    • Cloud-Hosted (https://app.khoj.dev)
    • Self-Hosted Docker
    • Self-Hosted Python package
    • Self-Hosted source code
  • Client:
    • Obsidian
    • Emacs
    • [x] Desktop app
    • Web browser
    • WhatsApp
  • OS:
    • Windows
    • [x] macOS
    • Linux
    • Android
    • iOS

If self-hosted

  • Server Version [e.g. 1.0.1]:

Additional context

Is there a way to explore logs of conversations?
Is there a way to see how many MB of sync files have been uploaded? Obsidian keeps notifying me with something like "sync failed due to no more space". I don't know if it's related to the issue above.

@MartinMayday MartinMayday added the fix Fix something that isn't working as expected label Jan 7, 2025
@ProjectMoon

I am seeing this happen with ollama due to a timeout on the Khoj side; the ollama logs report that the connection was aborted. Everything in the train of thought seems to work up until the last step. Maybe the prompt is so big that the client (Khoj) is not waiting long enough?

@ProjectMoon

ProjectMoon commented Jan 7, 2025

I noticed this issue went away immediately after I updated the model in ollama and in the database to have a larger context size, so it would stop truncating.

Edit: I still get some timeout issues on subsequent messages, though, and some "Body already consumed" errors in the browser console.

@ProjectMoon

ProjectMoon commented Jan 7, 2025

Also, for me, this seems to be specific to having research mode enabled.

Edit: after further testing, this also often seems to be connected to exceeding the ollama context limit. The Khoj logs say it is truncating to the max prompt limit, but then there's a timeout anyway.

So basically, Khoj doesn't seem to give the OpenAI client enough time to ingest the prompt when the prompt is huge.
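The two workarounds discussed in this thread, raising the client's read timeout and requesting a larger context window so ollama stops truncating, can be sketched as follows. This is illustrative only, not Khoj's actual code: the endpoint and the `num_ctx` option follow ollama's documented HTTP API, while the default values chosen here (8192 tokens, 600 seconds) are assumptions for the example.

```python
import json
import urllib.request


def build_chat_request(base_url, model, messages, num_ctx=8192):
    """Build an ollama /api/chat request that asks for a larger
    context window, so a long prompt is not silently truncated.
    The num_ctx default of 8192 is an arbitrary example value."""
    payload = {
        "model": model,
        "messages": messages,
        "stream": False,
        # ollama per-request option: context window size in tokens
        "options": {"num_ctx": num_ctx},
    }
    return urllib.request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def send(request, timeout_seconds=600):
    """Send the request with a generous read timeout, giving ollama
    time to ingest a huge prompt instead of aborting the connection."""
    with urllib.request.urlopen(request, timeout=timeout_seconds) as resp:
        return json.loads(resp.read())
```

If the symptom really is the client giving up while ollama is still processing the prompt, the timeout is the knob to turn first; the larger `num_ctx` only helps when the logs show truncation, as reported above.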
