-
It is recommended to set the cache through LangChain directly, because LangChainLLMs cannot keep up with LangChain's updates.
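For reference, setting the cache through LangChain would look roughly like the sketch below. It follows LangChain's GPTCache integration (langchain.llm_cache with langchain.cache.GPTCache); the init_gptcache function, the similarity-cache setup, and the data_dir name are illustrative assumptions rather than anything prescribed in this thread.

import hashlib
import langchain
from langchain.cache import GPTCache
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache

# Build one GPTCache instance per LLM; the hashed directory name is just illustrative.
def init_gptcache(cache_obj: Cache, llm: str):
    hashed_llm = hashlib.sha256(llm.encode()).hexdigest()[:8]
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")

# Every LLM call made through LangChain now checks GPTCache first.
langchain.llm_cache = GPTCache(init_gptcache)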
-
I understand how queries and LLM responses are stored in GPTCache. I also want to cache the retrieved documents along with them. I saw a similar example here: https://gptcache.readthedocs.io/en/latest/bootcamp/langchain/question_answering.html
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from gptcache.adapter.langchain_models import LangChainLLMs

llm = LangChainLLMs(llm=OpenAI(temperature=0))
chain = load_qa_chain(llm, chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain.run(input_documents=docs, question=query)  # docs: documents returned by the vector-store similarity search
But I want to perform the similarity search on the query only and still get the answer along with the documents in the response. Is that possible with GPTCache?
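To make the question concrete, something like the sketch below is what I have in mind, using GPTCache's generic put/get adapter API to store the answer and the serialized documents together under the query. This is only an assumption about how it might be done, not something confirmed here; the JSON payload shape is illustrative, and chain, docs, and query come from the snippet above.

import json
from gptcache.adapter.api import init_similar_cache, put, get

init_similar_cache()  # similarity lookup is keyed on the query text only

# On a cache miss, run the chain once and store the answer plus its source documents.
answer = chain.run(input_documents=docs, question=query)
put(query, json.dumps({
    "answer": answer,
    "documents": [d.page_content for d in docs],
}))

# A later, semantically similar query gets both back without re-running the chain.
cached = get("What did the president say about Justice Breyer?")
if cached is not None:
    result = json.loads(cached)  # result["answer"], result["documents"]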