Agents
Reasoning Assistants
There are a lot of options for a ‘reasoning assistant’ architecture. I’m looking at a very specific kind: one that is tightly coupled to a human and always on, kind of like using a flashlight in the dark. The reasoning agent sheds what light it can for the person using it.
Tight Coupling Experiments
Here are the things I’ve tried:
- human_llm: ‘wrapping’ a model with a human, with the human responsible for the response. The approach I like best so far is to have the HumanLLM take two parameters: the LLM it is wrapping, and a LangSmith trace of previous runs. In use, this means that any ‘llm.invoke’ passes first to a human, who has two options (see the sketch after this list):
  - invoke the LLM as before
  - select a result from the provided LangSmith run
- human control points: adding them as nodes in my LangGraph, so the graph pauses for human input at chosen steps (see the second sketch below)
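
Here’s a minimal sketch of the human_llm idea. It assumes the wrapped `llm` is any object exposing an `.invoke(prompt)` method (as LangChain models do) and that the LangSmith trace has already been fetched as a plain list of prior outputs; the class shape and the CLI prompts are illustrative, not a fixed API.

```python
# A minimal sketch, assuming the wrapped llm exposes .invoke(prompt) and
# the LangSmith trace has been fetched ahead of time as a list of prior
# outputs. The command-line flow is illustrative.

class HumanLLM:
    def __init__(self, llm, trace):
        self.llm = llm      # the model being wrapped
        self.trace = trace  # outputs pulled from a previous LangSmith run

    def invoke(self, prompt):
        # Every call passes through the human first.
        print(f"\nPROMPT:\n{prompt}\n")
        print("[0] invoke the wrapped LLM")
        for i, past in enumerate(self.trace, start=1):
            print(f"[{i}] reuse prior result: {past[:60]}")
        choice = int(input("choice> "))
        if choice == 0:
            return self.llm.invoke(prompt)  # option 1: call the model as before
        return self.trace[choice - 1]       # option 2: select a traced result
```

In use, `HumanLLM(chat_model, past_outputs)` can stand in wherever the bare model was used, so every downstream `invoke` routes through the person first.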
Right now I’m only using the command line for human input; I haven’t done this with a web interface yet.
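
And a minimal sketch of a human control point as a LangGraph node, assuming `langgraph` is installed. The `State` fields and node names are my own illustrations, and the control node just reads from the command line, matching how I’m using it now.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    draft: str
    approved: bool

def generate(state: State) -> dict:
    # Stand-in for the LLM step that drafts an answer.
    return {"draft": f"draft answer to: {state['question']}"}

def human_control(state: State) -> dict:
    # Human control point: the graph stops here for command-line review.
    print(f"\nDRAFT:\n{state['draft']}")
    return {"approved": input("accept? [y/n] ").strip().lower() == "y"}

builder = StateGraph(State)
builder.add_node("generate", generate)
builder.add_node("human_control", human_control)
builder.add_edge(START, "generate")
builder.add_edge("generate", "human_control")
builder.add_edge("human_control", END)
graph = builder.compile()

# result = graph.invoke({"question": "why is the sky blue?"})
```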