How to view and update past graph state¶
Once you start checkpointing your graphs, you can easily get or update the state of the agent at any point in time. This permits a few things:
- You can surface a state during an interrupt to a user to let them accept an action.
- You can rewind the graph to reproduce or avoid issues.
- You can modify the state to embed your agent into a larger system, or to let the user better control its actions.
The key methods used for this functionality are:
- getState: fetch the values from the target config
- updateState: apply the given values to the target state
Note: this requires passing in a checkpointer.
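For orientation, here is a hedged preview of both calls (it assumes the compiled graph, checkpointer, and RunnableConfig that are built later in this notebook, and assumes the updateState overload that takes a partial-state map):
// Fetch the latest checkpoint (a StateSnapshot) for the thread identified by the config
var snapshot = graph.getState( runnableConfig );
// Apply a partial-state patch to that checkpoint; returns the config of the new checkpoint
var updatedConfig = graph.updateState( runnableConfig, Map.of( "messages", UserMessage.from("a corrected input") ) );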
This works for StateGraph.
Below is an example.
var userHomeDir = System.getProperty("user.home");
var localRepoUrl = "file://" + userHomeDir + "/.m2/repository/";
var langchain4jVersion = "1.0.1";
var langchain4jBeta = "1.0.1-beta6";
var langgraph4jVersion = "1.6-SNAPSHOT";
Remove installed packages from the Jupyter cache
%%bash
rm -rf \{userHomeDir}/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/org/bsc/langgraph4j
%dependency /add-repo local \{localRepoUrl} release|never snapshot|always
// %dependency /list-repos
%dependency /add org.slf4j:slf4j-jdk14:2.0.9
%dependency /add org.bsc.langgraph4j:langgraph4j-core:\{langgraph4jVersion}
%dependency /add org.bsc.langgraph4j:langgraph4j-langchain4j:\{langgraph4jVersion}
%dependency /add dev.langchain4j:langchain4j:\{langchain4jVersion}
%dependency /add dev.langchain4j:langchain4j-open-ai:\{langchain4jVersion}
%dependency /add dev.langchain4j:langchain4j-ollama:\{langchain4jBeta}
%dependency /resolve
Repository local url: file:///Users/bsorrentino/.m2/repository/ added.
Adding dependency org.slf4j:slf4j-jdk14:2.0.9
Adding dependency org.bsc.langgraph4j:langgraph4j-core:1.6-SNAPSHOT
Adding dependency org.bsc.langgraph4j:langgraph4j-langchain4j:1.6-SNAPSHOT
Adding dependency dev.langchain4j:langchain4j:1.0.1
Adding dependency dev.langchain4j:langchain4j-open-ai:1.0.1
Adding dependency dev.langchain4j:langchain4j-ollama:1.0.1-beta6
Solving dependencies
Resolved artifacts count: 17
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/org/slf4j/slf4j-jdk14/2.0.9/slf4j-jdk14-2.0.9.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/org/slf4j/slf4j-api/2.0.9/slf4j-api-2.0.9.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/org/bsc/langgraph4j/langgraph4j-core/1.6-SNAPSHOT/langgraph4j-core-1.6-SNAPSHOT.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/org/bsc/async/async-generator/3.2.2/async-generator-3.2.2.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/org/bsc/langgraph4j/langgraph4j-langchain4j/1.6-SNAPSHOT/langgraph4j-langchain4j-1.6-SNAPSHOT.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/dev/langchain4j/langchain4j/1.0.1/langchain4j-1.0.1.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/dev/langchain4j/langchain4j-core/1.0.1/langchain4j-core-1.0.1.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/org/jspecify/jspecify/1.0.0/jspecify-1.0.0.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/com/fasterxml/jackson/core/jackson-annotations/2.19.0/jackson-annotations-2.19.0.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/com/fasterxml/jackson/core/jackson-core/2.19.0/jackson-core-2.19.0.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/com/fasterxml/jackson/core/jackson-databind/2.19.0/jackson-databind-2.19.0.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/org/apache/opennlp/opennlp-tools/2.5.4/opennlp-tools-2.5.4.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/dev/langchain4j/langchain4j-open-ai/1.0.1/langchain4j-open-ai-1.0.1.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/dev/langchain4j/langchain4j-http-client/1.0.1/langchain4j-http-client-1.0.1.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/dev/langchain4j/langchain4j-http-client-jdk/1.0.1/langchain4j-http-client-jdk-1.0.1.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/com/knuddels/jtokkit/1.1.0/jtokkit-1.1.0.jar
Add to classpath: /Users/bsorrentino/Library/Jupyter/kernels/rapaio-jupyter-kernel/mima_cache/dev/langchain4j/langchain4j-ollama/1.0.1-beta6/langchain4j-ollama-1.0.1-beta6.jar
Initialize logger
try( var file = new java.io.FileInputStream("./logging.properties")) {
java.util.logging.LogManager.getLogManager().readConfiguration( file );
}
var log = org.slf4j.LoggerFactory.getLogger("time-travel");
Define the state¶
State is an (immutable) data class, inheriting from the prebuilt MessagesState, shared with all nodes in our graph. A state is essentially a wrapper around a Map<String,Object> that provides some enhancements:
- Schema (optional): a Map<String,Channel> where each Channel describes the behaviour of the related property.
- value() accessors that inspect the map and return an Optional of the contained value, cast to the required type.
import org.bsc.langgraph4j.prebuilt.MessagesState;
import org.bsc.langgraph4j.state.Channel;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
public class State extends MessagesState<ChatMessage> {
public State(Map<String, Object> initData) {
super( initData );
}
}
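As a quick illustration (a hypothetical snippet, not part of the original notebook), the inherited accessors can be exercised on a hand-built instance:
import dev.langchain4j.data.message.UserMessage;
var demo = new State( Map.of( "messages", List.of( UserMessage.from("hello") ) ) );
demo.messages();      // the List<ChatMessage> stored under the "messages" key
demo.lastMessage();   // Optional<ChatMessage> holding the most recent message
demo.<List<ChatMessage>>value("messages"); // the generic, typed Optional accessor described above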
Set up the tools¶
Using langchain4j, we will first define the tools we want to use. For this simple example, we will create a placeholder search engine. However, it is really easy to create your own tools - see the documentation here on how to do that.
import dev.langchain4j.agent.tool.P;
import dev.langchain4j.agent.tool.Tool;
import java.util.Optional;
import static java.lang.String.format;
public class SearchTool {
@Tool("Use to surf the web, fetch current information, check the weather, and retrieve other information.")
String execQuery(@P("The query to use in your search.") String query) {
// This is a placeholder for the actual implementation
return "Cold, with a low of 13 degrees";
}
}
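As a quick sanity check (hypothetical, not in the original notebook), the tool method can also be invoked directly, bypassing the model entirely:
// Direct call to the placeholder tool: always returns the canned answer
new SearchTool().execQuery("What's the weather in London?"); // -> "Cold, with a low of 13 degrees"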
Set up the model¶
Now we will load the chat model.
- It should work with messages. We will represent all agent state in the form of messages, so it needs to be able to work well with them.
- It should work with tool calling, meaning it can return function arguments in its response.
Note:
These model requirements are not general requirements for using LangGraph4j - they are just requirements for this one example.
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.agent.tool.ToolSpecifications;
var model = OllamaChatModel.builder()
.modelName( "qwen2.5:7b" )
.baseUrl("http://localhost:11434")
.logResponses(true)
.maxRetries(2)
.temperature(0.0)
.build();
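The langchain4j-open-ai dependency is already on the classpath, so as an alternative (a sketch assuming an OPENAI_API_KEY environment variable; the model name is just an example) you could build a hosted model instead:
import dev.langchain4j.model.openai.OpenAiChatModel;
var model = OpenAiChatModel.builder()
    .apiKey( System.getenv("OPENAI_API_KEY") ) // assumes the key is exported in your shell
    .modelName( "gpt-4o-mini" )                // example model name
    .temperature(0.0)
    .build();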
Test function calling¶
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.agent.tool.ToolSpecifications;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.tool.DefaultToolExecutor;
import org.bsc.langgraph4j.langchain4j.tool.LC4jToolService;
import org.bsc.langgraph4j.langchain4j.serializer.std.LC4jStateSerializer;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.request.ChatRequestParameters;
var toolService = LC4jToolService.builder()
.toolsFromObject( new SearchTool() )
.build();
var tools = toolService.toolSpecifications();
UserMessage userMessage = UserMessage.from("What will the weather be like in London tomorrow?");
var params = ChatRequestParameters.builder()
.toolSpecifications( tools )
.build();
var request = ChatRequest.builder()
.parameters( params )
.messages( userMessage )
.build();
var response = model.chat( request );
var result = toolService.execute( response.aiMessage().toolExecutionRequests() );
result;
execute: execQuery
Optional[ToolExecutionResultMessage { id = null toolName = "execQuery" text = "Cold, with a low of 13 degrees" }]
Define the graph¶
We can now put it all together. We will run it first without a checkpointer:
import static org.bsc.langgraph4j.StateGraph.START;
import static org.bsc.langgraph4j.StateGraph.END;
import static org.bsc.langgraph4j.action.AsyncEdgeAction.edge_async;
import static org.bsc.langgraph4j.action.AsyncNodeAction.node_async;
import org.bsc.langgraph4j.StateGraph;
import org.bsc.langgraph4j.action.EdgeAction;
import org.bsc.langgraph4j.action.NodeAction;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.service.tool.DefaultToolExecutor;
import org.bsc.langgraph4j.checkpoint.MemorySaver;
import org.bsc.langgraph4j.CompileConfig;
import java.util.stream.Collectors;
// Route Message
EdgeAction<State> routeMessage = state -> {
var lastMessage = state.lastMessage();
if ( !lastMessage.isPresent()) return "exit";
if( lastMessage.get() instanceof AiMessage message ) {
// If tools should be called
if ( message.hasToolExecutionRequests() ) return "next";
}
// If no tools are called, we can finish (respond to the user)
return "exit";
};
// Call Model
NodeAction<State> callModel = state -> {
var tools = ToolSpecifications.toolSpecificationsFrom( SearchTool.class );
var params = ChatRequestParameters.builder()
.toolSpecifications( tools )
.build();
var request = ChatRequest.builder()
.parameters( params )
.messages( state.messages() )
.build();
var response = model.chat( request );
return Map.of( "messages", response.aiMessage() );
};
final var toolService = LC4jToolService.builder()
.toolsFromObject(new SearchTool())
.build();
// Invoke Tool
NodeAction<State> invokeTool = state -> {
var lastMessage = (AiMessage)state.lastMessage()
.orElseThrow( () -> ( new IllegalStateException( "last message not found!")) );
var result = toolService.execute( lastMessage.toolExecutionRequests() )
.orElseThrow( () -> ( new IllegalStateException( "tool execution failed!")));
return Map.of( "messages", result );
};
var stateSerializer = new LC4jStateSerializer<>(State::new);
// Define Graph
var workflow = new StateGraph<State>(State.SCHEMA, stateSerializer)
.addNode("agent", node_async(callModel) )
.addNode("tools", node_async(invokeTool) )
.addEdge(START, "agent")
.addConditionalEdges("agent", edge_async(routeMessage), Map.of( "next", "tools", "exit", END ))
.addEdge("tools", "agent");
// Here we only save in-memory
var memory = new MemorySaver();
var compileConfig = CompileConfig.builder()
.checkpointSaver(memory)
.releaseThread(false) // DON'T release thread after completion
.build();
var graph = workflow.compile(compileConfig);
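Optionally - a hedged sketch, assuming langgraph4j's GraphRepresentation API - you can render the compiled graph as a diagram to double-check the wiring:
import org.bsc.langgraph4j.GraphRepresentation;
// Print a Mermaid diagram of the compiled graph (PLANTUML is also supported)
var representation = graph.getGraph( GraphRepresentation.Type.MERMAID, "time travel", false );
System.out.println( representation.content() );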
Interacting with the Agent¶
We can now interact with the agent. Between interactions you can get and update state.
import org.bsc.langgraph4j.RunnableConfig;
var runnableConfig = RunnableConfig.builder()
.threadId("conversation-num-1" )
.build();
Map<String,Object> inputs = Map.of( "messages", UserMessage.from("Hi I'm Bartolo.") );
var result = graph.stream( inputs, runnableConfig );
for( var r : result ) {
System.out.println( r.node() );
System.out.println( r.state() );
}
START
__START__ { messages=[ UserMessage { name = null contents = [TextContent { text = "Hi I'm Bartolo." }] } ] }
agent { messages=[ UserMessage { name = null contents = [TextContent { text = "Hi I'm Bartolo." }] } AiMessage { text = "Hello Bartolo! Nice to meet you. How can I assist you today?" toolExecutionRequests = [] } ] }
__END__ { messages=[ UserMessage { name = null contents = [TextContent { text = "Hi I'm Bartolo." }] } AiMessage { text = "Hello Bartolo! Nice to meet you. How can I assist you today?" toolExecutionRequests = [] } ] }
Here you can see the "agent" node ran, and then our edge returned __END__, so the graph stopped execution there.
Let's check the current graph state.
import org.bsc.langgraph4j.checkpoint.Checkpoint;
var checkpoint = graph.getState(runnableConfig);
System.out.println(checkpoint);
StateSnapshot{node=agent, state={ messages=[ UserMessage { name = null contents = [TextContent { text = "Hi I'm Bartolo." }] } AiMessage { text = "Hello Bartolo! Nice to meet you. How can I assist you today?" toolExecutionRequests = [] } ] }, config=RunnableConfig{ threadId=conversation-num-1, checkPointId=18dbbc9f-a203-45d4-9e55-8f0f6622c2ac, nextNode=__END__, streamMode=VALUES }}
The current state holds the two messages we've seen above: 1. the UserMessage we sent in, and 2. the AiMessage we got back from the model.
The next value is __END__ since the graph has terminated.
checkpoint.getNext()
__END__
Let's get it to execute a tool¶
When we call the graph again, it will create a checkpoint after each internal execution step. Let's get it to run a tool, then look at the checkpoint.
Map<String,Object> inputs = Map.of( "messages", UserMessage.from("What's the weather like in SF currently?") );
var state = graph.invoke( inputs, runnableConfig ).orElseThrow( () -> new IllegalStateException() );
System.out.println( state.lastMessage().orElse(null) );
START
ToolExecutionRequest id is null!
ToolExecutionRequest id is null!
ToolExecutionRequest id is null!
execute: execQuery
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
AiMessage { text = "The current weather in San Francisco is quite cold, with temperatures at around 13 degrees. Please make sure to dress warmly!" toolExecutionRequests = [] }
Pause before tools¶
Notice that below we now add interruptBefore("tools") - this means that we pause before any tool action is taken. This is a great moment to allow the user to correct and update the state! It is very useful when you want a human-in-the-loop to validate (and potentially change) the action to take.
var memory = new MemorySaver();
var compileConfig = CompileConfig.builder()
.checkpointSaver(memory)
.releaseThread(false) // DON'T release thread after completion
.interruptBefore( "tools")
.build();
var graphWithInterrupt = workflow.compile(compileConfig);
var runnableConfig = RunnableConfig.builder()
.threadId("conversation-2" )
.build();
Map<String,Object> inputs = Map.of( "messages", UserMessage.from("What's the weather like in SF currently?") );
var result = graphWithInterrupt.stream( inputs, runnableConfig );
for( var r : result ) {
System.out.println( r.node() );
System.out.println( r.state() );
}
START
ToolExecutionRequest id is null!
ToolExecutionRequest id is null!
__START__ { messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } ] }
ToolExecutionRequest id is null!
agent { messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } AiMessage { text = null toolExecutionRequests = [ToolExecutionRequest { id = null, name = "execQuery", arguments = "{ "query" : "current weather in San Francisco" }" }] } ] }
Get State¶
You can fetch the latest graph checkpoint using getState(config).
var snapshot = graphWithInterrupt.getState(runnableConfig);
snapshot.getNext();
tools
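Since execution is paused before the "tools" node, this is the moment for a human to patch the state before resuming. The snippet below is a hypothetical sketch (it assumes the updateState overload that takes a partial-state map); note that messages is an appender channel, so the corrected AiMessage is appended and becomes the lastMessage() the tools node will act on:
import dev.langchain4j.agent.tool.ToolExecutionRequest;
// Hypothetical human-in-the-loop correction: replace the pending search query
var corrected = AiMessage.from( ToolExecutionRequest.builder()
        .name("execQuery")
        .arguments("{ \"query\": \"weather in San Francisco today\" }")
        .build() );
var patchedConfig = graphWithInterrupt.updateState( snapshot.getConfig(), Map.of( "messages", corrected ) );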
Resume¶
You can resume by running the graph with a resume input (GraphInput.resume()). The checkpoint is loaded and, since no new input is provided, execution continues as if the interrupt had never occurred.
import org.bsc.langgraph4j.GraphInput;
var result = graphWithInterrupt.stream( GraphInput.resume(), snapshot.getConfig() );
for( var r : result ) {
log.trace( "RESULT:\n{}\n{}", r.node(), r.state() );
}
RESUME REQUEST
RESUME FROM agent
ToolExecutionRequest id is null!
execute: execQuery
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
RESULT:
tools
{ messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } AiMessage { text = null toolExecutionRequests = [ToolExecutionRequest { id = null, name = "execQuery", arguments = "{ "query" : "current weather in San Francisco" }" }] } ToolExecutionResultMessage { id = null toolName = "execQuery" text = "Cold, with a low of 13 degrees" } ] }
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
RESULT:
agent
{ messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } AiMessage { text = null toolExecutionRequests = [ToolExecutionRequest { id = null, name = "execQuery", arguments = "{ "query" : "current weather in San Francisco" }" }] } ToolExecutionResultMessage { id = null toolName = "execQuery" text = "Cold, with a low of 13 degrees" } AiMessage { text = "The current weather in San Francisco is cold, with temperatures currently at 13 degrees. Please make sure to dress warmly!" toolExecutionRequests = [] } ] }
RESULT:
__END__
{ messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } AiMessage { text = null toolExecutionRequests = [ToolExecutionRequest { id = null, name = "execQuery", arguments = "{ "query" : "current weather in San Francisco" }" }] } ToolExecutionResultMessage { id = null toolName = "execQuery" text = "Cold, with a low of 13 degrees" } AiMessage { text = "The current weather in San Francisco is cold, with temperatures currently at 13 degrees. Please make sure to dress warmly!" toolExecutionRequests = [] } ] }
Check full history¶
Let's browse the history of this thread, from newest to oldest.
RunnableConfig toReplay = null;
var states = graphWithInterrupt.getStateHistory(runnableConfig);
for( var state: states ) {
log.trace( "\n---\n{}\n---",state);
if (state.state().messages().size() == 3) {
toReplay = state.getConfig();
}
}
if (toReplay==null) {
throw new IllegalStateException("No state to replay");
}
---
StateSnapshot{node=agent, state={ messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } AiMessage { text = null toolExecutionRequests = [ToolExecutionRequest { id = null, name = "execQuery", arguments = "{ "query" : "current weather in San Francisco" }" }] } ToolExecutionResultMessage { id = null toolName = "execQuery" text = "Cold, with a low of 13 degrees" } AiMessage { text = "The current weather in San Francisco is cold, with temperatures currently at 13 degrees. Please make sure to dress warmly!" toolExecutionRequests = [] } ] }, config=RunnableConfig{ threadId=conversation-2, checkPointId=1ee4643d-e2ba-4143-8685-9d284e24cb0e, nextNode=__END__, streamMode=VALUES }}
---
---
StateSnapshot{node=tools, state={ messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } AiMessage { text = null toolExecutionRequests = [ToolExecutionRequest { id = null, name = "execQuery", arguments = "{ "query" : "current weather in San Francisco" }" }] } ToolExecutionResultMessage { id = null toolName = "execQuery" text = "Cold, with a low of 13 degrees" } ] }, config=RunnableConfig{ threadId=conversation-2, checkPointId=d5fefcc9-a64c-4902-a653-851a2b4f77f0, nextNode=agent, streamMode=VALUES }}
---
---
StateSnapshot{node=agent, state={ messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } AiMessage { text = null toolExecutionRequests = [ToolExecutionRequest { id = null, name = "execQuery", arguments = "{ "query" : "current weather in San Francisco" }" }] } ] }, config=RunnableConfig{ threadId=conversation-2, checkPointId=b49955fc-5c59-47ec-9087-e4f3d62c91fd, nextNode=tools, streamMode=VALUES }}
---
---
StateSnapshot{node=__START__, state={ messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } ] }, config=RunnableConfig{ threadId=conversation-2, checkPointId=6d6d7e39-76af-4c16-804d-721f71615a9f, nextNode=agent, streamMode=VALUES }}
---
Replay a past state¶
To replay from this point, we just need to pass its config back to the agent.
var results = graphWithInterrupt.stream( GraphInput.resume(), toReplay );
for( var r : results ) {
log.trace( "RESULT:\n{}\n{}\n---", r.node(), r.state() );
}
RESUME REQUEST
RESUME FROM tools
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
ToolExecutionRequest id is null!
ToolExecutionResultMessage id is null!
RESULT:
agent
{ messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } AiMessage { text = null toolExecutionRequests = [ToolExecutionRequest { id = null, name = "execQuery", arguments = "{ "query" : "current weather in San Francisco" }" }] } ToolExecutionResultMessage { id = null toolName = "execQuery" text = "Cold, with a low of 13 degrees" } AiMessage { text = "The current weather in San Francisco is cold, with temperatures currently at 13 degrees. Please make sure to dress warmly!" toolExecutionRequests = [] } ] }
---
RESULT:
__END__
{ messages=[ UserMessage { name = null contents = [TextContent { text = "What's the weather like in SF currently?" }] } AiMessage { text = null toolExecutionRequests = [ToolExecutionRequest { id = null, name = "execQuery", arguments = "{ "query" : "current weather in San Francisco" }" }] } ToolExecutionResultMessage { id = null toolName = "execQuery" text = "Cold, with a low of 13 degrees" } AiMessage { text = "The current weather in San Francisco is cold, with temperatures currently at 13 degrees. Please make sure to dress warmly!" toolExecutionRequests = [] } ] }
---
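Replaying re-executes from the chosen checkpoint as-is. To fork instead - that is, replay with a modification - you could combine updateState with the replay config. A hedged sketch (the extra UserMessage is purely illustrative, and it assumes the updateState overload taking a partial-state map):
// Hypothetical fork: patch the past checkpoint, then resume from the returned config
var forkedConfig = graphWithInterrupt.updateState( toReplay,
        Map.of( "messages", UserMessage.from("Actually, check the weather in London instead.") ) );
for( var r : graphWithInterrupt.stream( GraphInput.resume(), forkedConfig ) ) {
    log.trace( "FORKED RESULT:\n{}\n{}", r.node(), r.state() );
}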