Optional: insights
Optional: workspace
Add a CortexMessage
Add a UserMessage
Get current context status including token usage breakdown
Get current usage stats
Re-run the LLM with a different output format, using the existing conversation history.
This "time travels" by replaying the prior conversation under the new output format.
By default, all original prompt sections are preserved; use sectionOverrides to replace specific sections.
Useful when a function needs the LLM to extract structured data from the conversation, but the main CortexOutput format isn't granular enough.
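The re-run behaviour above can be sketched as rebuilding the request from the existing history with a new format instruction. This is an illustrative sketch, not the real Cortex API: the `Message` shape and the `rerunRequest` helper are assumptions made up for this example.

```typescript
// Hypothetical sketch: names and shapes here are assumptions, not Cortex's API.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Swap in a new system prompt describing the desired output format,
// while replaying every non-system turn from the existing history.
function rerunRequest(history: Message[], newFormat: string): Message[] {
  const system: Message = {
    role: "system",
    content: `Respond strictly in the following format: ${newFormat}`,
  };
  return [system, ...history.filter((m) => m.role !== "system")];
}

const history: Message[] = [
  { role: "system", content: "original prompt" },
  { role: "user", content: "summarize the meeting" },
  { role: "assistant", content: "The meeting covered Q3 plans." },
];
const rerun = rerunRequest(history, "JSON with keys `topic` and `decisions`");
```

The design point is that only the system prompt changes; the user and assistant turns are sent again verbatim, which is what makes the "time travel" possible without re-asking the user anything.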
Optional: section
Executes JavaScript code in a sandbox instead of running function calls
Run the LLM with the specified system message, message history, and parameters. If loop=N, another LLM call is made automatically after each function response, until no more functions are called or N calls have been made.
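The loop=N behaviour can be sketched as a bounded call loop. This is a minimal illustration under stated assumptions: the `LLMResult` shape, `makeStubModel`, and `runWithLoop` are invented for this example and are not part of Cortex.

```typescript
// Hypothetical sketch of loop=N; all names here are assumptions.
interface LLMResult {
  text: string;
  functionCall?: { name: string; args: unknown };
}

// Stub model: requests a function call on its first two turns, then answers.
function makeStubModel(): (history: string[]) => LLMResult {
  let turn = 0;
  return () => {
    turn++;
    if (turn <= 2) {
      return { text: "", functionCall: { name: "lookup", args: { turn } } };
    }
    return { text: "done" };
  };
}

// Call the model, re-invoking it after each function response,
// until it stops calling functions or maxCalls calls have been made.
function runWithLoop(
  model: (history: string[]) => LLMResult,
  maxCalls: number
): { result: LLMResult; calls: number } {
  const history: string[] = [];
  let calls = 0;
  let result = model(history);
  calls++;
  while (result.functionCall && calls < maxCalls) {
    // In the real system this is where the function would actually run;
    // here we just record a placeholder response in the history.
    history.push(`function ${result.functionCall.name} returned ok`);
    result = model(history);
    calls++;
  }
  return { result, calls };
}

const { result, calls } = runWithLoop(makeStubModel(), 5);
// The stub stops requesting functions on its third call, so the loop
// ends before hitting the maxCalls=5 budget.
```

The `calls < maxCalls` guard is the point of loop=N: it bounds how many LLM calls can chain off a single user turn even if the model keeps requesting functions.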
Defines the Cortex class, which provides a clean interface for Agent <-> User IO