Defines the Cortex class, which provides a clean interface for Agent <-> User I/O

Hierarchy

  • EventEmitter
    • Cortex

Constructors

Properties

CortexRAM: {
    [k: string]: any;
}
apiBaseUrl: string
function_dictionary: common.apis.cortex.FunctionDictionary
function_input_ch: common.apis.cortex.Channel
functions: common.apis.cortex.Function[]
insights?: any
is_running_function: boolean
last_result: any = null
llmCallFn?: ((args: {
    input: any[];
    model: string;
    schema?: any;
    schema_name?: string;
}) => Promise<any>)
log: any
model: string
name: string
output_structure?: any
processManager?: ProcessManager
prompt_history: any[]
user_output: any
utilities: any
workspace?: {
    [k: string]: any;
}
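
The `llmCallFn` property lets callers inject their own LLM transport. A minimal sketch of supplying one, assuming only the property's declared shape (the echo implementation below is invented for illustration):

```typescript
// Mirrors the declared shape of the llmCallFn property.
type LlmCallFn = (args: {
  input: any[];
  model: string;
  schema?: any;
  schema_name?: string;
}) => Promise<any>;

// Hypothetical stand-in transport: echoes the last input message.
// A real implementation would call an LLM provider here.
const echoLlm: LlmCallFn = async ({ input, model }) => {
  const last = input[input.length - 1];
  return { model, content: `echo: ${last}` };
};
```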

Methods

  • Convenience method: send a message and run the LLM in a single call

    Parameters

    • text: string

      The user's message

    • maxLoops: number = 4

      Maximum number of function-calling loops (default: 4)

    Returns Promise<string>

    A Promise resolving to the LLM's response text
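
The listing above does not show the method's name, so the sketch below uses placeholder names throughout. It illustrates the presumed shape of the convenience call: record the user's message in history, then run the LLM loop once:

```typescript
// All names here are placeholders; the actual method is unnamed in this listing.
type Message = { role: "user" | "assistant" | "system"; content: string };

function sendAndRunSketch(
  history: Message[],
  text: string,
  runLoop: (history: Message[], maxLoops: number) => string,
  maxLoops = 4, // mirrors the documented default
): string {
  history.push({ role: "user", content: text }); // record the user's message
  return runLoop(history, maxLoops);             // then run the LLM loop
}
```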

  • Returns {
        cortexRAM: Record<string, any>;
        last_result: any;
        workspace: Record<string, any>;
    }

    • cortexRAM: Record<string, any>
    • last_result: any
    • workspace: Record<string, any>
  • Parameters

    • fetchResponseOrData: any
    • loop: number
    • Optional fetchTiming: {
          elapsed: number;
          end: number;
          start: number;
      }
      • elapsed: number
      • end: number
      • start: number

    Returns Promise<any>

  • Re-run LLM with a different output format using existing conversation history

    This "time travels" by:

    1. Taking the current message history (minus the last cortex message that triggered this call)
    2. Rebuilding the system prompt with section overrides via PromptManager
    3. Running a structured completion with the custom schema

    By default, preserves all original prompt sections. Use sectionOverrides to:

    • Replace section args: { responseGuidance: ['Custom guidance...'] }
    • Exclude a section entirely: { functions: null }

    Useful when a function needs the LLM to extract structured data from the conversation but the main CortexOutput format isn't granular enough.

    Type Parameters

    • T extends ZodType<any, ZodTypeDef, any>

    Parameters

    Returns Promise<TypeOf<T>>
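
The override semantics described above (replace a section's args, or pass `null` to drop the section) can be sketched as a plain merge. This is an assumption inferred from the bullets, not the actual PromptManager implementation:

```typescript
type SectionArgs = string[] | null;

// Assumed semantics: null drops a section entirely, a value replaces that
// section's args, and keys absent from overrides keep their original args.
function applySectionOverrides(
  sections: Record<string, string[]>,
  overrides: Record<string, SectionArgs>,
): Record<string, string[]> {
  const out: Record<string, string[]> = { ...sections };
  for (const [name, value] of Object.entries(overrides)) {
    if (value === null) delete out[name]; // exclude the section entirely
    else out[name] = value;               // replace the section's args
  }
  return out;
}
```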

  • Run the LLM with the specified system message, message history, and parameters. If loop = N, another LLM call is made automatically after each function response, until no more functions are called or until N calls have been made.

    Delegates to the current runner. Kept for backward compatibility.

    Parameters

    • loop: number = 6

    Returns Promise<string>

  • Run a structured completion with a custom Zod schema. Allows functions to invoke their own LLM completions with custom output formats.

    Type Parameters

    • T extends ZodType<any, ZodTypeDef, any>

    Parameters

    • options: {
          messages: {
              content: string;
              role: "user" | "assistant" | "system";
          }[];
          schema: T;
          schema_name: string;
      }
      • messages: {
            content: string;
            role: "user" | "assistant" | "system";
        }[]
      • schema: T
      • schema_name: string

    Returns Promise<TypeOf<T>>
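
The flow is: send the messages with a named schema, then validate the raw response before returning it. To keep this sketch dependency-free, a hand-rolled validator stands in for the Zod schema, and `callLlm` is a hypothetical transport:

```typescript
// The real API takes a Zod schema; a plain validator function stands in here
// so the sketch has no dependencies. `callLlm` is a hypothetical transport.
type Validator<T> = (raw: unknown) => T;

function completeStructuredSketch<T>(
  messages: { role: "user" | "assistant" | "system"; content: string }[],
  schemaName: string,
  validate: Validator<T>,
  callLlm: (messages: object[], schemaName: string) => unknown,
): T {
  const raw = callLlm(messages, schemaName); // ask for schema-shaped output
  return validate(raw);                      // validate before returning
}
```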

  • Update usage stats after an LLM call

    Parameters

    • promptTokens: number
    • completionTokens: number
    • cachedInputTokens: number = 0

    Returns void
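
The bookkeeping this method performs presumably accumulates the three counters across calls. A sketch under that assumption (the stats field names are illustrative, not the class's actual fields):

```typescript
// Illustrative stats shape; the real Cortex fields are not shown in this listing.
interface UsageStats {
  promptTokens: number;
  completionTokens: number;
  cachedInputTokens: number;
  totalTokens: number;
}

function updateUsageStats(
  stats: UsageStats,
  promptTokens: number,
  completionTokens: number,
  cachedInputTokens = 0, // mirrors the documented default
): void {
  stats.promptTokens += promptTokens;
  stats.completionTokens += completionTokens;
  stats.cachedInputTokens += cachedInputTokens;
  stats.totalTokens += promptTokens + completionTokens;
}
```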