Defines the Cortex class, which provides a clean interface for Agent <-> User I/O.

Hierarchy

  • EventEmitter
    • Cortex

Constructors

Properties

CortexRAM: {
    [k: string]: any;
}
apiBaseUrl: string
function_input_ch: node.common.apis.cortex.Channel
insights?: any
is_running_function: boolean
last_result: any
log: any
model: string
name: string
prompt_history: any[]
user_output: any
utilities: any
workspace?: {
    [k: string]: any;
}

Methods

  • Add CortexMessage

    Returns void

  • Add UserMessage

    Returns void

  • Parameters

    Returns void

  • Parameters

    • text: string

    Returns void

  • Convenience method: send a message and run the LLM in one call

    Parameters

    • text: string

      The user's message

    • Optional maxLoops: number

      Maximum function calling loops (default: 4)

    Returns Promise<string>

    Promise - The LLM response
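    The generated docs omit this method's name, so the sketch below uses hypothetical stand-ins (`MiniCortex`, `sendAndRun`, `runLLM`, `history`) to illustrate the send-then-run pattern described above; the real Cortex methods are async and considerably richer.

    ```typescript
    type Message = { role: "user" | "assistant" | "system"; content: string };

    class MiniCortex {
      history: Message[] = [];

      // Stand-in for the real LLM call; returns a canned reply here.
      runLLM(maxLoops: number = 4): string {
        return `replied after up to ${maxLoops} loops`;
      }

      // Convenience: append the user message, then run the LLM in one call.
      sendAndRun(text: string, maxLoops: number = 4): string {
        this.history.push({ role: "user", content: text });
        return this.runLLM(maxLoops);
      }
    }
    ```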

  • Parameters

    • arg_array: string[]

    Returns any

  • Parameters

    • fn: any

    Returns void

  • Parameters

    • evt: any

    Returns void

  • Parameters

    • id: string

    Returns any

  • Returns Promise<{
        error: any;
        name: string;
        result: any;
    }>

  • Parameters

    • i: any

    Returns Promise<void>

  • Parameters

    • fetchResponse: any
    • loop: number

    Returns Promise<any>

  • Parameters

    • msg: string

    Returns void

  • Re-run LLM with a different output format using existing conversation history

    This "time travels" by:

    1. Taking the current message history (minus the last cortex message that triggered this call)
    2. Rebuilding the system prompt with section overrides via PromptManager
    3. Running a structured completion with the custom schema

    By default, preserves all original prompt sections. Use sectionOverrides to:

    • Replace section args: { responseGuidance: ['Custom guidance...'] }
    • Exclude a section entirely: { functions: null }

    Useful when a function needs the LLM to extract structured data from the conversation but the main CortexOutput format isn't granular enough.

    Type Parameters

    • T extends ZodType<any, ZodTypeDef, any>

    Parameters

    Returns Promise<TypeOf<T>>
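    The sectionOverrides semantics described above (an array value replaces a section's args; null excludes the section) can be sketched as a plain merge function. The names `Sections`, `Overrides`, and `applySectionOverrides` are hypothetical, not part of the real PromptManager API.

    ```typescript
    type Sections = { [name: string]: string[] };
    type Overrides = { [name: string]: string[] | null };

    function applySectionOverrides(base: Sections, overrides: Overrides): Sections {
      const out: Sections = { ...base };
      for (const [name, value] of Object.entries(overrides)) {
        if (value === null) {
          delete out[name]; // exclude the section entirely
        } else {
          out[name] = value; // replace the section's args
        }
      }
      return out;
    }
    ```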

  • Reset usage stats

    Returns void

  • Parameters

    • args: any

    Returns any

  • Parameters

    • v: string

    Returns any

  • Run the LLM with the specified system message, message history, and parameters. If loop=N, after obtaining a function response another LLM call will be made automatically, until no more functions are called or until N calls have been made

    Parameters

    • Optional loop: number

    Returns Promise<string>
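    The loop described above can be sketched as follows. This is a simplified, synchronous stand-in (the real method is async), and `runWithLoop`, `callLLM`, and `runFunction` are hypothetical names: keep calling the model while it requests a function, up to `loop` calls in total.

    ```typescript
    type LLMResponse = { text: string; functionCall?: string };

    function runWithLoop(
      callLLM: (history: string[]) => LLMResponse,
      runFunction: (name: string) => string,
      loop: number = 4,
    ): string {
      const history: string[] = [];
      let response = callLLM(history);
      let calls = 1;
      // Stop when the model stops calling functions, or the call budget runs out.
      while (response.functionCall && calls < loop) {
        history.push(runFunction(response.functionCall)); // feed the result back
        response = callLLM(history);
        calls++;
      }
      return response.text;
    }
    ```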

  • Run a structured completion with a custom Zod schema. Allows functions to invoke their own LLM completions with custom output formats

    Type Parameters

    • T extends ZodType<any, ZodTypeDef, any>

    Parameters

    • options: {
          messages: {
              content: string;
              role: "user" | "assistant" | "system";
          }[];
          schema: T;
          schema_name: string;
      }
      • messages: {
            content: string;
            role: "user" | "assistant" | "system";
        }[]
      • schema: T
      • schema_name: string

    Returns Promise<TypeOf<T>>
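    A sketch of the options shape this method expects. `MiniSchema` is a minimal stand-in for Zod's ZodType (just a `parse` method), and `structuredCompletion` is a toy synchronous substitute that "completes" by summarizing the messages and validating the result against the schema; neither name is part of the real API.

    ```typescript
    interface MiniSchema<T> {
      parse(input: unknown): T;
    }

    type CompletionOptions<T> = {
      messages: { content: string; role: "user" | "assistant" | "system" }[];
      schema: MiniSchema<T>;
      schema_name: string;
    };

    function structuredCompletion<T>(options: CompletionOptions<T>): T {
      // Toy "completion": join the message contents, then validate via the schema.
      const raw = { summary: options.messages.map((m) => m.content).join(" ") };
      return options.schema.parse(raw);
    }
    ```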

  • Set whether a function is currently running

    Parameters

    • v: boolean

    Returns void

  • Parameters

    • v: any

    Returns Promise<string>

  • Parameters

    • v: any
    • id: string

    Returns Promise<string>