Complete API reference for the core runtime package. Contains the agent loop, tool execution engine, display manager, context management, model adapters, and all foundational types.
The top-level builder and runtime entry point. Use the builder pattern to register tools and subscribers, then call build() to produce a runnable agent.
import { Glove } from "glove-core";
import { z } from "zod";

const agent = new Glove({
  store,
  model,
  displayManager,
  systemPrompt: "You are a helpful assistant.",
  compaction_config: {
    compaction_instructions: "Summarize the conversation.",
    max_turns: 30,
  },
})
  .fold({
    name: "get_weather",
    description: "Get weather for a city.",
    inputSchema: z.object({ city: z.string() }),
    async do(input) {
      const res = await fetch(`https://api.weather.example/v1?city=${encodeURIComponent(input.city)}`);
      return { status: "success", data: await res.json() };
    },
  })
  .build();

const result = await agent.processRequest("What is the weather in Tokyo?");

new Glove(config: GloveConfig)
| Property | Type | Description |
|---|---|---|
| store | StoreAdapter | The store adapter for conversation persistence. Required. |
| model | ModelAdapter | The model adapter for language model communication. Required. |
| displayManager | DisplayManagerAdapter | The display manager adapter for UI slot management. Required. |
| systemPrompt | string | The system prompt sent with every model request. Required. |
| maxRetries? | number | Maximum number of retries for failed tool executions. Passed to the Executor. |
| compaction_config | CompactionConfig | Configuration for automatic context window compaction. Required. |
| Method | Returns | Description |
|---|---|---|
| fold<I>(args: GloveFoldArgs<I>) | IGloveBuilder | Register a tool with the agent. Returns the builder for chaining. |
| addSubscriber(subscriber: SubscriberAdapter) | IGloveBuilder | Add a subscriber that receives streaming events. Returns the builder for chaining. |
| build() | IGloveRunnable | Finalize configuration and return a runnable agent instance. |
| processRequest(request, signal?) | Promise<ModelPromptResult | Message> | Send a request string or ContentPart[] to the agent and receive the result. Available after build(). |
| setModel(model: ModelAdapter) | void | Replace the model adapter at runtime. Useful for model switching mid-session. |
| Property | Type | Description |
|---|---|---|
| displayManager | DisplayManagerAdapter | Read-only access to the display manager instance. |
| Property | Type | Description |
|---|---|---|
| name | string | Unique name for the tool. |
| description | string | Description of what the tool does. The model reads this to decide when to invoke it. |
| inputSchema | z.ZodType<I> | Zod schema defining the tool's input shape. |
| requiresPermission? | boolean | When true, checks the store for permission before execution. Defaults to false. |
| unAbortable? | boolean | When true, the tool runs to completion even if the abort signal fires (e.g. from voice barge-in). Use for mutation-critical tools. Defaults to false. |
| do | (input: I, display: DisplayManagerAdapter) => Promise<ToolResultData> | The tool's implementation. Receives validated input and the display manager. Return value becomes the tool result. |
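For example, a mutation-critical tool would set both optional flags. A self-contained sketch follows; the `confirm_checkout` tool is hypothetical, and the simplified `do` signature omits the display argument for brevity:

```typescript
// Local stand-ins for the documented types, so the sketch is self-contained;
// in real code these come from glove-core.
type ToolResultData = {
  status: "success" | "error";
  data: unknown;
  message?: string;
};

interface FoldArgsSketch<I> {
  name: string;
  description: string;
  requiresPermission?: boolean;
  unAbortable?: boolean;
  do: (input: I) => Promise<ToolResultData>;
}

// Hypothetical checkout tool: needs explicit permission, and must run to
// completion even if an abort signal (e.g. voice barge-in) fires mid-charge.
const confirmCheckout: FoldArgsSketch<{ orderId: string }> = {
  name: "confirm_checkout",
  description: "Charge the customer and finalize the order.",
  requiresPermission: true,
  unAbortable: true,
  async do(input) {
    // ... call the payment API here ...
    return { status: "success", data: { orderId: input.orderId, charged: true } };
  },
};
```

Setting `unAbortable` here means a barge-in cannot leave a half-completed charge; the abort is honored only after the tool returns.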
The interface returned by build(). Represents a fully configured, ready-to-run agent.
| Member | Type | Description |
|---|---|---|
| processRequest(request, signal?) | (request: string | ContentPart[], signal?: AbortSignal) => Promise<ModelPromptResult | Message> | Send a user request to the agent and get the response. |
| setModel(model) | (model: ModelAdapter) => void | Swap the model adapter at runtime. |
| displayManager | DisplayManagerAdapter | Read-only reference to the display manager. |
Manages the display stack: an ordered collection of UI slots that tools push and users resolve. Implements DisplayManagerAdapter.
import { DisplayManager } from "glove-core/display-manager";

const dm = new DisplayManager();
dm.subscribe(async (stack) => {
  console.log("Display stack changed:", stack);
});

| Method | Returns | Description |
|---|---|---|
| registerRenderer<I,O>(renderer: Renderer<I,O>) | void | Register a named renderer with input/output schemas. |
| pushAndForget<I>(slot: { renderer?: string; input: I }) | Promise<string> | Push a slot onto the stack without blocking. Returns the slot ID. |
| pushAndWait<I,O>(slot: { renderer?: string; input: I }) | Promise<O> | Push a slot and block until resolved or rejected. Returns the resolved value. |
| subscribe(listener: ListenerFn) | UnsubscribeFn | Subscribe to stack changes. The listener is called with the current stack whenever it changes. Returns an unsubscribe function. |
| notify() | Promise<void> | Manually trigger all subscribed listeners with the current stack state. |
| resolve<O>(slot_id: string, value: O) | void | Resolve a pushAndWait slot by ID, unblocking the waiting tool. |
| reject(slot_id: string, error: string) | void | Reject a pushAndWait slot by ID, causing the pushAndWait promise to throw. |
| removeSlot(id: string) | void | Remove a slot from the stack by ID. |
| clearStack() | Promise<void> | Remove all slots from the display stack and notify listeners. |
The interface that DisplayManager implements. Any custom display manager must conform to this shape.
| Member | Type | Description |
|---|---|---|
| renderers | Array<Renderer<unknown, unknown>> | Registry of named renderers. |
| stack | Slot<unknown>[] | The current display stack, ordered from bottom to top. |
| listeners | Set<ListenerFn> | Set of subscribed listener functions. |
| resolverStore | Map<string, { resolve: ResolverFn<unknown>; reject: RejectFn }> | Internal map of pending pushAndWait resolvers keyed by slot ID. |
| registerRenderer(renderer) | void | Register a renderer. |
| pushAndForget(slot) | Promise<string> | Push without blocking. |
| pushAndWait(slot) | Promise<unknown> | Push and block until resolved. |
| notify() | Promise<void> | Trigger listeners. |
| subscribe(listener) | UnsubscribeFn | Subscribe to changes. |
| resolve(slot_id, value) | void | Resolve a pending slot. |
| reject(slot_id, error: any) | void | Reject a pending slot. |
| removeSlot(id) | void | Remove a slot by ID. |
| clearStack() | Promise<void> | Clear all slots. |
Represents a single entry on the display stack. Pushed by tools, rendered by the UI layer.
| Property | Type | Description |
|---|---|---|
| id | string | Unique identifier for this slot instance. |
| renderer | string | Name of the renderer to use for displaying this slot. |
| input | I | Input data passed to the renderer. Shape depends on the tool that created the slot. |
A named renderer definition registered with the display manager.
| Property | Type | Description |
|---|---|---|
| name | string | Unique name identifying this renderer. |
| inputSchema | z.ZodType<I> | Zod schema for validating the input data. |
| outputSchema? | z.ZodType<O> | Optional Zod schema for validating the resolved output. |
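A renderer pairs an input schema with an optional output schema. The sketch below is dependency-free: the `SchemaLike` stand-in mimics zod's `parse` contract, where real code would use `z.object({ question: z.string() })` and `z.boolean()`:

```typescript
// Self-contained sketch of a confirm-dialog renderer definition.
// SchemaLike mimics the zod parse() contract without the dependency.
type SchemaLike<T> = { parse: (value: unknown) => T };

interface RendererSketch<I, O> {
  name: string;
  inputSchema: SchemaLike<I>;
  outputSchema?: SchemaLike<O>;
}

const confirmRenderer: RendererSketch<{ question: string }, boolean> = {
  name: "confirm",
  inputSchema: {
    parse: (v) => {
      const obj = v as { question?: unknown };
      if (typeof obj?.question !== "string") throw new Error("question: string required");
      return { question: obj.question };
    },
  },
  outputSchema: { parse: (v) => Boolean(v) },
};
```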
Wraps a StoreAdapter and provides a simplified interface for reading and writing conversation data, messages, and tasks.
import { Context } from "glove-core";
const ctx = new Context(store);
const messages = await ctx.getMessages();
await ctx.appendMessages([{ sender: "user", text: "Hello" }]);

new Context(store: StoreAdapter)
| Method | Returns | Description |
|---|---|---|
| getMessages() | Promise<Message[]> | Retrieve messages for the model. Applies splitAtLastCompaction internally: finds the last message with is_compaction set to true and returns only messages from that point onward. This means the model sees the compaction summary plus any subsequent messages, not the full raw history. To access the complete unfiltered history, use the store's getMessages() directly. |
| appendMessages(msgs: Message[]) | Promise<void> | Append messages to the conversation history. |
| getTasks() | Promise<Task[]> | Retrieve all tasks from the store. Requires store to implement getTasks. |
| addTasks(tasks: Task[]) | Promise<void> | Add tasks to the store. Requires store to implement addTasks. |
| updateTask(taskId: string, updates: Partial<Task>) | Promise<void> | Update a task by ID. Requires store to implement updateTask. |
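The filtering that getMessages() applies can be sketched as a pure function (assumed semantics based on the description above, not the library's actual implementation):

```typescript
interface Message {
  sender: "user" | "agent";
  text: string;
  is_compaction?: boolean;
}

// Return only the messages from the last compaction summary onward;
// if no compaction has happened, return the full history unchanged.
function splitAtLastCompaction(history: Message[]): Message[] {
  const idx = history.map((m) => m.is_compaction === true).lastIndexOf(true);
  return idx === -1 ? history : history.slice(idx);
}
```

The model therefore always sees the most recent summary as the first message of its window, followed by everything said since.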
Manages model prompting: sends messages and tool definitions to the model adapter and collects the response. Notifies subscribers of streaming events.
import { PromptMachine } from "glove-core";
const pm = new PromptMachine(model, ctx, "You are a helpful assistant.");
pm.addSubscriber(subscriber);
const result = await pm.run(messages, tools);

new PromptMachine(model: ModelAdapter, ctx: Context, systemPrompt: string)
| Method | Returns | Description |
|---|---|---|
| addSubscriber(subscriber: SubscriberAdapter) | void | Add a subscriber to receive model events (text_delta, tool_use, model_response_complete). |
| run(messages: Message[], tools?: Tool<unknown>[], signal?: AbortSignal) | Promise<ModelPromptResult> | Prompt the model with messages and optional tools. Returns the model's response including token counts. |
The tool execution engine. Maintains a registry of tools and a call stack. Executes tool calls from the model, validates inputs, handles errors, and returns results.
import { Executor } from "glove-core";
const executor = new Executor(3, store);
executor.registerTool(myTool);
executor.addSubscriber(subscriber);
executor.addToolCallToStack({ tool_name: "get_weather", input_args: { city: "Tokyo" } });
const results = await executor.executeToolStack();

new Executor(MAX_RETRIES?: number, store?: StoreAdapter)
| Property | Type | Description |
|---|---|---|
| tools | Tool[] | Array of registered tools. |
| MAX_RETRIES | number | Maximum retry attempts for failed tool calls. |
| Method | Returns | Description |
|---|---|---|
| registerTool(tool: Tool<unknown>) | void | Add a tool to the executor's registry. |
| addSubscriber(subscriber: SubscriberAdapter) | void | Add a subscriber to receive tool execution events (tool_use, tool_use_result). |
| addToolCallToStack(call: ToolCall) | void | Queue a tool call for execution. |
| executeToolStack(handOver?: HandOverFunction, signal?: AbortSignal) | Promise<ToolResult[]> | Execute all queued tool calls and return their results. Clears the stack after execution. |
Monitors the context window size and triggers compaction when limits are exceeded. Tracks turn counts and token consumption. Compaction is history-preserving: messages are never deleted from the store. Instead, a compaction summary is appended with is_compaction: true, and resetCounters() resets the token and turn counts. The model only sees post-compaction messages because Context.getMessages() applies splitAtLastCompaction() internally.
import { Observer } from "glove-core";

const observer = new Observer(
  store,
  ctx,
  promptMachine,
  "Summarize the conversation so far.",
  30,     // max turns
  100000  // context compaction token limit
);

await observer.turnComplete();
await observer.tryCompaction();

new Observer(store: StoreAdapter, ctx: Context, prompt: PromptMachine, compaction_instructions: string, max_turns?: number, context_compaction_limit?: number)
| Property | Type | Description |
|---|---|---|
| MAX_TURNS | number | Maximum turns before compaction is considered. |
| CONTEXT_COMPACTION_LIMIT | number | Maximum token count before compaction is triggered. |
| Method | Returns | Description |
|---|---|---|
| setCompactionInstructions(instruction: string) | void | Update the compaction instructions at runtime. |
| setMaxTurns(new_max: number) | void | Update the maximum turn threshold. |
| setContextCompactionLimit(new_limit: number) | void | Update the token consumption threshold. |
| turnComplete() | Promise<void> | Notify the observer that a turn has completed. Increments the turn counter in the store. |
| getCurrentTurns() | Promise<number> | Get the current turn count from the store. |
| addTokensConsumed(count: number) | Promise<void> | Add to the cumulative token count in the store. |
| getCurrentTokenConsumption() | Promise<number> | Get the current total token consumption from the store. |
| tryCompaction() | Promise<void> | Check if compaction is needed (turns or tokens exceeded) and perform it if so. Summarizes the conversation and appends the summary as a new message with is_compaction set to true. Calls resetCounters() to reset token and turn counts without deleting messages. The full message history is preserved in the store for frontend display, while Context.getMessages() uses splitAtLastCompaction to ensure the model only sees messages from the latest compaction onward. |
Orchestrates the core agent loop: prompt the model, check for tool calls, execute tools, feed results back, repeat until the model responds with text only.
import { Agent } from "glove-core";
const agent = new Agent(store, executor, context, observer, promptMachine);
const result = await agent.ask(userMessage);

new Agent(store: StoreAdapter, executor: Executor, context: Context, observer: Observer, prompt_machine: PromptMachine)
| Method | Returns | Description |
|---|---|---|
| ask(message: Message, handOver?: HandOverFunction, signal?: AbortSignal) | Promise<ModelPromptResult> | Run the full agent loop for a user message. Prompts the model, executes any tool calls, loops until the model produces a final text response. Returns the final result with token counts. |
Custom error class thrown when an agent request is aborted via an AbortSignal. Has name set to "AbortError".
import { AbortError } from "glove-core";

try {
  await agent.processRequest("Hello", signal);
} catch (err) {
  if (err instanceof AbortError) {
    console.log("Request was aborted.");
  }
}

new AbortError(message?: string)
Interface for language model providers. Implement this to connect any LLM to Glove.
interface ModelAdapter {
  name: string;
  prompt(
    request: PromptRequest,
    notify: NotifySubscribersFunction,
    signal?: AbortSignal
  ): Promise<ModelPromptResult>;
  setSystemPrompt(systemPrompt: string): void;
}

| Member | Type | Description |
|---|---|---|
| name | string | Display name of the model or provider. |
| prompt(request, notify, signal?) | Promise<ModelPromptResult> | Send messages and tools to the model. Call notify() to emit streaming events. Returns the complete response with token counts. |
| setSystemPrompt(systemPrompt) | void | Update the system prompt used for subsequent requests. |
| Property | Type | Description |
|---|---|---|
| messages | Message[] | The conversation messages to send to the model. |
| tools? | Tool<unknown>[] | Optional array of tools the model can invoke. |
| Property | Type | Description |
|---|---|---|
| messages | Message[] | Response messages from the model (typically one agent message). |
| tokens_in | number | Input tokens consumed by this prompt. |
| tokens_out | number | Output tokens generated by this prompt. |
Every model adapter emits events via the notify callback during prompting. These events are fully typed using a discriminated union so that subscribers (and custom adapter authors) get compile-time safety.
A discriminated union of all event shapes. Each variant has a type field plus event-specific data.
type SubscriberEvent =
  | { type: "text_delta"; text: string }
  | { type: "tool_use"; id: string; name: string; input: unknown }
  | { type: "model_response"; text: string; tool_calls?: ToolCall[];
      stop_reason?: string; tokens_in?: number; tokens_out?: number }
  | { type: "model_response_complete"; text: string; tool_calls?: ToolCall[];
      stop_reason?: string; tokens_in?: number; tokens_out?: number }
  | { type: "tool_use_result"; tool_name: string; call_id?: string;
      result: ToolResultData }
  | { type: "compaction_start"; current_token_consumption: number }
  | { type: "compaction_end"; current_token_consumption: number;
      summary_message: Message };

| Event | Emitted by | Description |
|---|---|---|
| text_delta | Model adapter (streaming) | Incremental text fragment from the model. Use to render streaming text in the UI. |
| tool_use | Model adapter (streaming) | The model is invoking a tool. Contains the tool id, name, and parsed input. |
| model_response | Model adapter (non-streaming) | Complete model response in non-streaming mode. Contains text, optional tool_calls, stop_reason, and token counts. |
| model_response_complete | Model adapter (streaming) | Final aggregated response after streaming finishes. Same shape as model_response. |
| tool_use_result | Core (PromptMachine) | Result of executing a tool. Includes tool_name, call_id, and the full ToolResultData. |
| compaction_start | Core (Context) | Conversation compaction is beginning. Contains current token consumption. |
| compaction_end | Core (Context) | Compaction finished. Contains the new token count and the summary message. |
A mapped type that extracts the data shape (everything except type) for each event. Use this when implementing a SubscriberAdapter or handling events in a switch statement.
type SubscriberEventDataMap = {
  [E in SubscriberEvent as E["type"]]: Omit<E, "type">;
};

// Example: SubscriberEventDataMap["text_delta"] = { text: string }

Interface for receiving events. Both the React hook subscriber and GloveVoice implement this. The record method is generic over the event type.
interface SubscriberAdapter {
  record: <T extends SubscriberEvent["type"]>(
    event_type: T,
    data: SubscriberEventDataMap[T],
  ) => Promise<void>;
}

When building a custom ModelAdapter, you must emit the correct events via the notify callback. Here is the minimal contract:
import type { ModelAdapter, NotifySubscribersFunction, PromptRequest, ModelPromptResult } from "glove-core";

class MyAdapter implements ModelAdapter {
  name = "my-provider:model-name";
  private systemPrompt?: string;

  setSystemPrompt(systemPrompt: string) {
    this.systemPrompt = systemPrompt;
  }

  async prompt(
    request: PromptRequest,
    notify: NotifySubscribersFunction,
    signal?: AbortSignal,
  ): Promise<ModelPromptResult> {
    // ... call your LLM API ...

    // Non-streaming: emit a single model_response event
    await notify("model_response", {
      text: responseText,
      tool_calls: toolCalls.length > 0 ? toolCalls : undefined,
      stop_reason: finishReason ?? undefined,
      tokens_in: usage.promptTokens,
      tokens_out: usage.completionTokens,
    });

    return { messages: [message], tokens_in: ..., tokens_out: ... };
  }
}

For streaming adapters, emit events incrementally:
// During streaming — emit text fragments as they arrive
notify("text_delta", { text: chunk });

// When a tool call is fully assembled
await notify("tool_use", { id: toolCallId, name: toolName, input: parsedArgs });

// After the stream completes — emit the final aggregated response
await notify("model_response_complete", {
  text: fullText,
  tool_calls: toolCalls.length > 0 ? toolCalls : undefined,
  stop_reason: finishReason ?? undefined,
});

Key rules:
- Non-streaming adapters emit a single model_response event per prompt call.
- Streaming adapters emit text_delta for each text chunk, tool_use for each completed tool call, and model_response_complete once at the end.
- stop_reason should be undefined (not null) when unavailable. Use ?? undefined to coerce provider SDK nulls.
- tool_use_result, compaction_start, and compaction_end are emitted by the framework — adapters should not emit these.

Interface for conversation persistence. Implement this to store messages, token counts, tasks, and permissions in any backend.
interface StoreAdapter {
  identifier: string;
  getMessages(): Promise<Message[]>;
  appendMessages(msgs: Message[]): Promise<void>;
  getTokenCount(): Promise<number>;
  addTokens(count: number): Promise<void>;
  getTurnCount(): Promise<number>;
  incrementTurn(): Promise<void>;
  resetCounters(): Promise<void>;

  // Optional:
  getTasks?(): Promise<Task[]>;
  addTasks?(tasks: Task[]): Promise<void>;
  updateTask?(taskId: string, updates: Partial<Task>): Promise<void>;
  getPermission?(toolName: string): Promise<PermissionStatus>;
  setPermission?(toolName: string, status: PermissionStatus): Promise<void>;
}

| Member | Type | Description |
|---|---|---|
| identifier | string | Unique identifier for the store instance (typically a session ID). |
| getMessages() | Promise<Message[]> | Retrieve all conversation messages. |
| appendMessages(msgs) | Promise<void> | Append messages to the history. |
| getTokenCount() | Promise<number> | Get the cumulative token count. |
| addTokens(count) | Promise<void> | Add to the cumulative token count. |
| getTurnCount() | Promise<number> | Get the current turn count. |
| incrementTurn() | Promise<void> | Increment the turn counter. |
| resetCounters() | Promise<void> | Reset token and turn counts to zero without deleting messages. Called during compaction to reset thresholds while preserving the full message history in the store. |
| getTasks?() | Promise<Task[]> | Retrieve all tasks. Optional. Enables the built-in task tool when present. |
| addTasks?(tasks) | Promise<void> | Add tasks. Optional. |
| updateTask?(taskId, updates) | Promise<void> | Update a task by ID. Optional. |
| getPermission?(toolName) | Promise<PermissionStatus> | Check permission status for a tool. Optional. |
| setPermission?(toolName, status) | Promise<void> | Set permission status for a tool. Optional. |
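The required surface is small enough to sketch in a few lines. A minimal volatile store, suitable for tests and demos (the `InMemoryStore` name is illustrative; note that resetCounters() zeroes the counters but keeps every message):

```typescript
type Message = { sender: "user" | "agent"; text: string; is_compaction?: boolean };

// Minimal in-memory StoreAdapter sketch. Volatile: everything is lost
// when the process exits, so use a real backend in production.
class InMemoryStore {
  identifier = "session-" + Math.random().toString(36).slice(2);
  private messages: Message[] = [];
  private tokens = 0;
  private turns = 0;

  async getMessages(): Promise<Message[]> { return [...this.messages]; }
  async appendMessages(msgs: Message[]): Promise<void> { this.messages.push(...msgs); }
  async getTokenCount(): Promise<number> { return this.tokens; }
  async addTokens(count: number): Promise<void> { this.tokens += count; }
  async getTurnCount(): Promise<number> { return this.turns; }
  async incrementTurn(): Promise<void> { this.turns += 1; }

  // Compaction resets counters but never deletes messages.
  async resetCounters(): Promise<void> { this.tokens = 0; this.turns = 0; }
}
```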
Interface for observing agent events. Subscribers receive streaming text deltas, tool invocations, tool results, and model response completions. (The loosely typed shape is shown here; the fully typed record signature using SubscriberEvent and SubscriberEventDataMap appears in the model events section above.)
interface SubscriberAdapter {
  record(event_type: string, data: any): Promise<void>;
}

| Member | Type | Description |
|---|---|---|
| record(event_type, data) | Promise<void> | Called whenever an event occurs. The event_type string identifies the event, and data carries the payload. |
The following events are emitted by the system and received by subscribers via the record method.
| Event | Data Shape | Description |
|---|---|---|
| text_delta | { text: string } | A chunk of streaming text from the model. Emitted as the model generates tokens. |
| tool_use | { id: string; name: string; input: unknown } | A tool invocation has started. Contains the tool call ID, name, and input arguments. |
| tool_use_result | { tool_name: string; call_id?: string; result: ToolResult['result'] } | A tool has finished executing. Contains the tool name, call ID, and execution result. |
| model_response | { text: string; tool_calls: ToolCall[] } | A model turn is complete (non-streaming adapters). |
| model_response_complete | { text: string; tool_calls: ToolCall[] } | A model turn is complete (streaming adapters). Contains the full response text and any tool calls. |
| compaction_start | { current_token_consumption: number } | Context compaction has begun. Emitted by the Observer before the summarization model call. |
| compaction_end | { current_token_consumption: number; summary_message: Message } | Context compaction has finished. Emitted by the Observer after the summary is appended and counters are reset. Contains the new token count and the summary message. |
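A subscriber typically switches on the event type. A minimal logging subscriber sketch (the `LoggingSubscriber` name is illustrative, and the loosely typed `record` signature is used for brevity):

```typescript
// Minimal logging subscriber: accumulates streamed text and records tool calls.
class LoggingSubscriber {
  transcript = "";
  toolCalls: string[] = [];

  async record(event_type: string, data: any): Promise<void> {
    switch (event_type) {
      case "text_delta":
        this.transcript += data.text; // incremental model output
        break;
      case "tool_use":
        this.toolCalls.push(data.name); // model invoked a tool
        break;
      case "model_response_complete":
        this.transcript = data.text; // replace with the final aggregated text
        break;
    }
  }
}
```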
Represents a single message in the conversation history.
interface Message {
  sender: "user" | "agent";
  id?: string;
  text: string;
  content?: ContentPart[];
  tool_results?: ToolResult[];
  tool_calls?: ToolCall[];
  is_compaction?: boolean;
}

| Property | Type | Description |
|---|---|---|
| sender | "user" | "agent" | Who sent the message. |
| id? | string | Optional unique identifier for the message. |
| text | string | The text content of the message. |
| content? | ContentPart[] | Optional multimodal content parts (images, documents, etc.). |
| tool_results? | ToolResult[] | Tool execution results attached to this message (agent messages responding to tool calls). |
| tool_calls? | ToolCall[] | Tool calls the model wants to execute (present in agent messages). |
| is_compaction? | boolean | When true, marks this message as a compaction summary. Context.getMessages() uses this flag to split the history at the last compaction point, so the model only sees messages from the most recent compaction onward. |
Represents a multimodal content element within a message.
interface ContentPart {
  type: "text" | "image" | "video" | "document";
  text?: string;
  source?: {
    type: string;
    media_type: string;
    data?: string;
    url?: string;
  };
}

| Property | Type | Description |
|---|---|---|
| type | "text" | "image" | "video" | "document" | The type of content. |
| text? | string | Text content. Used when type is "text". |
| source? | object | Source information for binary content. Contains type, media_type, and either data (base64) or url. |
| source.type | string | Source type (e.g., "base64", "url"). |
| source.media_type | string | MIME type (e.g., "image/png", "application/pdf"). |
| source.data? | string | Base64-encoded content data. |
| source.url? | string | URL pointing to the content. |
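For instance, a user message attaching a PDF alongside text would look like this (the base64 payload is a placeholder, not real document bytes):

```typescript
// A user message with a text part and a base64 document part.
const msg = {
  sender: "user" as const,
  text: "Summarize this document.",
  content: [
    { type: "text" as const, text: "Summarize this document." },
    {
      type: "document" as const,
      source: {
        type: "base64",
        media_type: "application/pdf",
        data: "JVBERi0xLjQ=", // placeholder base64, not a real PDF
      },
    },
  ],
};
```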
The core tool interface used by the Executor. This is the runtime representation, distinct from ToolConfig in glove-react which adds the render property.
| Property | Type | Description |
|---|---|---|
| name | string | Unique tool name. |
| description | string | Description for the model. |
| input_schema | z.ZodType<I> | Zod schema for input validation and JSON Schema generation. |
| requiresPermission? | boolean | Whether the tool requires explicit permission before execution. |
| unAbortable? | boolean | When true, the tool runs to completion despite abort signals. Essential for tools that perform mutations the user has committed to (e.g. checkout, payment). |
| run(input: I, handOver?: HandOverFunction) | Promise<ToolResultData> | Execute the tool with validated input. Optional handOver function for delegation patterns. |
| Property | Type | Description |
|---|---|---|
| tool_name | string | Name of the tool to invoke. |
| input_args | unknown | Arguments to pass to the tool (validated against the tool's input schema at runtime). |
| id? | string | Optional call identifier for correlating calls with results. |
| Property | Type | Description |
|---|---|---|
| tool_name | string | Name of the tool that produced this result. |
| call_id? | string | Identifier correlating this result with its ToolCall. |
| result | ToolResultData | The execution result. See ToolResultData below. |
The shape of the result field on a ToolResult. Contains the data returned by the tool, a status indicator, an optional error message, and an optional client-only rendering payload.
interface ToolResultData {
  status: "success" | "error";
  data: unknown;        // Sent to the AI model
  message?: string;     // Error message (for status: "error")
  renderData?: unknown; // Client-only — NOT sent to model, used by renderResult
}

| Property | Type | Description |
|---|---|---|
| status | "success" | "error" | Whether the tool executed successfully or encountered an error. |
| data | unknown | The tool's return value. This is the data sent to the AI model as the tool result. |
| message? | string | Error message describing what went wrong. Typically present when status is "error". |
| renderData? | unknown | Client-only data for rendering tool results from history. Model adapters explicitly strip this field before sending to the AI — safe for sensitive client-only data like email addresses or UI state. Used by the renderResult function in glove-react tools. |
Model adapters (Anthropic, OpenAI-compat) explicitly destructure and only send data, status, and message to the API. The renderData field is preserved in the message store for client-side rendering via renderResult but is never sent to the AI model.
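The stripping amounts to a destructure-and-drop. A self-contained sketch (the `toModelPayload` helper and the email example are illustrative, not the adapters' actual code):

```typescript
type ToolResultData = {
  status: "success" | "error";
  data: unknown;
  message?: string;
  renderData?: unknown;
};

// A tool result carrying client-only render data alongside model-visible data.
const result: ToolResultData = {
  status: "success",
  data: { subject: "Quarterly report", sent: true },
  renderData: { recipientEmail: "user@example.com" }, // stays client-side
};

// Sketch of the stripping a model adapter performs before calling the API.
function toModelPayload({ renderData, ...rest }: ToolResultData) {
  return rest; // only status, data, and message reach the model
}
```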
Represents a tracked task in the agent's task list.
| Property | Type | Description |
|---|---|---|
| id | string | Unique identifier for the task. |
| content | string | Description of the task in imperative form (e.g., "Fix the login bug"). |
| activeForm | string | Present-continuous form shown during execution (e.g., "Fixing the login bug"). |
| status | TaskStatus | Current status: "pending", "in_progress", or "completed". |
type TaskStatus = "pending" | "in_progress" | "completed";

type PermissionStatus = "granted" | "denied" | "unset";

| Type | Signature | Description |
|---|---|---|
| NotifySubscribersFunction | <T extends SubscriberEvent['type']>(event_name: T, data: SubscriberEventDataMap[T]) => Promise<void> | Type-safe callback passed to ModelAdapter.prompt for emitting events to subscribers. |
| HandOverFunction | (input: unknown) => Promise<unknown> | Delegation callback passed to tool execution for handing control to another tool or system. |
| ListenerFn | (stack: Slot<unknown>[]) => Promise<void> | Display stack change listener. Called whenever the stack is modified. |
| UnsubscribeFn | () => void | Returned by subscribe() to remove a listener. |
| ResolverFn<RI> | (value: RI) => void | Internal resolver for pushAndWait promises. |
| RejectFn | (reason?: any) => void | Internal rejector for pushAndWait promises. |
Compaction is history-preserving. When triggered, the full conversation is summarized and the summary is appended as a new message with is_compaction: true. No messages are deleted from the store, so frontends can still display the complete history. The model only sees messages from the last compaction point onward, courtesy of splitAtLastCompaction() in Context.getMessages().
| Property | Type | Description |
|---|---|---|
| compaction_instructions | string | Instructions given to the model when summarizing the conversation. Required. |
| max_turns? | number | Maximum turns before compaction is triggered. |
| compaction_context_limit? | number | Maximum token count before compaction is triggered. |
The framework provides a built-in tool for task management. It is automatically registered when the store supports tasks (implements getTasks, addTasks, updateTask).
import { createTaskTool } from "glove-core";
const taskTool = createTaskTool(context);
// taskTool.name === "glove_update_tasks"

function createTaskTool(context: Context): Tool<TaskToolInput>

| Property | Type | Description |
|---|---|---|
| todos | Array<{ content: string; activeForm: string; status: TaskStatus }> | The complete task list. Each call replaces the entire list. |
The tool name is glove_update_tasks. The model calls it to create, update, or complete tasks. Each invocation sends the full current task list, enabling additions, status changes, and removals in a single call.
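Because each call replaces the whole list, one invocation can mark a task done and add a new one at the same time. An example of the input shape the model might send (the task descriptions are illustrative):

```typescript
type TaskStatus = "pending" | "in_progress" | "completed";

// Full task list sent on every glove_update_tasks call: the previous
// list is replaced, so status changes and removals are implicit.
const taskToolInput: { todos: { content: string; activeForm: string; status: TaskStatus }[] } = {
  todos: [
    { content: "Fix the login bug", activeForm: "Fixing the login bug", status: "completed" },
    { content: "Add regression tests", activeForm: "Adding regression tests", status: "in_progress" },
  ],
};
```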
The glove-core/models/providers module exports factory functions for creating model adapters from supported providers.
import { createAdapter, getAvailableProviders } from "glove-core/models/providers";
const model = createAdapter({
provider: "anthropic",
model: "claude-sonnet-4-20250514",
maxTokens: 4096,
stream: true,
});
const available = getAvailableProviders();
// [{ id: "openai", name: "OpenAI", ... }, ...]

function createAdapter(opts: CreateAdapterOptions): ModelAdapter

| Property | Type | Description |
|---|---|---|
| provider | string | Provider ID. One of: openai, anthropic, openrouter, gemini, minimax, kimi, glm, ollama, lmstudio, bedrock. |
| model? | string | Model name to use. Defaults to the provider's default model. |
| apiKey? | string | API key. Defaults to the provider's environment variable. |
| maxTokens? | number | Maximum output tokens. Defaults to the provider's default. |
| stream? | boolean | Whether to use streaming. Defaults to true. |
| baseURL? | string | Override the provider's default base URL (e.g., custom port for local LLMs). |
Returns an array of provider configurations that have API keys available in the current environment.
function getAvailableProviders(): Array<{ id: string; name: string; available: boolean; models: string[]; defaultModel: string }>

| ID | Env Variable | Default Model |
|---|---|---|
| openai | OPENAI_API_KEY | gpt-4.1 |
| anthropic | ANTHROPIC_API_KEY | claude-sonnet-4-20250514 |
| openrouter | OPENROUTER_API_KEY | anthropic/claude-sonnet-4 |
| gemini | GEMINI_API_KEY | gemini-2.5-flash |
| minimax | MINIMAX_API_KEY | MiniMax-M2.5 |
| kimi | MOONSHOT_API_KEY | kimi-k2.5 |
| glm | ZHIPUAI_API_KEY | glm-4-plus |
| ollama | (none) | (user-specified) |
| lmstudio | (none) | (user-specified) |
| bedrock | AWS_ACCESS_KEY_ID | anthropic.claude-3-5-sonnet-20241022-v2:0 |
Each provider has properties: id, name, baseURL, envVar, defaultModel, models[], format (either "openai", "anthropic", or "bedrock"), defaultMaxTokens, and requiresApiKey.
Local providers (ollama and lmstudio) don't require an API key and have no default model — you must pass a model name. Use baseURL to override the default port if needed:
const model = createAdapter({
  provider: "ollama",
  model: "llama3",
  baseURL: "http://localhost:9999/v1", // optional, defaults to :11434
});