reason

Run an LLM-powered reasoning pass over retrieved memories. Supports streaming via Server-Sent Events (SSE).

POST /v1/reason

Request parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `workspace_id` | string | Yes | Workspace containing context memories. |
| `query` | string | Yes | The question or task for the reasoning engine. |
| `stream` | boolean | No | Stream tokens via SSE. Default `false`. |
| `top_k` | integer | No | Number of memories to inject into the context. Default `8`. |
| `model` | string | No | Reasoning model. Default `bxe-reason-1`. |
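A minimal non-streaming call can be sketched as follows. The base URL and bearer-token `Authorization` scheme are assumptions for illustration, not part of this reference:

```python
import json
import urllib.request

API_BASE = "https://api.example.com"  # assumed base URL; substitute your deployment's host
API_KEY = "YOUR_API_KEY"              # assumed bearer-token auth; check your account settings


def build_reason_payload(workspace_id: str, query: str, *, stream: bool = False,
                         top_k: int = 8, model: str = "bxe-reason-1") -> dict:
    """Assemble the request body for POST /v1/reason using the documented defaults."""
    return {
        "workspace_id": workspace_id,
        "query": query,
        "stream": stream,
        "top_k": top_k,
        "model": model,
    }


def reason(workspace_id: str, query: str, **kwargs) -> dict:
    """POST /v1/reason with stream disabled and return the parsed JSON body."""
    payload = build_reason_payload(workspace_id, query, **kwargs)
    req = urllib.request.Request(
        f"{API_BASE}/v1/reason",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Omitting `stream`, `top_k`, and `model` falls back to the documented defaults, so most calls only need `workspace_id` and `query`.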

Streaming

When `stream: true`, the response is an SSE stream. Tokens arrive as `delta` events, and a final `done` event reports usage:

event: delta
data: {"token":"Based"}

event: delta
data: {"token":" on"}

event: delta
data: {"token":" the"}

event: done
data: {"usage":{"prompt_tokens":312,"completion_tokens":87}}

Each request to reason counts as one reasoning call against your plan's monthly limit, regardless of token count.