reason
Run an LLM-powered reasoning pass over retrieved memories. Supports streaming via Server-Sent Events.
POST /v1/reason
Request parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| workspace_id | string | Yes | Workspace containing context memories. |
| query | string | Yes | The question or task for the reasoning engine. |
| stream | boolean | No | Stream tokens via SSE. Default `false`. |
| top_k | integer | No | Number of memories to inject into the context. Default `8`. |
| model | string | No | Reasoning model to use. Default `bxe-reason-1`. |
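As a sketch, a request could be assembled with Python's standard library as below. The base URL, workspace id, query text, and bearer-token auth scheme are assumptions for illustration; only the endpoint path and body fields come from the table above.

```python
import json
import urllib.request

# Hypothetical host and credentials; the real base URL and auth
# scheme are not specified in this section.
API_BASE = "https://api.example.com"

payload = {
    "workspace_id": "ws_123",          # required (hypothetical id)
    "query": "Summarize open issues",  # required (hypothetical query)
    "top_k": 8,                        # optional, default 8
    "model": "bxe-reason-1",           # optional, default model
}

req = urllib.request.Request(
    f"{API_BASE}/v1/reason",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",  # placeholder
    },
    method="POST",
)
# resp = urllib.request.urlopen(req)  # not executed in this sketch
```

Omitting `stream` leaves it at the default `false`, so the response arrives as a single JSON body rather than an SSE stream.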
Streaming
When `stream: true`, the response is a Server-Sent Events (SSE) stream. Each `delta` event carries one JSON token fragment, and a final `done` event reports usage:
```
event: delta
data: {"token":"Based"}

event: delta
data: {"token":" on"}

event: delta
data: {"token":" the"}

event: done
data: {"usage":{"prompt_tokens":312,"completion_tokens":87}}
```
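A minimal client-side parser for such a stream could look like the following sketch. It works over the raw event text for clarity; a real client would read lines incrementally from the HTTP response. The function name `parse_sse` is ours, not part of the API.

```python
import json

def parse_sse(stream_text):
    """Assemble delta tokens and capture usage from a reason SSE stream."""
    tokens, usage, event = [], None, None
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            # Remember which event type the following data lines belong to.
            event = line.split(":", 1)[1].strip()
        elif line.startswith("data:"):
            data = json.loads(line.split(":", 1)[1])
            if event == "delta":
                tokens.append(data["token"])
            elif event == "done":
                usage = data["usage"]
    return "".join(tokens), usage

sample = (
    'event: delta\ndata: {"token":"Based"}\n\n'
    'event: delta\ndata: {"token":" on"}\n\n'
    'event: delta\ndata: {"token":" the"}\n\n'
    'event: done\n'
    'data: {"usage":{"prompt_tokens":312,"completion_tokens":87}}\n'
)
text, usage = parse_sse(sample)
print(text)   # -> Based on the
```

Splitting each line on the first `:` only is important, since the JSON payload itself contains colons.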
Each call to `/v1/reason` counts as one reasoning call against your plan's monthly limit, regardless of token count.