Two kinds of API call are supported in the EnvoyResponse schema.
Call a registered Python callable by name, passing arguments as a dict. The orchestrator maintains a registry of named functions: the LLM picks a name and supplies arguments, and the orchestrator dispatches the call and returns the result in the next iteration's context.
Credentials, imports, and implementation details are encapsulated in the registered function — the LLM never sees them.
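The registry-and-dispatch pattern can be sketched as follows. This is an illustration, not the actual orchestrator code: the registry structure, the `register` decorator, and the `add_numbers` function are all hypothetical.

```python
# Sketch of a function registry and PythonCall dispatch (illustrative names).
from typing import Any, Callable, Dict

REGISTRY: Dict[str, Callable[..., Any]] = {}

def register(name: str):
    """Decorator: expose a callable to the LLM under a stable name."""
    def wrap(fn: Callable[..., Any]) -> Callable[..., Any]:
        REGISTRY[name] = fn
        return fn
    return wrap

@register("add_numbers")
def _add(a: int, b: int) -> int:
    # Credentials, imports, and implementation details live inside the
    # registered function; the LLM only ever sees the name and arguments.
    return a + b

def dispatch(fn: str, args: dict) -> Any:
    """Resolve a PythonCall (fn, args) against the registry."""
    if fn not in REGISTRY:
        raise KeyError(f"unknown function: {fn!r}")
    return REGISTRY[fn](**args)

print(dispatch("add_numbers", {"a": 2, "b": 3}))  # 5
```

The LLM-facing surface is just the string name and the args dict, which is what keeps implementation details out of the prompt.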
Call an HTTP endpoint described by an OpenAPI spec. The LLM specifies the endpoint, method, and parameters. The orchestrator executes the request and returns the response.
Each HTTP service has an associated spec URL (see Spec Discovery). The spec is fetched, converted to a call/response schema by a dedicated LLM call, and cached in notes (see Schema Cache).
Credentials are looked up from ~/.netrc by hostname — the same mechanism already used for IMAP.
Both call types are carried as lists in EnvoyResponse (the fields are always present and may be empty):
from typing import List
from pydantic import Field

class PythonCall(EnvoyBaseModel):
    fn: str     # registered function name
    args: dict  # keyword arguments

class HttpCall(EnvoyBaseModel):
    service: str   # key into registered services (matches stub note)
    endpoint: str  # path, e.g. '/get'
    method: str    # GET, POST, etc.
    params: dict   # query params or body

# In EnvoyResponse:
python_calls: List[PythonCall] = Field(default_factory=list, ...)
http_calls: List[HttpCall] = Field(default_factory=list, ...)
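To show how the two lists are consumed, the sketch below mirrors the schema with stdlib dataclasses (so it runs without pydantic or EnvoyBaseModel) and walks both lists in a single pass. The handler bodies and the `httpbin` service name are placeholders.

```python
# Stand-in for the schema above using stdlib dataclasses; the dispatch bodies
# are placeholders for the registry lookup and the HTTP request execution.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PythonCall:
    fn: str
    args: dict = field(default_factory=dict)

@dataclass
class HttpCall:
    service: str
    endpoint: str
    method: str
    params: dict = field(default_factory=dict)

@dataclass
class EnvoyResponse:
    python_calls: List[PythonCall] = field(default_factory=list)
    http_calls: List[HttpCall] = field(default_factory=list)

def run_calls(resp: EnvoyResponse) -> list:
    """Execute every call in the response; results feed the next iteration."""
    results = []
    for call in resp.python_calls:
        results.append(("python", call.fn, call.args))  # registry dispatch here
    for call in resp.http_calls:
        results.append(("http", call.method, call.service + call.endpoint))
    return results

resp = EnvoyResponse(
    python_calls=[PythonCall(fn="add_numbers", args={"a": 1, "b": 2})],
    http_calls=[HttpCall(service="httpbin", endpoint="/get", method="GET")],
)
print(run_calls(resp))
```

Since both fields default to empty lists, a response with no calls needs no special casing: the loop simply produces no results.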