Design notes for letting the LLM make external API calls as part of task execution. Not yet implemented.
• Types — Python (registered callable) and HTTP/REST (OpenAPI-described service)
• LLM Flow — iterative call sequence; 3 LLM calls on first use of a service, 1 call once its schema is cached
• Spec Discovery — how the OpenAPI spec URL is found (explicit → infer from /docs → guess common paths → fail)
• Schema Cache — HEAD/ETag freshness check; cached as YAML in service stub note
• Registration — CONTENTS entry + stub note; first integration target is httpbin
• Credentials via ~/.netrc by hostname — no credentials in schema or LLM context
• Pagination deferred — to be added later; when it is, the LLM will specify how many results it needs
• Schema stored as YAML for human readability
• LLM can flag a stale/wrong cached schema, triggering re-generation
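The two tool types in the first bullet could be represented with a small registry; a minimal sketch, assuming hypothetical names (`Tool`, `register_python_tool`, `register_http_tool`, `TOOLS`) and a placeholder spec URL for httpbin that are not fixed by these notes:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Tool:
    name: str
    kind: str                        # "python" (registered callable) or "http" (OpenAPI service)
    func: Optional[Callable] = None  # set only for python tools
    spec_url: Optional[str] = None   # OpenAPI spec URL, set only for http tools

TOOLS: dict[str, Tool] = {}

def register_python_tool(name: str, func: Callable) -> Tool:
    tool = Tool(name=name, kind="python", func=func)
    TOOLS[name] = tool
    return tool

def register_http_tool(name: str, spec_url: str) -> Tool:
    tool = Tool(name=name, kind="http", spec_url=spec_url)
    TOOLS[name] = tool
    return tool

# httpbin is the first integration target named in the notes;
# the spec URL here is a placeholder, not a confirmed endpoint.
register_http_tool("httpbin", "https://httpbin.org/spec.json")
register_python_tool("echo", lambda s: s)
```

The registration bullet additionally requires a CONTENTS entry and a stub note per service, which this sketch does not cover.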
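The spec-discovery fallback chain (explicit → infer from /docs → guess common paths → fail) could be sketched as an ordered candidate list; the function name and the set of "common paths" below are assumptions, not part of the design:

```python
from typing import Optional
from urllib.parse import urljoin

# Assumed list of well-known spec locations; the notes do not fix these.
COMMON_SPEC_PATHS = ["/openapi.json", "/swagger.json", "/spec.json", "/api/openapi.json"]

def candidate_spec_urls(base_url: str, explicit_url: Optional[str] = None) -> list[str]:
    """Ordered candidates for the OpenAPI spec URL: an explicit URL if given,
    then the /docs page (to infer a spec link from it), then common paths.
    The caller tries each in order; if all fail, discovery fails."""
    candidates: list[str] = []
    if explicit_url:
        candidates.append(explicit_url)
    candidates.append(urljoin(base_url, "/docs"))
    candidates += [urljoin(base_url, path) for path in COMMON_SPEC_PATHS]
    return candidates
```

Keeping discovery as a pure "produce candidates" step leaves the actual fetching (and its error handling) to the caller.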
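The HEAD/ETag freshness check from the schema-cache bullet might look like the following; `schema_is_fresh` and the injectable `opener` parameter are illustrative choices, and a missing ETag is treated conservatively as stale:

```python
import urllib.request

def schema_is_fresh(spec_url: str, cached_etag: str, opener=None) -> bool:
    """Issue a HEAD request to the spec URL and compare the live ETag with the
    one stored alongside the cached YAML schema. Returns False when the server
    sends no ETag, which forces a refetch rather than trusting a stale cache."""
    request = urllib.request.Request(spec_url, method="HEAD")
    open_fn = opener or urllib.request.urlopen  # opener is injectable for testing
    with open_fn(request) as response:
        live_etag = response.headers.get("ETag")
    return live_etag is not None and live_etag == cached_etag
```

On a False result the schema would be refetched and the cached YAML in the service stub note regenerated.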
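The ~/.netrc lookup by hostname could use the standard-library `netrc` module; a minimal sketch, with `credentials_for` as a hypothetical name and the credentials injected only at request time, never into the schema or LLM context:

```python
import netrc
from typing import Optional
from urllib.parse import urlparse

def credentials_for(url: str, netrc_path: Optional[str] = None) -> Optional[tuple]:
    """Return (login, password) for the URL's hostname from ~/.netrc
    (or an explicit netrc_path), or None when no entry or no file exists."""
    host = urlparse(url).hostname
    try:
        auth = netrc.netrc(netrc_path).authenticators(host)
    except FileNotFoundError:
        return None
    if auth is None:
        return None
    login, _account, password = auth
    return login, password
```

Because the lookup happens at call time, the schema cache and the prompt sent to the LLM stay credential-free, matching the bullet above.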