Overview
Start here for Vocifer AI hosted inference — OpenAI-compatible HTTPS APIs and a queryable models catalog.
Vocifer AI serves OpenAI-compatible inference: one Bearer token, familiar /v1 paths, and JSON shaped like OpenAI’s Chat Completions and models list. Production traffic is typically routed to a dedicated host such as https://inf.vocifer.com (your onboarding email may assign a regional hostname).
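To make the conventions concrete, here is a minimal sketch of the auth header and /v1 URL layout described above. The host is the example from this page; the API key is a placeholder confirmed at workspace provisioning.

```python
# Sketch only: BASE_URL is the example host from the docs above — your
# onboarding email may assign a different regional hostname.
BASE_URL = "https://inf.vocifer.com"
API_KEY = "YOUR_API_KEY"  # placeholder; load from an env var in practice


def request_headers(api_key: str) -> dict:
    """OpenAI-style Bearer-token headers, as the docs describe."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }


def endpoint(path: str) -> str:
    """Join the base URL with a /v1-prefixed path."""
    return f"{BASE_URL}/v1/{path.lstrip('/')}"


# e.g. GET the models catalog with these pieces:
print(endpoint("models"))  # → https://inf.vocifer.com/v1/models
```

Any OpenAI-compatible HTTP client or SDK can be pointed at the same base URL with the same Bearer token.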
Use the short guides below in order, or jump to what you need:
- Getting started — base URL, environments, and the /v1 prefix
- Authentication — API keys, optional org header, errors
- Models catalog — GET /v1/models and what the response contains
- Chat completions — non-streaming and streaming curl examples
- SDKs & clients — Python, TypeScript, Go, and third-party stacks
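The chat-completions guide linked above covers the request body in detail; as a hedged preview, the OpenAI-shaped payload looks like the sketch below. The model id "vocifer-small" is hypothetical — list real ids via GET /v1/models.

```python
import json


def chat_request(model: str, user_message: str, stream: bool = False) -> dict:
    """Build an OpenAI-shaped Chat Completions request body.

    Sketch only: field names follow OpenAI's Chat Completions schema,
    which this provider's API is described as compatible with.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        # stream=True requests server-sent events instead of one JSON response
        "stream": stream,
    }


# "vocifer-small" is a placeholder model id, not a confirmed SKU.
body = chat_request("vocifer-small", "Hello!", stream=True)
print(json.dumps(body, indent=2))
```

POST this body to the /v1/chat/completions path with the Bearer-token headers described in the authentication guide.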
Exact SKUs, hosts, and headers are confirmed when your workspace is provisioned. For API access, use Request API access on the main site.