AI API (OpenAI-Compatible)
OpenAI-compatible chat completions and model listing endpoints for the AACFlow AI API
AACFlow exposes an OpenAI-compatible AI API at /api/v1/ai. Any client or SDK that
targets the OpenAI API can be pointed at AACFlow by changing the base URL and
swapping the API key.
All AI API requests deduct from your AACFlow credit balance. A small deposit is reserved at the start of each request and reconciled against the actual model cost once the response is complete.
Authentication
Pass your AACFlow API key in either the x-api-key header (native) or the
Authorization: Bearer <key> header (OpenAI SDK compatible).
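As an illustration, a tiny helper (hypothetical, not part of any SDK) that builds either header style:

```python
def auth_headers(api_key: str, style: str = "bearer") -> dict:
    """Build AACFlow auth headers in either supported style."""
    if style == "bearer":
        # OpenAI-SDK-compatible style
        return {"Authorization": f"Bearer {api_key}"}
    if style == "native":
        # Native AACFlow style
        return {"x-api-key": api_key}
    raise ValueError(f"unknown auth style: {style}")
```

Both styles are interchangeable; the Bearer form is what the OpenAI SDKs send by default.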
```
Authorization: Bearer aacf_your_api_key_here
```

Base URL

```
https://www.aacflow.io/api/v1/ai
```

For local development:

```
http://localhost:4000/api/v1/ai
```

Chat Completions
POST /api/v1/ai/chat/completions
Generate a model response for a conversation. Request and response schemas are compatible with the OpenAI Chat Completions API.
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model identifier, e.g. gpt-4o-mini, claude-sonnet-4-6 |
| messages | Message[] | Yes | Conversation history; at least one message is required |
| temperature | number | No | Sampling temperature, 0–2 |
| max_tokens | integer | No | Maximum number of tokens to generate |
| stream | boolean | No | Stream the response as SSE (default false) |
| top_p | number | No | Nucleus sampling |
| stop | string \| string[] | No | Stop sequences |
| tools | Tool[] | No | Function-calling tools |
| tool_choice | string \| object | No | "auto", "none", "required", or a named function |
| response_format | object | No | { "type": "json_object" } or a JSON schema |
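To illustrate the tools and tool_choice fields, here is a sketch of a function-calling request body; the get_weather tool name and schema are made up for the example:

```python
import json

# Hypothetical function-calling request body; the get_weather tool
# name and its parameter schema are illustrative, not part of the API.
request_body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Let the model decide whether to call the tool
    "tool_choice": "auto",
}

payload = json.dumps(request_body)
```

When the model decides to call a tool, the response's finish_reason is "tool_calls" and the assistant message carries the call arguments instead of text content.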
Response headers
| Header | Description |
|---|---|
| X-Request-Id | Unique request identifier for support |
| X-Credits-Used | Credits deducted for this request (USD, 6 decimal places) |
| X-Credits-Remaining | Credit balance after this request |
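A small sketch of pulling billing info out of these headers, assuming a plain dict of response headers (the header values below are illustrative):

```python
def parse_credit_headers(headers: dict) -> dict:
    """Extract AACFlow billing info from response headers.

    HTTP header names are case-insensitive, so normalise before lookup.
    """
    h = {k.lower(): v for k, v in headers.items()}
    return {
        "request_id": h.get("x-request-id"),
        "credits_used": float(h["x-credits-used"]),
        "credits_remaining": float(h["x-credits-remaining"]),
    }

# Illustrative values, not real billing data:
info = parse_credit_headers({
    "X-Request-Id": "req_123",
    "X-Credits-Used": "0.000450",
    "X-Credits-Remaining": "9.999550",
})
```

Logging credits_used per request is a simple way to attribute spend to individual features or users.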
Example — non-streaming
```bash
curl -X POST https://www.aacflow.io/api/v1/ai/chat/completions \
  -H "Authorization: Bearer aacf_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "Summarise the top 3 risks in this contract." }
    ],
    "temperature": 0.2
  }'
```

```python
from openai import OpenAI

client = OpenAI(
    api_key="aacf_your_key",
    base_url="https://www.aacflow.io/api/v1/ai",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

```typescript
import OpenAI from 'openai'

const client = new OpenAI({
  apiKey: 'aacf_your_key',
  baseURL: 'https://www.aacflow.io/api/v1/ai',
})

const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
})
console.log(response.choices[0].message.content)
```

Example response
```json
{
  "id": "chatcmpl-a1b2c3d4e5f6",
  "object": "chat.completion",
  "created": 1720000000,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
```

Streaming
Set `"stream": true` to receive a `text/event-stream` response. Each chunk is a
`data: {...}` SSE line carrying a partial ChatCompletionChunk object, and the
stream is terminated by `data: [DONE]`.
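As a sketch, a minimal parser for this framing; the chunk payloads below are illustrative, not real API output:

```python
import json

def iter_sse_content(lines):
    """Yield content deltas from the raw SSE lines of a streamed completion."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Illustrative chunks, not real API output:
sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
text = "".join(iter_sse_content(sample))  # "Hello"
```

In practice the OpenAI SDKs handle this framing for you; passing `stream=True` to `client.chat.completions.create` returns an iterator of chunk objects.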
```bash
curl -X POST https://www.aacflow.io/api/v1/ai/chat/completions \
  -H "Authorization: Bearer aacf_your_key" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Count to 5"}],"stream":true}'
```

List Models
GET /api/v1/ai/models
Returns the list of model identifiers available through the AI API.
```bash
curl https://www.aacflow.io/api/v1/ai/models \
  -H "Authorization: Bearer aacf_your_key"
```

Error responses
All errors follow the OpenAI error envelope format:
```json
{
  "error": {
    "message": "Human-readable description",
    "type": "invalid_request_error",
    "code": "model_not_found"
  }
}
```

| HTTP status | type | code | Cause |
|---|---|---|---|
| 401 | invalid_request_error | invalid_api_key | Missing or invalid API key |
| 400 | invalid_request_error | — | Malformed request body |
| 402 | insufficient_quota | insufficient_credits | Credit balance too low |
| 404 | invalid_request_error | model_not_found | Model not supported |
| 500 | server_error | — | Internal error |
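One way to surface this envelope client-side is a small wrapper exception; AACFlowAPIError and raise_for_error below are hypothetical helpers, not part of any SDK:

```python
class AACFlowAPIError(Exception):
    """Hypothetical wrapper for the OpenAI-style error envelope."""

    def __init__(self, status: int, body: dict):
        err = body.get("error", {})
        self.status = status
        self.type = err.get("type")
        self.code = err.get("code")
        super().__init__(err.get("message", "unknown error"))

def raise_for_error(status: int, body: dict) -> None:
    """Raise AACFlowAPIError for any 4xx/5xx response body."""
    if status >= 400:
        raise AACFlowAPIError(status, body)

# Demonstrate with the 402 insufficient-credits case from the table:
caught = None
try:
    raise_for_error(402, {"error": {
        "message": "Credit balance too low",
        "type": "insufficient_quota",
        "code": "insufficient_credits",
    }})
except AACFlowAPIError as exc:
    caught = exc
```

Branching on `code` (e.g. retrying 5xx but not 402) tends to be more robust than string-matching on `message`.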
Supported models
AACFlow routes requests to the underlying provider based on the model identifier. Any model listed on the Models page can be used here. The identifier is the same slug used elsewhere in the platform.
Not all models support all features (e.g. tool calling, structured output). Capabilities are documented on the individual model pages.
Using with LangChain / LlamaIndex
Point the OpenAI-compatible integration at the AACFlow base URL:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="claude-sonnet-4-6",
    openai_api_key="aacf_your_key",
    openai_api_base="https://www.aacflow.io/api/v1/ai",
)
```

```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="gpt-4o-mini",
    api_key="aacf_your_key",
    api_base="https://www.aacflow.io/api/v1/ai",
)
```
