---
title: AI Pricing API
description: Public JSON API documentation for current AI model provider pricing.
doc_version: "1.0"
last_updated: "2026-05-07"
---

# AI Pricing API

Read-only HTTP API for current pricing across the providers we track. No
authentication required. Responses are JSON. Rate-limited per IP per
endpoint group (60 req / 60 s on `/v1/prices/*`, 120 req / 60 s on the rest).

**Base URL:** `https://ai-pricing.fyi`

**Get this doc as Markdown:** `curl -H 'Accept: text/markdown' https://ai-pricing.fyi/api`

**OpenAPI schema:** `https://ai-pricing.fyi/openapi.json`

---

## Contents

- [Providers](#providers) — who hosts models
- [Models](#models) — canonical catalog of every model we have data for
- [Prices](#prices) — current per-token / per-call rates
- [Common queries](#common-queries) — recipes for typical questions
- [Sitemap](#sitemap) — related machine-readable resources

---

## Providers

A provider is an inference platform we scrape (Anthropic, OpenAI, Cloudflare
Workers AI, Fireworks AI, Together AI). The same model can be hosted by
multiple providers — each provider's offer points at the same canonical
model row, so all providers' prices appear together on the model detail page.

### `GET /v1/providers`

List every provider we track. Pass `active=true` to exclude providers that
have been deactivated.

| Param | Type | Description |
|---|---|---|
| `active` | boolean | `true` \| `false`. Omit to return all. |
| `limit` | integer | Default 50. |
| `offset` | integer | Default 0. |

```bash
curl https://ai-pricing.fyi/v1/providers
```

```json
{
  "data": [
    {
      "id": 2,
      "slug": "anthropic",
      "name": "Anthropic",
      "website_url": "https://anthropic.com",
      "docs_url": "https://docs.anthropic.com",
      "pricing_url": "https://platform.claude.com/docs/en/about-claude/pricing",
      "icon_url": "https://www.google.com/s2/favicons?domain=anthropic.com&sz=64",
      "active": true,
      "notes": null,
      "created_at": "2026-04-22 19:08:49",
      "updated_at": "2026-04-22 19:08:49"
    }
  ],
  "limit": 50,
  "offset": 0
}
```
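All list endpoints page the same way: request `limit` rows at `offset`, then advance `offset` until a short page comes back. A minimal pagination sketch in Python; the `fetch_page` callable here is a stand-in for your own HTTP call, not part of this API:

```python
def paginate(fetch_page, limit=50):
    """Yield every row from a limit/offset-paginated endpoint.

    fetch_page(limit, offset) must return the parsed JSON body,
    i.e. a dict with a "data" list, like /v1/providers returns.
    """
    offset = 0
    while True:
        body = fetch_page(limit=limit, offset=offset)
        rows = body["data"]
        yield from rows
        if len(rows) < limit:  # short page means no more rows
            break
        offset += limit

# Demo against a fake endpoint holding 5 rows, fetched 2 at a time.
ROWS = [{"id": i} for i in range(5)]

def fake_fetch(limit, offset):
    return {"data": ROWS[offset:offset + limit], "limit": limit, "offset": offset}

print(len(list(paginate(fake_fetch, limit=2))))  # 5
```

In a real client, `fetch_page` would wrap `requests.get("https://ai-pricing.fyi/v1/providers", params={"limit": limit, "offset": offset}).json()` or similar.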

### `GET /v1/providers/:slug`

Detail for one provider, by its slug (e.g. `anthropic`, `openai`,
`cloudflare`, `fireworks`, `together`).

```bash
curl https://ai-pricing.fyi/v1/providers/anthropic
```

---

## Models

One row per canonical model in our catalog. `vendor` is the model's
canonical creator, not necessarily the serving provider. `family` is a
coarse grouping like `opus`, `sonnet`, `gpt`, `kimi`. `status` is one of
`active`, `deprecated` (the provider has announced a sunset date), or
`retired` (no longer offered).

### `GET /v1/models`

List models in the catalog. Combine `vendor` + `family` to drill down by
canonical creator, or use `provider` + `deployment_mode` to find what a
serving provider offers.

| Param | Type | Description |
|---|---|---|
| `vendor` | string | Canonical model creator/vendor slug, not the serving provider. Example: `anthropic` returns Anthropic-created models, including models served through other providers. |
| `provider` | string | Serving provider slug. Example: `cloudflare` returns models Cloudflare can serve, including proxied AI Gateway catalog entries. |
| `deployment_mode` | string | Filter by serving mode such as `hosted`, `proxied`, or `shared_serverless`. |
| `family` | string | `opus`, `sonnet`, `haiku`, `gpt`, `kimi`, `glm`, … |
| `open_weights` | boolean | `true` \| `false` |
| `model_type` | string | Filter by canonical model type. Comma-list for OR. Values: `chat`, `reasoning`, `code`, `embedding`, `multimodal_chat`, `image_generation`, `audio_transcription`, `audio_generation`, `video_generation`, `ocr`, `moderation`. |
| `input_modality` | string | Filter to models accepting this modality. Repeat to AND. Values: `text`, `image`, `audio`, `video`. |
| `output_modality` | string | Filter to models producing this modality. Repeat to AND. Values: `text`, `image`, `audio`, `video`, `vector`. |
| `capability` | string | Filter to models with this capability flag. Repeat to AND. Values: `tool_use`, `json_mode`, `thinking_mode`, `web_search`, `code_execution`, `file_attachments`. |
| `model_group` | string | Filter to canonical models whose `model_group_key` equals the given value (e.g. `meta/llama-3.3-70b-instruct`). Use this to find every per-provider variant of the same underlying model. |
| `modality` | string | **Deprecated.** Alias for `input_modality`. |
| `limit` | integer | Default 50. Pass 1000 to get the full catalog. |
| `offset` | integer | Default 0. |

Response rows include a `model_group_key` field (nullable string). When non-null,
it groups together canonical rows that represent the same underlying model
hosted under different per-provider slugs (e.g. Groq's
`llama-3.3-70b-versatile` and Cloudflare's `llama-3.3-70b-instruct-fp8-fast`
both share the group key `meta/llama-3.3-70b-instruct`).

Rows also include `max_output_tokens` (nullable integer) when the provider
publishes a max-output limit distinct from `context_window`. Only providers
whose pricing or detail pages expose this number populate it; expect `null`
for providers that do not publish one or that we do not scrape for it.

```bash
curl "https://ai-pricing.fyi/v1/models?vendor=anthropic&limit=2"
curl "https://ai-pricing.fyi/v1/models?provider=cloudflare&deployment_mode=proxied&limit=20"
```

```json
{
  "data": [
    {
      "id": 52,
      "canonical_slug": "claude-haiku-3",
      "vendor": "anthropic",
      "family": "haiku",
      "model_name": "claude-haiku-3",
      "display_name": "Claude Haiku 3",
      "status": "active",
      "open_weights": false,
      "context_window": null,
      "max_output_tokens": null,
      "params_billions": null,
      "model_type": "chat",
      "input_modalities": ["text"],
      "output_modalities": ["text"],
      "capabilities": ["tool_use", "json_mode"],
      "model_group_key": "anthropic/claude-haiku-3",
      "created_at": "2026-04-23 01:15:25",
      "updated_at": "2026-04-23 01:15:25"
    }
  ],
  "limit": 2,
  "offset": 0
}
```
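Grouping catalog rows by `model_group_key` client-side is the easiest way to see every per-provider variant of the same underlying model. A sketch with illustrative rows (real ones come from `GET /v1/models?limit=1000`):

```python
from collections import defaultdict

# Illustrative catalog rows; slugs mirror the examples in this doc.
models = [
    {"canonical_slug": "llama-3.3-70b-versatile",
     "model_group_key": "meta/llama-3.3-70b-instruct"},
    {"canonical_slug": "llama-3.3-70b-instruct-fp8-fast",
     "model_group_key": "meta/llama-3.3-70b-instruct"},
    {"canonical_slug": "claude-haiku-3",
     "model_group_key": "anthropic/claude-haiku-3"},
    {"canonical_slug": "one-off-model", "model_group_key": None},
]

groups = defaultdict(list)
for m in models:
    if m["model_group_key"] is not None:  # ungrouped rows stand alone
        groups[m["model_group_key"]].append(m["canonical_slug"])

print(sorted(groups))
```

Rows with a `null` group key are skipped here; they have no per-provider siblings to collect.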

### `GET /v1/models/:key`

Detail for one model. Includes any aliases that resolve to the same
canonical row.

The `:key` path segment accepts either a canonical slug
(`llama-3.3-70b-versatile`) or a model group key
(`meta/llama-3.3-70b-instruct`):

- Given a **canonical slug** whose row has a non-null `model_group_key`,
  the endpoint returns `301 Moved Permanently` with
  `Location: /v1/models/<group_key>`. Clients that follow redirects land
  transparently on the group view.
- Given a **group key**, returns the representative row (the first
  canonical model alphabetically that shares that group key) plus
  aggregate capability data across the group.
- `404` when the value matches neither a canonical slug nor a group key.

```bash
curl https://ai-pricing.fyi/v1/models/claude-opus-4.7
curl -L https://ai-pricing.fyi/v1/models/meta/llama-3.3-70b-instruct
curl "https://ai-pricing.fyi/v1/models/claude-opus-4.7/offers"
```

Cloudflare AI Gateway proxied offers are availability rows. They appear in
`/v1/models/:canonicalSlug/offers` with `pricing_status: "proxied"` and do
not appear in `/v1/prices/current`.

---

## Prices

Current scraped prices, one row per (offer × dimension). One model can
produce many rows: one per provider that hosts it, one per tier (standard /
batch), and one per metric (`input_token` / `cached_input_token` /
`cache_write_5m_token` / `cache_write_1h_token` / `output_token`). Filter
aggressively.

### `GET /v1/prices/current`

The main endpoint. Returns the latest known numeric price for every active
priced offer matching the filters.

| Param | Type | Description |
|---|---|---|
| `provider` | string | Provider slug (`anthropic`, `openai`, `cloudflare`, `fireworks`, `together`). |
| `model` | string | Canonical slug (e.g. `claude-opus-4.7`, `gpt-5.4`, `kimi-k2.5`). Returns rows from all providers hosting this model. |
| `family` | string | `opus`, `sonnet`, `gpt`, `kimi`, … |
| `model_type` | string | Filter by canonical model type. Comma-list for OR. Values: `chat`, `reasoning`, `code`, `embedding`, `multimodal_chat`, `image_generation`, `audio_transcription`, `audio_generation`, `video_generation`, `ocr`, `moderation`. |
| `input_modality` | string | Filter to models accepting this modality. Repeat to AND. Values: `text`, `image`, `audio`, `video`. |
| `output_modality` | string | Filter to models producing this modality. Repeat to AND. Values: `text`, `image`, `audio`, `video`, `vector`. |
| `capability` | string | Filter to models with this capability flag. Repeat to AND. Values: `tool_use`, `json_mode`, `thinking_mode`, `web_search`, `code_execution`, `file_attachments`. |
| `model_group` | string | Filter price rows whose canonical model has the given `model_group_key` (e.g. `meta/llama-3.3-70b-instruct`). This is the canonical way to compare prices for the same underlying model across providers. |
| `metric` | string | `input_token`, `cached_input_token`, `output_token`, `cache_write_5m_token`, `cache_write_1h_token`. |
| `billing_basis` | string | `per_token`, `per_usage`. |
| `billing_tier` | string | `standard`, `batch`, `flex`, `priority`, `turbo`. Matches the suffix on each row's `provider_offer_key`. |
| `region` | string | AWS/GCP region for region-priced offers. |
| `limit` | integer | Default 100. Max useful ~2000. |
| `offset` | integer | Default 0. |

```bash
curl "https://ai-pricing.fyi/v1/prices/current?model=claude-opus-4.7&limit=1"
```

```json
{
  "data": [
    {
      "offer_id": 176,
      "provider_slug": "anthropic",
      "provider_name": "Anthropic",
      "canonical_slug": "claude-opus-4.7",
      "display_name": "Claude Opus 4.7",
      "model_name": "claude-opus-4.7",
      "family": "opus",
      "vendor": "anthropic",
      "open_weights": false,
      "model_group_key": null,
      "provider_offer_key": "anthropic_claude-opus-4.7_standard",
      "offer_category": "model_api",
      "deployment_mode": "shared_serverless",
      "billing_basis": "per_token",
      "metric": "input_token",
      "unit": "per_1m_tokens",
      "currency": "USD",
      "batch_flag": 0,
      "price_numeric": 5,
      "latest_observed_at": "2026-04-24T00:14:27.018Z",
      "region": null
    }
  ]
}
```
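Each row prices a single metric, so estimating a request's cost means combining the `input_token` and `output_token` rows for the same `provider_offer_key`. A back-of-envelope sketch, assuming `unit` is `per_1m_tokens` as in the example above; the output-token rate here is invented for illustration:

```python
def estimate_cost(rows, input_tokens, output_tokens):
    """Estimate USD cost from /v1/prices/current rows for one offer.

    Assumes unit == "per_1m_tokens" on every row.
    """
    by_metric = {r["metric"]: r["price_numeric"] for r in rows}
    return (input_tokens / 1_000_000 * by_metric["input_token"]
            + output_tokens / 1_000_000 * by_metric["output_token"])

rows = [
    {"metric": "input_token", "price_numeric": 5},    # from the example above
    {"metric": "output_token", "price_numeric": 25},  # hypothetical rate
]

# 200k input at $5/1M = $1.00, plus 50k output at $25/1M = $1.25
print(estimate_cost(rows, input_tokens=200_000, output_tokens=50_000))  # 2.25
```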

Response rows include a `model_group_key` field (nullable string) that
matches the column on `/v1/models`. Use it together with the
`model_group` query param to fetch every hosting provider's offer for the
same underlying model:

```bash
curl "https://ai-pricing.fyi/v1/prices/current?model_group=meta/llama-3.3-70b-instruct"
```

Returns all hosting providers' offers for Meta's Llama 3.3 70B Instruct.
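With the group's rows in hand, picking the cheapest host for a given metric is a one-line `min`. Illustrative rows below; the prices are invented, not real quotes:

```python
# Illustrative rows from /v1/prices/current?model_group=... (prices invented).
rows = [
    {"provider_slug": "fireworks", "metric": "input_token", "price_numeric": 0.59},
    {"provider_slug": "cloudflare", "metric": "input_token", "price_numeric": 0.29},
    {"provider_slug": "together", "metric": "input_token", "price_numeric": 0.88},
    {"provider_slug": "fireworks", "metric": "output_token", "price_numeric": 0.79},
]

# Restrict to one metric before taking the minimum, since input and
# output rates are not comparable to each other.
cheapest = min((r for r in rows if r["metric"] == "input_token"),
               key=lambda r: r["price_numeric"])
print(cheapest["provider_slug"])  # cloudflare
```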

### `GET /v1/prices/filters`

Discover what values are available for the filter parameters above. Useful
for building dropdowns.

```bash
curl https://ai-pricing.fyi/v1/prices/filters
```

```json
{
  "providers": ["anthropic", "cloudflare", "fireworks", "openai", "together"],
  "families": ["opus", "sonnet", "haiku", "gpt", "kimi", "..."],
  "model_types": ["chat", "reasoning", "code", "embedding", "..."],
  "input_modalities": ["text", "image", "audio", "video"],
  "output_modalities": ["text", "image", "audio", "video", "vector"],
  "capabilities": ["tool_use", "json_mode", "thinking_mode", "web_search", "code_execution", "file_attachments"],
  "metrics": ["input_token", "cached_input_token", "output_token", "..."],
  "billing_basis": ["per_token"],
  "billing_tiers": ["standard", "batch", "flex", "priority", "turbo"],
  "comparable_classes": ["..."]
}
```

---

## Common queries

### All providers we have data for

```bash
curl https://ai-pricing.fyi/v1/providers
```

### All models for a single provider (active only)

```bash
curl "https://ai-pricing.fyi/v1/models?provider=fireworks&limit=1000"
```

Filter the response by `status === "active"` if you want to hide
deprecated/retired entries.
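A minimal sketch of that client-side filter, with illustrative rows standing in for the real response:

```python
# Illustrative rows from GET /v1/models?provider=fireworks&limit=1000.
models = [
    {"canonical_slug": "model-a", "status": "active"},
    {"canonical_slug": "model-b", "status": "deprecated"},
    {"canonical_slug": "model-c", "status": "retired"},
]

# Keep only rows the provider still offers with no announced sunset.
active = [m for m in models if m["status"] == "active"]
print([m["canonical_slug"] for m in active])  # ['model-a']
```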

### All current prices for one model across every provider hosting it

```bash
curl "https://ai-pricing.fyi/v1/prices/current?model=gpt-oss-120b&limit=100"
```

For shared open-weight models you'll get rows from multiple providers in
one call — the `provider_slug` field tells you which is which.

### Cheapest input-token price for every Anthropic model right now

```bash
curl "https://ai-pricing.fyi/v1/prices/current?provider=anthropic&metric=input_token&billing_tier=standard&limit=200"
```
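The query above returns one standard input-token row per model; finding the overall cheapest is a client-side `min`. Slugs and prices here are illustrative, except `claude-opus-4.7`'s rate, which matches the earlier example:

```python
# Illustrative /v1/prices/current rows (one per model, standard tier,
# input_token metric); only the opus rate comes from the doc's example.
rows = [
    {"canonical_slug": "claude-opus-4.7", "price_numeric": 5},
    {"canonical_slug": "model-x", "price_numeric": 3},
    {"canonical_slug": "model-y", "price_numeric": 1},
]

cheapest = min(rows, key=lambda r: r["price_numeric"])
print(cheapest["canonical_slug"])  # model-y
```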

### Compare batch vs standard pricing for one model

```bash
curl "https://ai-pricing.fyi/v1/prices/current?model=claude-opus-4.7&limit=20"
```

Group the response client-side by `provider_offer_key` — keys ending in
`_standard` vs `_batch` are the two tiers.
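That grouping can be sketched by splitting the tier suffix off each `provider_offer_key`. Prices below are invented for illustration, except the standard input rate from the earlier example:

```python
from collections import defaultdict

# Illustrative rows from the query above; the batch rate is invented.
rows = [
    {"provider_offer_key": "anthropic_claude-opus-4.7_standard",
     "metric": "input_token", "price_numeric": 5},
    {"provider_offer_key": "anthropic_claude-opus-4.7_batch",
     "metric": "input_token", "price_numeric": 2.5},
]

by_tier = defaultdict(dict)
for r in rows:
    # The tier is the last underscore-separated segment of the offer key.
    tier = r["provider_offer_key"].rsplit("_", 1)[-1]
    by_tier[tier][r["metric"]] = r["price_numeric"]

print(by_tier["batch"]["input_token"])  # 2.5
```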

## Sitemap

See the full [sitemap](/sitemap.md) for all public pages and
machine-readable resources.
