
Anthropic compatible

Use Anthropic SDKs

Call Qwen models through Anthropic-compatible APIs. To migrate from Anthropic, update these parameters:
  • ANTHROPIC_API_KEY (or ANTHROPIC_AUTH_TOKEN): Your API key.
  • ANTHROPIC_BASE_URL: https://dashscope-intl.aliyuncs.com/apps/anthropic.
  • Model name (model): A supported Qwen model such as qwen3.6-plus. See Supported models.

Quick integration

import anthropic
import os

client = anthropic.Anthropic(
  api_key=os.getenv("ANTHROPIC_API_KEY"),
  base_url=os.getenv("ANTHROPIC_BASE_URL"),
)
# Set ANTHROPIC_API_KEY and ANTHROPIC_BASE_URL. See Compatibility details for parameter support.
message = client.messages.create(
  model="qwen3.6-plus",   # A supported Qwen model; see Supported models
  max_tokens=1024,
  # Thinking mode is supported by some models only (see Supported models).
  thinking={
    "type": "enabled",
    "budget_tokens": 1024
  },
  # Streaming output
  stream=True,
  messages=[
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Who are you?"
        }
      ]
    }
  ]
)
print("=== Thinking Process ===")
first_text = True
for chunk in message:
  if chunk.type == "content_block_delta":
    if hasattr(chunk.delta, 'thinking'):
      print(chunk.delta.thinking, end="", flush=True)
    elif hasattr(chunk.delta, 'text'):
      if first_text:
        print("\n\n=== Answer ===")
        first_text = False
      print(chunk.delta.text, end="", flush=True)

Supported models

Supported Qwen models:
Series | Model name (model)
Qwen-Max (some support thinking) | qwen3-max, qwen3-max-2026-01-23 (supports thinking mode), qwen3-max-preview (supports thinking mode)
Qwen-Plus | qwen3.6-plus, qwen3.6-plus-2026-04-02, qwen3.5-plus, qwen3.5-plus-2026-02-15, qwen-plus, qwen-plus-latest, qwen-plus-2025-09-11
Qwen-Flash | qwen-flash, qwen-flash-2025-07-28
Qwen-Turbo | qwen-turbo, qwen-turbo-latest
Qwen-Coder (thinking not supported) | qwen3-coder-next, qwen3-coder-plus, qwen3-coder-plus-2025-09-23, qwen3-coder-flash
Qwen-VL (thinking not supported) | qwen3-vl-plus, qwen3-vl-flash, qwen-vl-max, qwen-vl-plus
For model parameters and billing, see Models.

Configure environment variables

  1. Sign in to Qwen Cloud.
  2. Set these environment variables:
    • ANTHROPIC_BASE_URL: https://dashscope-intl.aliyuncs.com/apps/anthropic.
    • ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN: Your Qwen Cloud API key.
Either ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN works; set only one. This guide uses ANTHROPIC_API_KEY.
  • macOS
  • Windows
  1. Check your default shell type.
echo $SHELL
  2. Set the environment variables for your shell (the commands below assume zsh):
# Replace YOUR_DASHSCOPE_API_KEY with your Qwen Cloud API Key.
echo 'export ANTHROPIC_BASE_URL="https://dashscope-intl.aliyuncs.com/apps/anthropic"' >> ~/.zshrc
echo 'export ANTHROPIC_API_KEY="YOUR_DASHSCOPE_API_KEY"' >> ~/.zshrc
  3. Apply the environment variables.
source ~/.zshrc
  4. Open a new terminal and verify the environment variables.
echo $ANTHROPIC_BASE_URL
echo $ANTHROPIC_API_KEY
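You can also verify the configuration from Python before constructing a client. The helper below is a convenience sketch, not part of the SDK: it reports which required variables are unset or empty.

```python
import os

REQUIRED_VARS = ("ANTHROPIC_BASE_URL", "ANTHROPIC_API_KEY")

def missing_anthropic_env(environ=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]

if __name__ == "__main__":
    missing = missing_anthropic_env()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("Environment configured.")
```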

Call the API

  • curl
  • Python
  • TypeScript
curl -X POST "https://dashscope-intl.aliyuncs.com/apps/anthropic/v1/messages" \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${ANTHROPIC_API_KEY}" \
  -d '{
    "model": "qwen3.6-plus",
    "max_tokens": 1024,
    "stream": true,
    "thinking": {
      "type": "enabled",
      "budget_tokens": 1024
    },
    "system": "You are a helpful assistant",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Who are you?"
                }
            ]
        }
    ]
}'
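The same request can be issued from any HTTP client without the Anthropic SDK. The sketch below only builds the headers and JSON body from the curl example above; actually sending it to the endpoint requires a valid API key.

```python
import json
import os

ENDPOINT = "https://dashscope-intl.aliyuncs.com/apps/anthropic/v1/messages"

headers = {
    "Content-Type": "application/json",
    "x-api-key": os.getenv("ANTHROPIC_API_KEY", ""),
}

body = json.dumps({
    "model": "qwen3.6-plus",
    "max_tokens": 1024,
    "stream": True,
    "thinking": {"type": "enabled", "budget_tokens": 1024},
    "system": "You are a helpful assistant",
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Who are you?"}]}
    ],
})
# POST `body` to ENDPOINT with the headers above using any HTTP client.
```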

Compatibility details

HTTP header

Field | Supported
x-api-key | Supported
Authorization: Bearer | Supported
anthropic-beta / anthropic-version | Not supported

Basic fields

Field | Supported | Description | Example
model | Supported | Model name. See Supported models. | qwen-plus
max_tokens | Supported | Maximum tokens to generate. | 1024
container | Not supported | - | -
mcp_servers | Not supported | - | -
metadata | Not supported | - | -
service_tier | Not supported | - | -
stop_sequences | Supported | Custom stop sequences for generation. | ["}"]
stream | Supported | Streaming output. | True
system | Supported | System prompt. | You are a helpful assistant
temperature | Supported | Controls text diversity. | 1.0
thinking | Supported | Enables reasoning before responding. Supported by some models only (see Supported models). | {"type": "enabled", "budget_tokens": 1024}
top_k | Supported | Sampling candidate set size. | 10
top_p | Supported | Nucleus sampling probability threshold. Controls text diversity. | 0.1
Set only one of temperature or top_p. Both control text diversity. See Text generation model overview.
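For example, the two request sketches below each set exactly one sampling control (the values are illustrative):

```python
# Option A: control diversity with temperature only.
sampling_a = {"model": "qwen3.6-plus", "max_tokens": 1024, "temperature": 0.7}

# Option B: control diversity with top_p only.
sampling_b = {"model": "qwen3.6-plus", "max_tokens": 1024, "top_p": 0.9}

# Do not combine temperature and top_p in one request.
```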

Tool fields

tools

Field | Supported
name | Supported
input_schema | Supported
description | Supported
cache_control | Supported

tool_choice

Value | Supported
none | Supported
auto | Supported
any | Supported
tool | Supported
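Putting the tool fields together, a request that registers one tool and lets the model decide when to call it could look like the sketch below. The get_weather tool is hypothetical; pass the dict as keyword arguments to client.messages.create.

```python
# Hypothetical tool definition using the supported fields:
# name, description, input_schema (and optionally cache_control).
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"],
    },
}

request_kwargs = {
    "model": "qwen3.6-plus",
    "max_tokens": 1024,
    "tools": [get_weather_tool],
    # Supported tool_choice values: none, auto, any, tool.
    "tool_choice": {"type": "auto"},
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}
# response = client.messages.create(**request_kwargs)
```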

Message fields

Field | Type | Subfield | Supported | Description
content | string | - | Supported | Text content.
content | array, type="text" | text | Supported | Text block content.
content | array, type="text" | cache_control | Supported | Cache control for this text block.
content | array, type="text" | citations | Not supported | -
content | array, type="image" | source | Supported | Image source (base64 or URL).
content | array, type="video" | - | Not supported | -
content | array, type="document" | - | Not supported | -
content | array, type="search_result" | - | Not supported | -
content | array, type="thinking" | - | Not supported | -
content | array, type="redacted_thinking" | - | Not supported | -
content | array, type="tool_use" | id | Supported | Tool call identifier.
content | array, type="tool_use" | input | Supported | Tool parameters.
content | array, type="tool_use" | name | Supported | Tool name.
content | array, type="tool_use" | cache_control | Supported | Cache control for this tool call.
content | array, type="tool_result" | tool_use_id | Supported | Corresponding tool_use ID.
content | array, type="tool_result" | content | Supported | Tool result (string or JSON string).
content | array, type="tool_result" | cache_control | Supported | Cache control for this tool result.
content | array, type="tool_result" | is_error | Not supported | -
content | array, type="server_tool_use" | - | Not supported | -
content | array, type="web_search_tool_result" | - | Not supported | -
content | array, type="code_execution_tool_result" | - | Not supported | -
content | array, type="mcp_tool_use" | - | Not supported | -
content | array, type="mcp_tool_result" | - | Not supported | -
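For example, when the model replies with a tool_use block, the next user message carries a matching tool_result block. The IDs and values below are illustrative.

```python
import json

# Illustrative tool_use block as it might appear in an assistant reply.
assistant_turn = {
    "role": "assistant",
    "content": [
        {
            "type": "tool_use",
            "id": "toolu_example_01",  # illustrative ID
            "name": "get_weather",
            "input": {"city": "Paris"},
        }
    ],
}

# The tool result goes back in the next user message; content may be a
# string or a JSON string (is_error is not supported).
tool_result_turn = {
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": "toolu_example_01",
            "content": json.dumps({"temperature_c": 18, "condition": "cloudy"}),
        }
    ],
}
```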