AI models for text, vision, speech, and image & video generation.
Developer quickstart
Make your first API request in minutes. Seamlessly integrate with any OpenAI SDK or client.
Get started →

Models
Start with qwen3.6-plus for complex reasoning and coding. Choose qwen3.5-flash for speed and cost efficiency. All models share the same API — just change the model parameter.

Browse all models →
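Because every model sits behind the same chat-completions interface, switching models is a one-field change in the request. A minimal stdlib sketch, assuming the OpenAI-compatible endpoint `https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions` and an API key in the `DASHSCOPE_API_KEY` environment variable (both are assumptions here; check your console for the exact values for your account and region):

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against your account's region.
API_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"


def build_request(model: str, prompt: str) -> dict:
    """Build a chat-completions payload; only the `model` field changes per model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(model: str, prompt: str) -> str:
    """Send the request and return the reply text; requires DASHSCOPE_API_KEY."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(model, prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


# chat("qwen3.6-plus", "...") and chat("qwen3.5-flash", "...") differ only in the
# model string -- the request shape and response parsing are identical.
```

The same pattern works through any OpenAI SDK by pointing its `base_url` at the compatible endpoint instead of building requests by hand.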
qwen3.6-plus
Complex reasoning, coding, agents — 1M context
qwen3.5-flash
Fast and cost-effective — 1M context
qwen3-max
Complex reasoning and coding
Start building
Read and generate text
Prompt models to generate text, summarize, translate, or write code
Understand images and video
Analyze images, extract text from screenshots, or reproduce designs from mockups
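For image understanding, the chat-completions message `content` becomes a list that mixes image and text parts. A sketch of the payload shape, assuming the OpenAI-compatible multimodal message format (`image_url` parts); the model name and question are illustrative:

```python
def build_vision_request(model: str, image_url: str, question: str) -> dict:
    """Multimodal user message: an image part followed by a text part."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    # The image can be a public URL or a base64 data URI.
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": question},
                ],
            }
        ],
    }


# Example: ask a (hypothetical) vision-capable model to read a screenshot.
payload = build_vision_request(
    "qwen3.6-plus",
    "https://example.com/screenshot.png",
    "Extract all visible text from this screenshot.",
)
```

The payload is then POSTed to the same chat-completions endpoint as a text-only request.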
Generate images
Create and edit images from text prompts with Wan and Flux models
Generate videos
Animate images into video clips or generate videos from text descriptions
Synthesize speech
Convert text to natural speech with built-in voices, voice cloning, or voice design
Build agentic applications
Connect models to external tools and APIs with function calling
Tackle complex tasks with thinking
Use reasoning models to solve multi-step math, logic, and coding problems
Get structured data from models
Extract JSON that conforms to a schema from any model response
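A common pattern is to request JSON output and then validate the reply against the fields you need before trusting it. A sketch assuming the model supports a `response_format` of `{"type": "json_object"}` (support varies by model; check the model docs), with the schema enforced locally:

```python
import json


def build_structured_request(model: str, prompt: str) -> dict:
    """Ask the model for a JSON object; keys here are illustrative."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "Reply with a JSON object with keys 'name' and 'year'.",
            },
            {"role": "user", "content": prompt},
        ],
        "response_format": {"type": "json_object"},
    }


def parse_reply(reply_text: str, required_keys=("name", "year")) -> dict:
    """Parse the model's JSON reply and check it has the fields we asked for."""
    data = json.loads(reply_text)
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"Missing keys: {missing}")
    return data
```

For stricter guarantees, some models also accept a full JSON schema in `response_format`; local validation like `parse_reply` is still a useful safety net either way.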
Pricing | API Reference | Free Quota