Low-code LLM app platform
Dify is a low-code platform for building LLM applications with visual workflows. Connect it to Qwen Cloud's pay-as-you-go API to create chat assistants, agents, and knowledge bases powered by Qwen models.
Quick start
Get running in a few minutes:
Configuration
Basic setup
Configure Dify to use Qwen Cloud:
- Plugin: TONGYI (from Dify marketplace)
- API endpoint: International (set to "Yes")
- Authentication: API key required
- Model selection: Choose from available models in plugin
Free quota and billing:
- First-time users get a free quota (valid for 90 days)
- Enable "Free quota only" to prevent unexpected charges
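Before wiring the key into Dify, you can confirm it works against the international endpoint directly. A minimal standard-library sketch (the model name follows this guide; an invalid key typically surfaces as HTTP 401, the same failure Dify reports as "Invalid API-key provided"):

```python
import json
import urllib.request

# International OpenAI-compatible base URL (the same one used later in this guide).
BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"

def build_chat_request(api_key: str, model: str, prompt: str):
    """Build the URL, headers, and JSON body for one chat completion call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return url, headers, body

def smoke_test(api_key: str, model: str = "qwen3.5-plus") -> str:
    """Fire one request; raises urllib.error.HTTPError on a bad key."""
    url, headers, body = build_chat_request(api_key, model, "Say hi")
    req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                 headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage: print(smoke_test("sk-your-api-key"))
```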
Step-by-step configuration
1. Install TONGYI plugin: Go to Dify marketplace → Models → TONGYI → Install
2. Configure API key: Click profile → Settings → Model Providers → TONGYI → Settings
   - API Key: Your API key
   - Use international endpoint: Yes
3. Enable models: Click models on TONGYI card → Toggle on desired models
For the newest models not yet in the TONGYI plugin, use the OpenAI-API-compatible plugin with this endpoint:
https://dashscope-intl.aliyuncs.com/compatible-mode/v1
Limitations
- Plugin maintenance: TONGYI plugin is maintained by Dify, not Qwen Cloud
- Model availability: Some newest models may require OpenAI-compatible plugin
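When falling back to the OpenAI-API-compatible plugin, the model ID you type into its Model field must match what the endpoint actually serves for your account. A sketch that checks this outside Dify, assuming the endpoint implements the standard OpenAI-compatible /models listing route (if it returns 404, consult the published model list instead):

```python
import json
import urllib.request

BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"

def extract_ids(payload: dict) -> list:
    """Pull model IDs out of an OpenAI-style model-list response."""
    return [m["id"] for m in payload.get("data", [])]

def list_model_ids(api_key: str) -> list:
    """Fetch the model IDs the endpoint reports for this API key."""
    req = urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_ids(json.load(resp))

# Usage: print(list_model_ids("sk-your-api-key"))
```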
Examples
- Chat assistant
- Workflow with LLM node
- Knowledge base
- Vision models
1. Create app: Workspace → Create blank app → Chat assistant
2. Configure model: Select qwen3.5-plus → Enable thinking mode if available
3. Test conversation: Type: "Explain how neural networks work"
Troubleshooting
"Invalid API-key provided" error
Solution:Models with
- Try earlier TONGYI plugin version
- Use API key from default workspace (not sub-workspace)
- Verify "Use international endpoint" is set to Yes
-latest suffix not available
Solution: Use OpenAI-API-compatible plugin with:High token consumption
- Endpoint:
https://dashscope-intl.aliyuncs.com/compatible-mode/v1- API key: Your DashScope key
- Model: Enter model ID manually
Solution:Vision toggle not appearing
- Use appropriate models for tasks
- Configure reasonable context windows
- Clear conversation history regularly
Solution: Ensure you've selected a vision-capable model (qwen3.5-plusorqwen3-vl-plus)
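The history-clearing advice for high token consumption can also be automated inside a workflow: trim the running message list to a budget before each LLM call. A rough sketch (the 4-characters-per-token ratio is a heuristic assumption, not a Qwen tokenizer guarantee):

```python
def trim_history(messages, max_tokens=4000, chars_per_token=4):
    """Keep the most recent messages that fit a rough token budget.
    Always keeps the first message (typically the system prompt)."""
    budget = max_tokens * chars_per_token
    system, rest = messages[:1], messages[1:]
    kept, used = [], 0
    for msg in reversed(rest):  # walk newest-first
        cost = len(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return system + list(reversed(kept))
```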
Advanced features
Thinking mode
For models that support reasoning:
- Select model with thinking support
- Enable thinking mode toggle
- Set to "True" for step-by-step reasoning
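Outside Dify, the same toggle maps to a request parameter on the compatible-mode API. A sketch of the request body (enable_thinking is the switch DashScope documents for Qwen3-series reasoning models; treat the exact field name, and the streaming requirement, as things to verify for your model):

```python
def build_thinking_request(model: str, prompt: str, thinking: bool = True) -> dict:
    """Chat-completions body with the reasoning toggle set.
    Assumption: thinking output generally requires streaming to be enabled."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "enable_thinking": thinking,  # assumption: verify the field name for your model
        "stream": True,
    }
```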
Code execution nodes
Extract reasoning from responses:
- Use regex in code execution nodes
- Separate thinking process from final answer
- Format output as needed
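As a concrete sketch of such a node: Dify code nodes expose a main function whose returned dict becomes the node's output variables. This assumes the model wraps its reasoning in <think>…</think> tags, which is common for reasoning models but worth verifying against your model's actual output:

```python
import re

def main(llm_text: str) -> dict:
    """Split a model response into its <think> reasoning and final answer."""
    match = re.search(r"<think>(.*?)</think>", llm_text, flags=re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", llm_text, flags=re.DOTALL).strip()
    return {"reasoning": reasoning, "answer": answer}
```

Map the LLM node's text output to llm_text in the node UI, then reference reasoning and answer downstream.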
Related resources
- Models: Available models →
- Vision models: Image understanding guide →
- Embeddings: Text embedding models →
- API docs: OpenAI-compatible reference →